Updates from: 04/10/2021 03:08:05
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Active Directory Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/active-directory-technical-profile.md
Azure Active Directory B2C (Azure AD B2C) provides support for the Azure Active
The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly `Web.TPEngine.Providers.AzureActiveDirectoryProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null`.
-Azure AD technical profiles in the [custom policy starter pack](custom-policy-get-started.md#custom-policy-starter-pack) include the **AAD-Common** technical profile. The Azure AD technical profiles don't specify the protocol because the protocol is configured in the **AAD-Common** technical profile:
+Azure AD technical profiles in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) include the **AAD-Common** technical profile. The Azure AD technical profiles don't specify the protocol because the protocol is configured in the **AAD-Common** technical profile:
- **AAD-UserReadUsingAlternativeSecurityId** and **AAD-UserReadUsingAlternativeSecurityId-NoError** - Look up a social account in the directory.
- **AAD-UserWriteUsingAlternativeSecurityId** - Create a new social account.
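For orientation, a minimal sketch of how the shared **AAD-Common** technical profile can declare this proprietary protocol is shown below. The handler string comes from the paragraph above; the surrounding elements are illustrative and omit the metadata and cryptographic keys a real starter-pack profile also carries:

```xml
<TechnicalProfile Id="AAD-Common">
  <DisplayName>Azure Active Directory</DisplayName>
  <!-- Proprietary protocol with the Azure AD provider handler named in the article -->
  <Protocol Name="Proprietary"
            Handler="Web.TPEngine.Providers.AzureActiveDirectoryProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <!-- Profiles such as AAD-UserWriteUsingAlternativeSecurityId include this profile and add only their own claims -->
</TechnicalProfile>
```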
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-identity-provider.md
On the sign-up or sign-in page, Azure AD B2C presents a list of external identit
![Mobile sign-in example with a social account (Facebook)](media/add-identity-provider/external-idp.png)
-You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](custom-policy-get-started.md).
+You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](user-flow-overview.md).
## Select an identity provider
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-change-policy.md
In Azure Active Directory B2C (Azure AD B2C), you can enable users who are signe
## Prerequisites
-* Complete the steps in [Get started with custom policies in Active Directory B2C](custom-policy-get-started.md).
+* Complete the steps in [Get started with custom policies in Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
* If you haven't already done so, [register a web application in Azure Active Directory B2C](tutorial-register-applications.md).

## Add the elements
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-reset-policy.md
To enable self-service password reset for the sign-up or sign-in user flow:
::: zone pivot="b2c-custom-policy"
-The following sections describe how to add a self-service password experience to a custom policy. The sample is based on the policy files included in the [custom policy starter pack](./custom-policy-get-started.md).
+The following sections describe how to add a self-service password experience to a custom policy. The sample is based on the policy files included in the [custom policy starter pack](./tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
> [!TIP]
> You can find a complete sample of the "sign-up or sign-in with password reset" policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/embedded-password-reset).
To let users of your application reset their password, you create a password res
### Create a password reset policy
-Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md).
+Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-b2c Add Profile Editing Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-profile-editing-policy.md
If you want to enable users to edit their profile in your application, you use a
## Create a profile editing policy
-Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md).
+Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
## Create a sign-up and sign-in policy
-Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md).
+Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies including: sign-up and sign-in, password reset, and profile editing policy. For more information, see [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 03/15/2021 Last updated : 04/09/2021
+zone_pivot_groups: b2c-policy-type
# Custom email verification with Mailjet
-Use custom email in Azure Active Directory B2C (Azure AD B2C) to send customized email to users that sign up to use your applications. By using [DisplayControls](display-controls.md) (currently in preview) and the third-party email provider Mailjet, you can use your own email template and *From:* address and subject, as well as support localization and custom one-time password (OTP) settings.
+
+Use custom email in Azure Active Directory B2C (Azure AD B2C) to send customized email to users that sign up to use your applications. By using the third-party email provider Mailjet, you can use your own email template and *From:* address and subject, as well as support localization and custom one-time password (OTP) settings.
+Custom email verification requires the use of a third-party email provider like [Mailjet](https://Mailjet.com), [SendGrid](./custom-email-sendgrid.md), or [SparkPost](https://sparkpost.com), a custom REST API, or any HTTP-based email provider (including your own). This article describes setting up a solution that uses Mailjet.
You can find an example of a custom email verification policy on GitHub:
- [Custom email verification - DisplayControls](https://github.com/azure-ad-b2c/samples/tree/master/policies/custom-email-verifcation-displaycontrol)
- For information about using a custom REST API or any HTTP-based SMTP email provider, see [Define a RESTful technical profile in an Azure AD B2C custom policy](restful-technical-profile.md).
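Because the Mailjet sample builds its verification experience on [DisplayControls](display-controls.md), a hedged sketch of a verification display control is shown below. The element names follow the DisplayControls schema, but the referenced technical profile IDs (`GenerateOtp`, `SendOtp`, `VerifyOtp`) are placeholders for the OTP and Mailjet REST API profiles a real policy would define:

```xml
<DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
  <DisplayClaims>
    <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
    <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
  </DisplayClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="email" />
  </OutputClaims>
  <Actions>
    <!-- Generates a one-time password and calls the email provider to send it -->
    <Action Id="SendCode">
      <ValidationClaimsExchange>
        <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="GenerateOtp" />
        <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtp" />
      </ValidationClaimsExchange>
    </Action>
    <!-- Checks the code the user typed against the issued one-time password -->
    <Action Id="VerifyCode">
      <ValidationClaimsExchange>
        <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="VerifyOtp" />
      </ValidationClaimsExchange>
    </Action>
  </Actions>
</DisplayControl>
```

The same pattern applies to the SendGrid variant described next; only the REST technical profile that actually sends the message changes.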
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Previously updated : 03/15/2021 Last updated : 04/09/2021
+zone_pivot_groups: b2c-policy-type
# Custom email verification with SendGrid
-Use custom email in Azure Active Directory B2C (Azure AD B2C) to send customized email to users that sign up to use your applications. By using [DisplayControls](display-controls.md) (currently in preview) and the third-party email provider SendGrid, you can use your own email template and *From:* address and subject, as well as support localization and custom one-time password (OTP) settings.
+
+Use custom email in Azure Active Directory B2C (Azure AD B2C) to send customized email to users that sign up to use your applications. By using the third-party email provider SendGrid, you can use your own email template and *From:* address and subject, as well as support localization and custom one-time password (OTP) settings.
+Custom email verification requires the use of a third-party email provider like [SendGrid](https://sendgrid.com), [Mailjet](https://Mailjet.com), or [SparkPost](https://sparkpost.com), a custom REST API, or any HTTP-based email provider (including your own). This article describes setting up a solution that uses SendGrid.
You can find an example of a custom email verification policy on GitHub:
- [Custom email verification - DisplayControls](https://github.com/azure-ad-b2c/samples/tree/master/policies/custom-email-verifcation-displaycontrol)
- For information about using a custom REST API or any HTTP-based SMTP email provider, see [Define a RESTful technical profile in an Azure AD B2C custom policy](restful-technical-profile.md).
active-directory-b2c Custom Policy Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-get-started.md
-Title: Get started with custom policies
-description: Learn how to get started with custom policies in Azure Active Directory B2C.
-Previously updated : 02/28/2020
-# Get started with custom policies in Azure Active Directory B2C
--
-[Custom policies](custom-policy-overview.md) are configuration files that define the behavior of your Azure Active Directory B2C (Azure AD B2C) tenant. In this article, you create a custom policy that supports local account sign-up or sign-in by using an email address and password. You also prepare your environment for adding identity providers.
-
-## Prerequisites
-- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
-- [Register your application](tutorial-register-applications.md) in the tenant that you created so that it can communicate with Azure AD B2C.
-- Complete the steps in [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) to configure a Facebook application. Although a Facebook application is not required for using custom policies, it's used in this walkthrough to demonstrate enabling social login in a custom policy.
-> [!TIP]
-> This article explains how to set up your tenant manually. You can automate the entire process from this article. Automating will deploy the Azure AD B2C [SocialAndLocalAccountsWithMFA starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack), which will provide Sign Up and Sign In, Password Reset and Profile Edit journeys. To automate the walkthrough below, visit the [IEF Setup App](https://aka.ms/iefsetup) and follow the instructions.
--
-## Add signing and encryption keys
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. On the overview page, under **Policies**, select **Identity Experience Framework**.
-
-### Create the signing key
-
-1. Select **Policy Keys** and then select **Add**.
-1. For **Options**, choose `Generate`.
-1. In **Name**, enter `TokenSigningKeyContainer`. The prefix `B2C_1A_` might be added automatically.
-1. For **Key type**, select **RSA**.
-1. For **Key usage**, select **Signature**.
-1. Select **Create**.
-
-### Create the encryption key
-
-1. Select **Policy Keys** and then select **Add**.
-1. For **Options**, choose `Generate`.
-1. In **Name**, enter `TokenEncryptionKeyContainer`. The prefix `B2C_1A_` might be added automatically.
-1. For **Key type**, select **RSA**.
-1. For **Key usage**, select **Encryption**.
-1. Select **Create**.
-
-### Create the Facebook key
-
-Add your Facebook application's [App Secret](identity-provider-facebook.md) as a policy key. You can use the App Secret of the application you created as part of this article's prerequisites.
-
-1. Select **Policy Keys** and then select **Add**.
-1. For **Options**, choose `Manual`.
-1. For **Name**, enter `FacebookSecret`. The prefix `B2C_1A_` might be added automatically.
-1. In **Secret**, enter your Facebook application's *App Secret* from developers.facebook.com. This value is the secret, not the application ID.
-1. For **Key usage**, select **Signature**.
-1. Select **Create**.
-
-## Register Identity Experience Framework applications
-
-Azure AD B2C requires you to register two applications that it uses to sign up and sign in users with local accounts: *IdentityExperienceFramework*, a web API, and *ProxyIdentityExperienceFramework*, a native app with delegated permission to the IdentityExperienceFramework app. Your users can sign up with an email address or username and a password to access your tenant-registered applications, which creates a "local account." Local accounts exist only in your Azure AD B2C tenant.
-
-You need to register these two applications in your Azure AD B2C tenant only once.
-
-### Register the IdentityExperienceFramework application
-
-To register an application in your Azure AD B2C tenant, you can use the **App registrations** experience.
-
-1. Select **App registrations**, and then select **New registration**.
-1. For **Name**, enter `IdentityExperienceFramework`.
-1. Under **Supported account types**, select **Accounts in this organizational directory only**.
-1. Under **Redirect URI**, select **Web**, and then enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com`, where `your-tenant-name` is your Azure AD B2C tenant domain name.
-1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box.
-1. Select **Register**.
-1. Record the **Application (client) ID** for use in a later step.
-
-Next, expose the API by adding a scope:
-
-1. In the left menu, under **Manage**, select **Expose an API**.
-1. Select **Add a scope**, then select **Save and continue** to accept the default application ID URI.
-1. Enter the following values to create a scope that allows custom policy execution in your Azure AD B2C tenant:
- * **Scope name**: `user_impersonation`
- * **Admin consent display name**: `Access IdentityExperienceFramework`
- * **Admin consent description**: `Allow the application to access IdentityExperienceFramework on behalf of the signed-in user.`
-1. Select **Add scope**.
-
-* * *
-
-### Register the ProxyIdentityExperienceFramework application
-
-1. Select **App registrations**, and then select **New registration**.
-1. For **Name**, enter `ProxyIdentityExperienceFramework`.
-1. Under **Supported account types**, select **Accounts in this organizational directory only**.
-1. Under **Redirect URI**, use the drop-down to select **Public client/native (mobile & desktop)**.
-1. For **Redirect URI**, enter `myapp://auth`.
-1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box.
-1. Select **Register**.
-1. Record the **Application (client) ID** for use in a later step.
-
-Next, specify that the application should be treated as a public client:
-
-1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Advanced settings**, in the **Allow public client flows** section, set **Enable the following mobile and desktop flows** to **Yes**. Ensure that **"allowPublicClient": true** is set in the application manifest.
-1. Select **Save**.
-
-Now, grant permissions to the API scope you exposed earlier in the *IdentityExperienceFramework* registration:
-
-1. In the left menu, under **Manage**, select **API permissions**.
-1. Under **Configured permissions**, select **Add a permission**.
-1. Select the **My APIs** tab, then select the **IdentityExperienceFramework** application.
-1. Under **Permission**, select the **user_impersonation** scope that you defined earlier.
-1. Select **Add permissions**. As directed, wait a few minutes before proceeding to the next step.
-1. Select **Grant admin consent for (your tenant name)**.
-1. Select your currently signed-in administrator account, or sign in with an account in your Azure AD B2C tenant that's been assigned at least the *Cloud application administrator* role.
-1. Select **Accept**.
-1. Select **Refresh**, and then verify that "Granted for ..." appears under **Status** for the scopes - offline_access, openid and user_impersonation. It might take a few minutes for the permissions to propagate.
-
-* * *
-
-## Custom policy starter pack
-
-Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define technical profiles and user journeys. We provide starter packs with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
-- **LocalAccounts** - Enables the use of local accounts only.
-- **SocialAccounts** - Enables the use of social (or federated) accounts only.
-- **SocialAndLocalAccounts** - Enables the use of both local and social accounts.
-- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
-Each starter pack contains:
-- **Base file** - Few modifications are required to the base. Example: *TrustFrameworkBase.xml*
-- **Extension file** - This file is where most configuration changes are made. Example: *TrustFrameworkExtensions.xml*
-- **Relying party files** - Task-specific files called by your application. Examples: *SignUpOrSignin.xml*, *ProfileEdit.xml*, *PasswordReset.xml*
-In this article, you edit the XML custom policy files in the **SocialAndLocalAccounts** starter pack. If you need an XML editor, try [Visual Studio Code](https://code.visualstudio.com/download), a lightweight cross-platform editor.
-
-### Get the starter pack
-
-Get the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name.
-
-1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
-
- ```console
- git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
- ```
-
-1. In all of the files in the **SocialAndLocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant.
-
- For example, if the name of your B2C tenant is *contosotenant*, all instances of `yourtenant.onmicrosoft.com` become `contosotenant.onmicrosoft.com`.
-
-### Add application IDs to the custom policy
-
-Add the application IDs to the extensions file *TrustFrameworkExtensions.xml*.
-
-1. Open `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** and find the element `<TechnicalProfile Id="login-NonInteractive">`.
-1. Replace both instances of `IdentityExperienceFrameworkAppId` with the application ID of the IdentityExperienceFramework application that you created earlier.
-1. Replace both instances of `ProxyIdentityExperienceFrameworkAppId` with the application ID of the ProxyIdentityExperienceFramework application that you created earlier.
-1. Save the file.
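For reference, the element these steps point at looks roughly like the following in the starter pack's *TrustFrameworkExtensions.xml*; treat this as a sketch rather than the authoritative file. The two placeholder strings are exactly the values the steps above tell you to replace:

```xml
<TechnicalProfile Id="login-NonInteractive">
  <Metadata>
    <!-- Application (client) ID of the ProxyIdentityExperienceFramework registration -->
    <Item Key="client_id">ProxyIdentityExperienceFrameworkAppId</Item>
    <!-- Application (client) ID of the IdentityExperienceFramework registration -->
    <Item Key="IdTokenAudience">IdentityExperienceFrameworkAppId</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="client_id" DefaultValue="ProxyIdentityExperienceFrameworkAppId" />
    <InputClaim ClaimTypeReferenceId="resource_id" PartnerClaimType="resource" DefaultValue="IdentityExperienceFrameworkAppId" />
  </InputClaims>
</TechnicalProfile>
```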
-
-## Upload the policies
-
-1. Select the **Identity Experience Framework** menu item in your B2C tenant in the Azure portal.
-1. Select **Upload custom policy**.
-1. In this order, upload the policy files:
- 1. *TrustFrameworkBase.xml*
- 1. *TrustFrameworkExtensions.xml*
- 1. *SignUpOrSignin.xml*
- 1. *ProfileEdit.xml*
- 1. *PasswordReset.xml*
-
-As you upload the files, Azure adds the prefix `B2C_1A_` to each.
-
-> [!TIP]
-> If your XML editor supports validation, validate the files against the `TrustFrameworkPolicy_0.3.0.0.xsd` XML schema that is located in the root directory of the starter pack. XML schema validation identifies errors before uploading.
-
-## Test the custom policy
-
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select application** on the overview page of the custom policy, select the web application named *webapp1* that you previously registered.
-1. Make sure that the **Reply URL** is `https://jwt.ms`.
-1. Select **Run now**.
-1. Sign up using an email address.
-1. Select **Run now** again.
-1. Sign in with the same account to confirm that you have the correct configuration.
-
-## Add Facebook as an identity provider
-
-As mentioned in [Prerequisites](#prerequisites), Facebook is *not* required for using custom policies, but is used here to demonstrate how you can enable federated social login in a custom policy.
-
-1. In the `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** file, replace the value of `client_id` with the Facebook application ID:
-
- ```xml
- <TechnicalProfile Id="Facebook-OAUTH">
- <Metadata>
- <!--Replace the value of client_id in this technical profile with the Facebook app ID"-->
- <Item Key="client_id">00000000000000</Item>
- ```
-
-1. Upload the *TrustFrameworkExtensions.xml* file to your tenant.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. Select **Run now** and select Facebook to sign in with Facebook and test the custom policy.
-
-## Next steps
-
-Next, try adding Azure Active Directory (Azure AD) as an identity provider. The base file used in this getting started guide already contains some of the content that you need for adding other identity providers like Azure AD. For information about setting up Azure AD as an identity provider, see [Set up sign-up and sign-in with an Azure Active Directory account using Active Directory B2C custom policies](identity-provider-azure-ad-single-tenant.md).
-
-Visit our [partner gallery](partner-gallery.md) to learn more on how to implement ISV integration using custom policies.
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-overview.md
A custom policy is represented as one or more XML-formatted files, which refer t
## Custom policy starter pack
-Azure AD B2C custom policy [starter pack](custom-policy-get-started.md#get-the-starter-pack) comes with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
+Azure AD B2C custom policy [starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) comes with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described:
- **LocalAccounts** - Enables the use of local accounts only.
- **SocialAccounts** - Enables the use of social (or federated) accounts only.
You get started with Azure AD B2C custom policy:
1. [Create an Azure AD B2C tenant](tutorial-create-tenant.md)
1. [Register a web application](tutorial-register-applications.md) using the Azure portal so you'll be able to test your policy.
-1. Add the necessary [policy keys](custom-policy-get-started.md#add-signing-and-encryption-keys) and [register the Identity Experience Framework applications](custom-policy-get-started.md#register-identity-experience-framework-applications).
-1. [Get the Azure AD B2C policy starter pack](custom-policy-get-started.md#get-the-starter-pack) and upload to your tenant.
-1. After you upload the starter pack, [test your sign-up or sign-in policy](custom-policy-get-started.md#test-the-custom-policy).
+1. Add the necessary [policy keys](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-signing-and-encryption-keys) and [register the Identity Experience Framework applications](tutorial-create-user-flows.md?pivots=b2c-custom-policy#register-identity-experience-framework-applications).
+1. [Get the Azure AD B2C policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#get-the-starter-pack) and upload to your tenant.
+1. After you upload the starter pack, [test your sign-up or sign-in policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#test-the-custom-policy).
1. We recommend that you download and install [Visual Studio Code](https://code.visualstudio.com/) (VS Code). Visual Studio Code is a lightweight but powerful source code editor, which runs on your desktop and is available for Windows, macOS, and Linux. With VS Code, you can quickly navigate through and edit your Azure AD B2C custom policy XML files by installing the [Azure AD B2C extension for VS Code](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c).

## Next steps
active-directory-b2c Custom Policy Reference Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-reference-sso.md
The `<OutputClaims>` element is used for retrieving claims from the session.
### NoopSSOSessionProvider
-As the name dictates, this provider does nothing. This provider can be used for suppressing SSO behavior for a specific technical profile. The following `SM-Noop` technical profile is included in the [custom policy starter pack](custom-policy-get-started.md#custom-policy-starter-pack).
+As the name dictates, this provider does nothing. This provider can be used for suppressing SSO behavior for a specific technical profile. The following `SM-Noop` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
```xml
<TechnicalProfile Id="SM-Noop">
As the name dictates, this provider does nothing. This provider can be used for
### DefaultSSOSessionProvider
-This provider can be used for storing claims in a session. This provider is typically referenced in a technical profile used for managing local and federated accounts. The following `SM-AAD` technical profile is included in the [custom policy starter pack](custom-policy-get-started.md#custom-policy-starter-pack).
+This provider can be used for storing claims in a session. This provider is typically referenced in a technical profile used for managing local and federated accounts. The following `SM-AAD` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
```xml
<TechnicalProfile Id="SM-AAD">
This provider can be used for storing claims in a session. This provider is typi
```
-The following `SM-MFA` technical profile is included in the [custom policy starter pack](custom-policy-get-started.md#custom-policy-starter-pack) `SocialAndLocalAccountsWithMfa`. This technical profile manages the multi-factor authentication session.
+The following `SM-MFA` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) `SocialAndLocalAccountsWithMfa`. This technical profile manages the multi-factor authentication session.
```xml
<TechnicalProfile Id="SM-MFA">
The following `SM-MFA` technical profile is included in the [custom policy start
### ExternalLoginSSOSessionProvider
-This provider is used to suppress the "choose identity provider" screen and sign-out from a federated identity provider. It is typically referenced in a technical profile configured for a federated identity provider, such as Facebook, or Azure Active Directory. The following `SM-SocialLogin` technical profile is included in the [custom policy starter pack](custom-policy-get-started.md#custom-policy-starter-pack).
+This provider is used to suppress the "choose identity provider" screen and sign-out from a federated identity provider. It is typically referenced in a technical profile configured for a federated identity provider, such as Facebook, or Azure Active Directory. The following `SM-SocialLogin` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
```xml
<TechnicalProfile Id="SM-SocialLogin">
active-directory-b2c Custom Policy Rest Api Claims Exchange https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-rest-api-claims-exchange.md
You can also design the interaction as a validation technical profile. This is s
## Prerequisites

-- Complete the steps in [Get started with custom policies](custom-policy-get-started.md). You should have a working custom policy for sign-up and sign-in with local accounts.
+- Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts.
- Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](custom-policy-rest-api-intro.md).

## Prepare a REST API endpoint
active-directory-b2c Custom Policy Rest Api Claims Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-rest-api-claims-validation.md
You can also design the interaction as an orchestration step. This is suitable w
## Prerequisites

-- Complete the steps in [Get started with custom policies](custom-policy-get-started.md). You should have a working custom policy for sign-up and sign-in with local accounts.
+- Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts.
- Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](custom-policy-rest-api-intro.md).

## Prepare a REST API endpoint
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui-with-html.md
You should see a page similar to the following example with the elements centere
To configure UI customization, copy the **ContentDefinition** and its child elements from the base file to the extensions file.
-1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>. This base file is one of the policy files included in the custom policy starter pack, which you should have obtained in the prerequisite, [Get started with custom policies](./custom-policy-get-started.md).
+1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>. This base file is one of the policy files included in the custom policy starter pack, which you should have obtained in the prerequisite, [Get started with custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
1. Search for and copy the entire contents of the **ContentDefinitions** element.
1. Open the extension file. For example, *TrustFrameworkExtensions.xml*. Search for the **BuildingBlocks** element. If the element doesn't exist, add it.
1. Paste the entire contents of the **ContentDefinitions** element that you copied as a child of the **BuildingBlocks** element.
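As a point of reference, the result of these steps is a **ContentDefinitions** block in the extensions file whose page URI points at your custom HTML. The sketch below assumes the unified sign-up/sign-in page and uses a placeholder storage URL; a real starter-pack **ContentDefinition** also carries additional child elements that you copy over unchanged:

```xml
<BuildingBlocks>
  <ContentDefinitions>
    <!-- Overrides the page layout for the unified sign-up or sign-in page -->
    <ContentDefinition Id="api.signuporsignin">
      <LoadUri>https://your-storage-account.blob.core.windows.net/root/signup-signin.html</LoadUri>
    </ContentDefinition>
  </ContentDefinitions>
</BuildingBlocks>
```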
active-directory-b2c Deploy Custom Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/deploy-custom-policies-devops.md
There are three primary steps required for enabling Azure Pipelines to manage cu
## Prerequisites

* [Azure AD B2C tenant](tutorial-create-tenant.md), and credentials for a user in the directory with the [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator) role
-* [Custom policies](custom-policy-get-started.md) uploaded to your tenant
+* [Custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy) uploaded to your tenant
* [Management app](microsoft-graph-get-started.md) registered in your tenant with the Microsoft Graph API permission *Policy.ReadWrite.TrustFramework*
* [Azure Pipeline](https://azure.microsoft.com/services/devops/pipelines/), and access to an [Azure DevOps Services project][devops-create-project]
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
When using iframe, consider the following:
## Prerequisites
-* Complete the steps in the [Get started with custom policies in Active Directory B2C](custom-policy-get-started.md).
+* Complete the steps in the [Get started with custom policies in Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
* [Enable custom domains](custom-domain.md) for your policies.

## Configure your policy
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
This article describes how to set up a federation with another Azure AD B2C tena
To enable sign-in for users with an account from another Azure AD B2C tenant (for example, Fabrikam), in your Azure AD B2C (for example, Contoso):
-1. Create a [user flow](tutorial-create-user-flows.md), or a [custom policy](custom-policy-get-started.md).
+1. Create a [user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow), or a [custom policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
1. Then create an application in Azure AD B2C, as described in this section. To create an application:
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-local.md
After you download the starter pack.
1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
-1. Complete the steps in the [Add application IDs to the custom policy](custom-policy-get-started.md#add-application-ids-to-the-custom-policy) section of [Get started with custom policies in Azure Active Directory B2C](custom-policy-get-started.md). For example, update `/phone-number-passwordless/`**`Phone_Email_Base.xml`** with the **Application (client) IDs** of the two applications you registered when completing the prerequisites, *IdentityExperienceFramework* and *ProxyIdentityExperienceFramework*.
+1. Complete the steps in the [Add application IDs to the custom policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-application-ids-to-the-custom-policy) section of [Get started with custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy). For example, update `/phone-number-passwordless/`**`Phone_Email_Base.xml`** with the **Application (client) IDs** of the two applications you registered when completing the prerequisites, *IdentityExperienceFramework* and *ProxyIdentityExperienceFramework*.
1. Upload the policy files.

::: zone-end
active-directory-b2c Jwt Issuer Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/jwt-issuer-technical-profile.md
The CryptographicKeys element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| issuer_secret | Yes | The X509 certificate (RSA key set) to use to sign the JWT token. This is the `B2C_1A_TokenSigningKeyContainer` key you configure in [Get started with custom policies](custom-policy-get-started.md). |
-| issuer_refresh_token_key | Yes | The X509 certificate (RSA key set) to use to encrypt the refresh token. You configured the `B2C_1A_TokenEncryptionKeyContainer` key in [Get started with custom policies](custom-policy-get-started.md) |
+| issuer_secret | Yes | The X509 certificate (RSA key set) to use to sign the JWT token. This is the `B2C_1A_TokenSigningKeyContainer` key you configure in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). |
+| issuer_refresh_token_key | Yes | The X509 certificate (RSA key set) to use to encrypt the refresh token. You configured the `B2C_1A_TokenEncryptionKeyContainer` key in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy) |
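Put together, the two keys in the table above map onto the JWT issuer technical profile's **CryptographicKeys** element roughly as follows, assuming the default container names created in the getting-started steps (`B2C_1A_TokenSigningKeyContainer` and `B2C_1A_TokenEncryptionKeyContainer`):

```xml
<CryptographicKeys>
  <!-- Signs the issued JWT -->
  <Key Id="issuer_secret" StorageReferenceId="B2C_1A_TokenSigningKeyContainer" />
  <!-- Encrypts the refresh token -->
  <Key Id="issuer_refresh_token_key" StorageReferenceId="B2C_1A_TokenEncryptionKeyContainer" />
</CryptographicKeys>
```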
## Session management
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/manage-custom-policies-powershell.md
Azure PowerShell provides several cmdlets for command line- and script-based cus
## Prerequisites

* [Azure AD B2C tenant](tutorial-create-tenant.md), and credentials for a user in the directory with the [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator) role
-* [Custom policies](custom-policy-get-started.md) uploaded to your tenant
+* [Custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy) uploaded to your tenant
* [Azure AD PowerShell for Graph **preview module**](/powershell/azure/active-directory/install-adv2)

## Connect PowerShell session to B2C tenant
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
A customer account is created in your tenant before the multi-factor authenticat
::: zone pivot="b2c-custom-policy"
-To enable Multi-Factor Authentication, get the custom policy starter packs from GitHub, then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** starter pack enables social, local, and multi-factor authentication options. For more information, see [Get started with custom policies in Active Directory B2C](custom-policy-get-started.md).
+To enable Multi-Factor Authentication, get the custom policy starter packs from GitHub, then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** starter pack enables social, local, and multi-factor authentication options. For more information, see [Get started with custom policies in Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-b2c Multiple Token Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multiple-token-endpoints.md
The following sections present an example of how to enable multiple issuers in a
You need the following Azure AD B2C resources in place before continuing with the steps in this article:
-* [User flows](tutorial-create-user-flows.md) or [custom policies](custom-policy-get-started.md) created in your tenant
+* [User flows](tutorial-create-user-flows.md?pivots=b2c-user-flow) or [custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy) created in your tenant
## Get token issuer endpoints
active-directory-b2c Oauth2 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/oauth2-technical-profile.md
The technical profile also returns claims that aren't returned by the identity p
| ExtraParamsInClaimsEndpointRequest | No | Contains the extra parameters that can be returned in the **ClaimsEndpoint** request by some identity providers. Multiple parameter names should be escaped and separated by the comma ',' delimiter. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
| ResolveJsonPathsInJsonTokens | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data.0.to.0.email`.|
-|token_endpoint_auth_method| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
-|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
+|token_endpoint_auth_method| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic` (public preview), and `private_key_jwt` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+|token_signing_algorithm| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`.|
+|SingleLogoutEnabled| No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
| UsePolicyInRedirectUri | No | Indicates whether to use a policy when constructing the redirect URI. When you configure your application in the identity provider, you need to specify the redirect URI. The redirect URI points to Azure AD B2C, `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`. If you specify `true`, you need to add a redirect URI for each policy you use. For example: `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/{policy-name}/oauth2/authresp`. |

## Cryptographic keys
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect-technical-profile.md
The technical profile also returns claims that aren't returned by the identity p
| MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the Http status code is in the 5xx range. The default is `false`. |
| DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-| token_endpoint_auth_method | No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
-| token_signing_algorithm | No | The signing algorithm used for client assertions when the **token_endpoint_auth_method** metadata is set to `private_key_jwt`. Possible values: `RS256` (default). |
+|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic` (public preview), and `private_key_jwt` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+|token_signing_algorithm| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`.|
| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](./session-behavior.md#sign-out). Possible values: `true` (default), or `false`. |
| ReadBodyClaimsOnIdpRedirect | No | Set to `true` to read claims from response body on identity provider redirect. This metadata is used with [Apple ID](identity-provider-apple-id.md), where claims return in the response payload. |
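The new `private_key_jwt` option is exposed through the same metadata collection in both the OAuth2 and OpenID Connect technical profiles. As a hedged illustration of the values in the table above, opting into it would look roughly like this inside a technical profile's **Metadata** element; the certificate that signs the client assertion is referenced separately through the profile's cryptographic keys:

```xml
<Metadata>
  <!-- Other metadata items omitted -->
  <Item Key="token_endpoint_auth_method">private_key_jwt</Item>
  <!-- Optional; RS256 is the default signing algorithm -->
  <Item Key="token_signing_algorithm">RS256</Item>
</Metadata>
```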
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-arkose-labs.md
To enable the API connector, in the **API connector** settings for your user flo
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
The following architecture diagram shows the implementation.
## Set up the solution

1. [Create a Facebook application](./identity-provider-facebook.md#create-a-facebook-application) configured to allow federation to Azure AD B2C.
-2. [Add the Facebook secret](./custom-policy-get-started.md#create-the-facebook-key) you created as an Identity Experience Framework policy key.
+2. [Add the Facebook secret](./tutorial-create-user-flows.md?pivots=b2c-custom-policy#create-the-facebook-key) you created as an Identity Experience Framework policy key.
## Configure your application under Microsoft DFP
The value of the userID needs to be the same as the one in the corresponding Azu
1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection/Policies) in the Policies folder.
-2. Follow this [document](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
+2. Follow this [document](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
3. Configure the policy for the Azure AD B2C tenant.
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-experian.md
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
### Part 6 - Configure the Azure AD B2C policy
-Refer to this [document](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) for instructions on how to set up your Azure AD B2C tenant and configure policies.
+Refer to this [document](./tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) for instructions on how to set up your Azure AD B2C tenant and configure policies.
>[!NOTE]
>This sample policy is based on [Local Accounts starter
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for security.
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
## Next steps
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-hypr.md
The following architecture diagram shows the implementation.
1. Go to the [Azure AD B2C HYPR policy](https://github.com/HYPR-Corp-Public/Azure-AD-B2C-HYPR-Sample/tree/master/policy) in the Policy folder.
-2. Follow this [document](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
+2. Follow this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
3. Configure the policy for the Azure AD B2C tenant.
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Idology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-idology.md
The sample policy uses these key names:
### Part 4 - Configure the Azure AD B2C Policy
-1. Follow this [document](custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) and configure the policy for the Azure AD B2C tenant. Follow the instructions until you complete the **Test the Custom Policy** section.
+1. Follow this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) and configure the policy for the Azure AD B2C tenant. Follow the instructions until you complete the **Test the Custom Policy** section.
2. Download the two sample policies [here](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/IDology/policy).
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Itsme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-itsme.md
For additional information, review the following articles:
* [Custom policies in Azure AD B2C](custom-policy-overview.md)
-* [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md?tabs=applications)
+* [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-jumio.md
You can [configure application settings in Azure App Service](../app-service/con
1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/Policies) in the Policies folder.
-2. Follow [this article](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts).
+2. Follow [this article](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts).
3. Configure the policy for the Azure AD B2C tenant.
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-keyless.md
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-lexisnexis.md
In the provided [TrustFrameworkExtensions policy](https://github.com/azure-ad-b2
### Part 7 - Configure the Azure AD B2C policy
-Refer to this [document](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) to download [Local Accounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) and configure the [policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/policy) for the Azure AD B2C tenant.
+Refer to this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [Local Accounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) and configure the [policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/policy) for the Azure AD B2C tenant.
>[!NOTE]
>Update the provided policies to relate to your specific tenant.
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-n8identity.md
To get started, you'll need:
- [Optional] Connection and credential information for any databases or Lightweight Directory Access Protocols (LDAPs) you want to migrate customer data from. -- [Optional] Configured Azure AD B2C environment for using [custom policies](./custom-policy-get-started.md), if you wish to integrate TheAccessHub Admin Tool into your sign-up policy flow.
+- [Optional] Configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you wish to integrate TheAccessHub Admin Tool into your sign-up policy flow.
## Scenario description
To synchronize data from Azure AD B2C into TheAccessHub Admin Tool:
## Configure Azure AD B2C policies
-Occasionally syncing TheAccessHub Admin Tool is limited in its ability to keep its state up to date with Azure AD B2C. We can leverage TheAccessHub Admin Tool's API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./custom-policy-get-started.md). In the next section, we'll give you example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your Sign-Up custom policies.
+Occasionally syncing TheAccessHub Admin Tool is limited in its ability to keep its state up to date with Azure AD B2C. We can leverage TheAccessHub Admin Tool's API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./user-flow-overview.md). In the next section, we'll give you example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your Sign-Up custom policies.
### Create a secure credential to invoke TheAccessHub Admin Tool's API
Occasionally syncing TheAccessHub Admin Tool is limited in its ability to keep i
5. Select **Download** to get a zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
-6. Follow this [tutorial](./custom-policy-get-started.md) to get started with designing custom policies in Azure AD B2C.
+6. Follow this [tutorial](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to get started with designing custom policies in Azure AD B2C.
## Next steps
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-nevis.md
To get started, you'll need:
- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription. -- Configured Azure AD B2C environment for using [custom policies](./custom-policy-get-started.md), if you wish to integrate Nevis into your sign-up policy flow.
+- Configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you wish to integrate Nevis into your sign-up policy flow.
## Scenario description
For additional information, review the following articles
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-onfido.md
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
### Part 4 - Configure the Azure AD B2C policy
-Refer to this [document](./custom-policy-get-started.md?tabs=applications#custom-policy-starter-pack) for instructions on how to set up your Azure AD B2C tenant and configure policies.
+Refer to this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) for instructions on how to set up your Azure AD B2C tenant and configure policies.
>[!NOTE] > As a best practice, we recommend that customers add a consent notification on the attribute collection page. Notify users that information will be sent to third-party services for identity verification.
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-ping-identity.md
For additional information, review the following articles
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-saviynt.md
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
- [Create a web API application](./add-web-api-application.md)
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-strata.md
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-trusona.md
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](custom-policy-overview.md) -- [Get started with custom policies in AAD B2C](custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in AAD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Twilio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-twilio.md
For additional information, review the following articles:
- [Custom policies in AAD B2C](custom-policy-overview.md) -- [Get started with custom policies in AAD B2C](custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in AAD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Typingdna https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-typingdna.md
For additional information, review the following articles:
- [Custom policies in AAD B2C](./custom-policy-overview.md) -- [Get started with custom policies in AAD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in AAD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-whoiam.md
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-zscaler.md
After you've configured Azure AD B2C, the rest of the IdP configuration resumes.
>[!Note] >This step is required only if you haven't already configured custom policies. If you already have one or more custom policies, you can skip this step.
-To configure custom policies on your Azure AD B2C tenant, see [Get started with custom policies in Azure Active Directory B2C](./custom-policy-get-started.md).
+To configure custom policies on your Azure AD B2C tenant, see [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
### Step 3: Register ZPA as a SAML application in Azure AD B2C
Go to a ZPA user portal or a browser-access application, and test the sign-up or
For more information, review the following articles: -- [Get started with custom policies in Azure AD B2C](./custom-policy-get-started.md)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
- [Register a SAML application in Azure AD B2C](./saml-service-provider.md) - [Step-by-step configuration guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa) - [Configure an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign)
active-directory-b2c Policy Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/policy-keys-overview.md
Azure Active Directory B2C (Azure AD B2C) stores secrets and certificates in the
This article discusses what you need to know about the policy keys that are used by Azure AD B2C. > [!NOTE]
-> Currently, configuration of policy keys is limited to [custom policies](./custom-policy-get-started.md) only.
+> Currently, configuration of policy keys is limited to [custom policies](./user-flow-overview.md) only.
You can configure secrets and certificates for establishing trust between services in the Azure portal under the **Policy keys** menu. Keys can be symmetric or asymmetric. *Symmetric* cryptography, or private key cryptography, is where a shared secret is used to both encrypt and decrypt the data. *Asymmetric* cryptography, or public key cryptography, is a cryptographic system that uses pairs of keys, consisting of public keys that are shared with the relying party application and private keys that are known only to Azure AD B2C.
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider.md
Organizations that use Azure AD B2C as their customer identity and access manage
## Prerequisites
-* Complete the steps in [Get started with custom policies in Azure AD B2C](custom-policy-get-started.md). You need the *SocialAndLocalAccounts* custom policy from the custom policy starter pack discussed in the article.
+* Complete the steps in [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You need the *SocialAndLocalAccounts* custom policy from the custom policy starter pack discussed in the article.
* Basic understanding of the SAML protocol and familiarity with the application's SAML implementation. * A web application configured as a SAML application. For this tutorial, you can use a [SAML test application][samltest] that we provide.
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
Users should not enable this option on public computers.
To enable KMSI, set the content definition `DataUri` element to [page identifier](contentdefinitions.md#datauri) `unifiedssp` and [page version](page-layout.md) *1.1.0* or above.
-1. Open the extension file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>. This extension file is one of the policy files included in the custom policy starter pack, which you should have obtained in the prerequisite, [Get started with custom policies](custom-policy-get-started.md).
+1. Open the extension file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>. This extension file is one of the policy files included in the custom policy starter pack, which you should have obtained in the prerequisite, [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
1. Search for the **BuildingBlocks** element. If the element doesn't exist, add it. 1. Add the **ContentDefinitions** element to the **BuildingBlocks** element of the policy.
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-custom-attributes.md
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-In the [Add claims and customize user input using custom policies](configure-user-input.md) article you learn how to use built-in [user profile attributes](user-profile-attributes.md). In this article, you enable a custom attribute in your Azure Active Directory B2C (Azure AD B2C) directory. Later, you can use the new attribute as a custom claim in [user flows](user-flow-overview.md) or [custom policies](custom-policy-get-started.md) simultaneously.
+In the [Add claims and customize user input using custom policies](configure-user-input.md) article you learn how to use built-in [user profile attributes](user-profile-attributes.md). In this article, you enable a custom attribute in your Azure Active Directory B2C (Azure AD B2C) directory. Later, you can use the new attribute as a custom claim in [user flows](user-flow-overview.md) or [custom policies](user-flow-overview.md) simultaneously.
Your Azure AD B2C directory comes with a [built-in set of attributes](user-profile-attributes.md). However, you often need to create your own attributes to manage your specific scenario, for example when:
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-migration.md
Use the seamless migration flow if plaintext passwords in the old identity provi
- The password is stored in a one-way encrypted format, such as with a hash function. - The password is stored by the legacy identity provider in a way that you can't access. For example, when the identity provider validates credentials by calling a web service.
-The seamless migration flow still requires pre migration of user accounts, but then uses a [custom policy](custom-policy-get-started.md) to query a [REST API](custom-policy-rest-api-intro.md) (which you create) to set each user's password at first sign-in.
+The seamless migration flow still requires pre migration of user accounts, but then uses a [custom policy](user-flow-overview.md) to query a [REST API](custom-policy-rest-api-intro.md) (which you create) to set each user's password at first sign-in.
The seamless migration flow thus has two phases: *pre migration* and *set credentials*.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/location-condition.md
Organizations can use this network location for common tasks like:
The network location is determined by the public IP address a client provides to Azure Active Directory. Conditional Access policies by default apply to all IPv4 and IPv6 addresses.
-> [!TIP]
-> IPv6 ranges are only supported in the **[Named location (preview)](#preview-features)** interface.
- ## Named locations
-Locations are designated in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block.
+Locations are designated in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations can be defined by IPv4/IPv6 address ranges or by countries/regions.
![Named locations in the Azure portal](./media/location-condition/new-named-location.png)
-To configure a location, you will need to provide at least a **Name** and the IP range.
-
-The number of named locations you can configure is constrained by the size of the related object in Azure AD. You can configure locations based on of the following limitations:
+### IP address ranges
-- One named location with up to 1200 IPv4 ranges.-- A maximum of 90 named locations with one IP range assigned to each of them.
+To define a named location by IPv4/IPv6 address ranges, you will need to provide a **Name** and an IP range.
-> [!TIP]
-> IPv6 ranges are only supported in the **[Named location (preview)](#preview-features)** interface.
+Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations:
+- Configure up to 195 named locations
+- Configure up to 2000 IP ranges per named location
+- Both IPv4 and IPv6 ranges are supported
+- Private IP ranges cannot be configured
+- The number of IP addresses contained in a range is limited. Only CIDR masks greater than /8 are allowed when defining an IP range, as illustrated in the sketch after this list.
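For example, a minimal sketch of creating an IP-based named location through the Microsoft Graph conditional access API might look like the following, assuming an `accessToken` already acquired with the `Policy.ReadWrite.ConditionalAccess` permission and placeholder IP ranges:

```JavaScript
// Hypothetical sketch: create an IP-based named location via Microsoft Graph.
// The display name and CIDR ranges below are placeholders.
async function createIpNamedLocation(accessToken) {
  const body = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    displayName: "Headquarters network",
    isTrusted: true,
    ipRanges: [
      // CIDR masks must be greater than /8, per the limits listed above.
      { "@odata.type": "#microsoft.graph.iPv4CidrRange", cidrAddress: "203.0.113.0/24" },
      { "@odata.type": "#microsoft.graph.iPv6CidrRange", cidrAddress: "2001:db8::/48" }
    ]
  };

  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify(body)
    }
  );
  return response.json();
}
```

Setting `isTrusted: true` in this sketch is what makes the location a trusted named location, discussed next.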
### Trusted locations
-When creating a network location, an administrator has the option to mark a location as a trusted location.
+Administrators can designate named locations defined by IP address ranges to be trusted named locations.
![Trusted locations in the Azure portal](./media/location-condition/new-trusted-location.png)
-This option can factor into Conditional Access policies where you may, for example, require registration for multi-factor authentication from a trusted network location. It also factors into Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when coming from a location marked as trusted.
+Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted. Additionally, trusted named locations can be targeted in Conditional Access policies. For example, you may restrict multi-factor authentication registration to trusted named locations only.
### Countries and regions
-Some organizations may choose to define entire countries or regions IP boundaries as named locations for Conditional Access policies. They may use these locations when blocking unnecessary traffic when they know valid users will never come from a location such as North Korea. These mappings of IP address to country are updated periodically.
+Some organizations may choose to restrict access to certain countries or regions using Conditional Access. In addition to defining named locations by IP ranges, admins can define named locations by country or regions. When a user signs in, Azure AD resolves the user's IPv4 address to a country or region, and the mapping is updated periodically. Organizations can use named locations defined by countries to block traffic from countries where they do not do business, such as North Korea.
> [!NOTE]
-> IPv6 address ranges cannot be mapped to countries. Only IPv4 addresses map to countries.
+> Sign-ins from IPv6 addresses cannot be mapped to countries or regions, and are considered unknown areas. Only IPv4 addresses can be mapped to countries or regions.
![Create a new country or region-based location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
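A country or region named location can be sketched against the same Microsoft Graph endpoint, assuming the same `accessToken` and using a placeholder country code:

```JavaScript
// Hypothetical sketch: create a country/region named location via Microsoft Graph.
async function createCountryNamedLocation(accessToken) {
  const body = {
    "@odata.type": "#microsoft.graph.countryNamedLocation",
    displayName: "Blocked countries",
    countriesAndRegions: ["KP"],               // ISO 3166-1 alpha-2 codes
    includeUnknownCountriesAndRegions: false   // IPv6 or unresolvable addresses count as unknown
  };
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
      body: JSON.stringify(body)
    }
  );
  return response.json();
}
```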
For mobile and desktop applications, which have long lived session lifetimes, Co
If both steps fail, a user is considered to be no longer on a trusted IP.
-## Preview features
-
-In addition to the generally available named location feature, there is also a named location (preview). You can access the named location preview by using the banner at the top of the current named location blade.
-
-![Try the named locations preview](./media/location-condition/preview-features.png)
-
-With the named location preview, you are able to
--- Configure up to 195 named locations-- Configure up to 2000 IP Ranges per named location-- Configure IPv6 addresses alongside IPv4 addresses-
-We've also added some additional checks to help reduce the chance of misconfiguration.
--- Private IP ranges can no longer be configured-- The number of IP addresses that can be included in a range are limited. Only CIDR masks greater than /8 will be allowed when configuring an IP range.-
-With the preview, there are now two create options:
--- **Countries location**-- **IP ranges location**-
-> [!NOTE]
-> IPv6 address ranges cannot be mapped to countries. Only IPv4 addresses map to countries.
-
-![Named locations preview interface](./media/location-condition/named-location-preview.png)
- ## Location condition in policy When you configure the location condition, you have the option to distinguish between:
With this option, you can select one or more named locations. For a policy with
## IPv6 traffic
-By default, Conditional Access policies will apply to all IPv6 traffic. With the [named location preview](#preview-features), you can exclude specific IPv6 address ranges from a Conditional Access policy. This option is useful in cases where you don't want policy to be enforced for specific IPv6 ranges. For example, if you don't want to enforce a policy for users on your corporate network, and your corporate network is hosted on public IPv6 ranges.
+By default, Conditional Access policies will apply to all IPv6 traffic. You can exclude specific IPv6 address ranges from a Conditional Access policy if you don't want policies to be enforced for specific IPv6 ranges. For example, you might not want to enforce a policy for users on your corporate network when your corporate network is hosted on public IPv6 ranges.
### When will my tenant have IPv6 traffic?
active-directory Msal B2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-b2c-overview.md
When a user selects **Forgot password**, your application throws an error which
Learn more about these Azure AD B2C concepts: -- [User flows](../../active-directory-b2c/tutorial-create-user-flows.md)-- [Custom policies](../../active-directory-b2c/custom-policy-get-started.md)
+- [User flows](../../active-directory-b2c/tutorial-create-user-flows.md?pivots=b2c-user-flow)
+- [Custom policies](../../active-directory-b2c/tutorial-create-user-flows.md?pivots=b2c-custom-policy)
- [UX customization](../../active-directory-b2c/configure-user-input.md)
active-directory Msal Net Aad B2c Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-aad-b2c-considerations.md
A symptom of such a scenario is that MSAL.NET returns `Missing from the token re
The suggested workaround is to use [caching by policy](#acquire-a-token-to-apply-a-policy) described earlier.
-Alternatively, you can use the `tid` claim if you're using [custom policies](../../active-directory-b2c/custom-policy-get-started.md) in Azure AD B2C. Custom policies can return additional claims to your application by using [claims transformation](../../active-directory-b2c/claims-transformation-technical-profile.md).
+Alternatively, you can use the `tid` claim if you're using [custom policies](../../active-directory-b2c/user-flow-overview.md) in Azure AD B2C. Custom policies can return additional claims to your application by using [claims transformation](../../active-directory-b2c/claims-transformation-technical-profile.md).
#### Mitigation for "Missing from the token response"
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. | | AADSTS50056 | Invalid or null password -Password does not exist in store for this user. | | AADSTS50057 | UserDisabled - The user account is disabled. The account has been disabled by an administrator. |
-| AADSTS50058 | UserInformationNotProvided - This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encouraged in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
+| AADSTS50058 | UserInformationNotProvided - This means that a user is not signed in. This is a common error that's expected when a user is unauthenticated and has not yet signed in.</br>If this error is encountered in an SSO context where the user has previously signed in, this means that the SSO session was either not found or invalid.</br>This error may be returned to the application if prompt=none is specified. |
| AADSTS50059 | MissingTenantRealmAndNoUserInformationProvided - Tenant-identifying information was not found in either the request or implied by any provided credentials. The user can contact the tenant admin to help resolve the issue. | | AADSTS50061 | SignoutInvalidRequest - The sign-out request is invalid. | | AADSTS50064 | CredentialAuthenticationError - Credential validation on username or password has failed. |
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
This flow isn't supported on MSAL for macOS.
# [Node.js](#tab/nodejs)
-This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/standalone-samples/username-password). In the code snippet below, the username and password are hardcoded for illustration purposes only. This should be avoided in production. Instead, a basic UI prompting the user to enter her username/password would be recommended.
+This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/username-password). In the code snippet below, the username and password are hardcoded for illustration purposes only. This should be avoided in production. Instead, a basic UI prompting the user to enter her username/password would be recommended.
```JavaScript const msal = require("@azure/msal-node");
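// A hedged sketch of how this username/password (ROPC) flow continues with MSAL Node;
// the client ID, authority, and credentials below are placeholders, not the verbatim sample.
const config = {
    auth: {
        clientId: "YOUR_CLIENT_ID",
        authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    },
};
const pca = new msal.PublicClientApplication(config);

const usernamePasswordRequest = {
    scopes: ["user.read"],
    username: "user@contoso.com", // hardcoded for illustration only
    password: "PASSWORD",         // in production, collect this through a basic UI instead
};

pca.acquireTokenByUsernamePassword(usernamePasswordRequest)
    .then((response) => console.log("Acquired token:", response.accessToken))
    .catch((error) => console.error(error));
```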
This flow doesn't apply to macOS.
# [Node.js](#tab/nodejs)
-This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/standalone-samples/device-code).
+This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/device-code).
```JavaScript const msal = require('@azure/msal-node');
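// A hedged sketch of how the device code flow continues with MSAL Node;
// the values below are placeholders, not the verbatim sample.
const deviceCodeConfig = {
    auth: {
        clientId: "YOUR_CLIENT_ID",
        authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    },
};
const deviceCodePca = new msal.PublicClientApplication(deviceCodeConfig);

const deviceCodeRequest = {
    scopes: ["user.read"],
    // Invoked with instructions telling the user where to enter the device code.
    deviceCodeCallback: (response) => console.log(response.message),
};

deviceCodePca.acquireTokenByDeviceCode(deviceCodeRequest)
    .then((response) => console.log("Signed in as:", response.account.username))
    .catch((error) => console.error(error));
```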
active-directory Resilient End User Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilient-end-user-experience.md
To help you set up the most common identity tasks, Azure AD B2C provides built-i
Choose built-in user flows if they can meet your business requirements. Because Microsoft tests them extensively, you can minimize the policy-level functional, performance, and scale testing needed for these identity user flows. You still need to test your applications for functionality, performance, and scale.
-Should you [choose custom policies](../../active-directory-b2c/custom-policy-get-started.md) because of your business requirements, make sure you perform policy-level testing for functional, performance, or scale in addition to application-level testing.
+Should you [choose custom policies](../../active-directory-b2c/user-flow-overview.md) because of your business requirements, make sure you perform policy-level testing for functional, performance, or scale in addition to application-level testing.
See the article that [compares user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies) to help you decide.
active-directory Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-baseline.md
+
+ Title: Azure security baseline for Azure Active Directory
+description: The Azure Active Directory security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
+++ Last updated : 04/07/2021+++
+# Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
+++
+# Azure security baseline for Azure Active Directory
+
+This security baseline applies guidance from the [Azure Security Benchmark version 2.0](../../security/benchmarks/overview.md) to Azure Active Directory. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the **security controls** defined by the Azure Security Benchmark and the related guidance applicable to Azure Active Directory.
+
+> [!NOTE]
+> **Controls** not applicable to Azure Active Directory, or for which the responsibility is Microsoft's, have been excluded from this baseline. To see how Azure Active Directory completely maps to the Azure Security Benchmark, see the [full Azure Active Directory security baseline mapping file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
+
+## Network Security
+
+*For more information, see the [Azure Security Benchmark: Network Security](/azure/security/benchmarks/security-controls-v2-network-security).*
+
+### NS-6: Simplify network security rules
+
+**Guidance**: Use Azure Virtual Network Service Tags to define network access controls on network security groups or Azure Firewall configured for your Azure Active Directory resources. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name, like 'AzureActiveDirectory' in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change.
+
+- [Understand and use service tags](../../virtual-network/service-tags-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Identity Management
+
+*For more information, see the [Azure Security Benchmark: Identity Management](/azure/security/benchmarks/security-controls-v2-identity-management).*
+
+### IM-1: Standardize Azure Active Directory as the central identity and authentication system
+
+**Guidance**: Use Azure Active Directory (Azure AD) as the default identity and access management service. You should standardize Azure AD to govern your organization's identity and access management in:
+- Microsoft Cloud resources, such as the Azure portal, Azure Storage, Azure Virtual Machine (Linux and Windows), Azure Key Vault, PaaS, and SaaS applications.
+- Your organization's resources, such as applications on Azure or your corporate network resources.
+
+
+Securing Azure AD should be a high priority in your organization's cloud security practice. Azure AD provides an identity secure score to help you assess identity security posture relative to Microsoft's best practice recommendations. Use the score to gauge how closely your configuration matches best practice recommendations, and to make improvements in your security posture.
+
+Azure AD supports external identity that allows users without a Microsoft account to sign in to their applications and resources with their external identity.
+
+- [Tenancy in Azure Active Directory](../develop/single-and-multi-tenant-apps.md)
+
+- [How to create and configure an Azure AD instance](active-directory-access-create-new-tenant.md)
+
+- [Use external identity providers for application](/azure/active-directory/b2b/identity-providers)
+
+- [What is the identity secure score in Azure Active Directory](identity-secure-score.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-2: Manage application identities securely and automatically
+
+**Guidance**: Use managed identities for Azure resources for non-human accounts such as services or automation. It is recommended to use the Azure managed identity feature instead of creating a more powerful human account to access or run your resources. You can natively authenticate to Azure services and resources that support Azure Active Directory (Azure AD) authentication through a predefined access grant rule, without using credentials hard-coded in source code or configuration files. You cannot assign Azure managed identities to Azure AD resources, but Azure AD is the mechanism for authenticating with managed identities assigned to other services' resources. See the sketch after the links below.
+
+- [Managed Identity for Azure Resources](../managed-identities-azure-resources/overview.md)
+
+- [Services that support Managed Identity for Azure Resources](../managed-identities-azure-resources/services-support-managed-identities.md)
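For example, a minimal sketch of a service authenticating with its managed identity, assuming the code runs on an Azure resource that has a managed identity enabled and that the `@azure/identity` package is installed; the scope shown targets Azure Resource Manager and is only an example:

```JavaScript
// Hypothetical sketch: authenticate with a managed identity instead of a hardcoded credential.
const { ManagedIdentityCredential } = require("@azure/identity");

async function getArmToken() {
  const credential = new ManagedIdentityCredential();
  // Request a token for Azure Resource Manager; no secret appears in code or config.
  const token = await credential.getToken("https://management.azure.com/.default");
  console.log("Token expires at:", new Date(token.expiresOnTimestamp).toISOString());
  return token.token;
}

getArmToken().catch((error) => console.error(error));
```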
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-3: Use Azure AD single sign-on (SSO) for application access
+
+**Guidance**: Use Azure Active Directory to provide identity and access management to Azure resources, cloud applications, and on-premises applications. This includes enterprise identities such as employees, as well as external identities such as partners, vendors, and suppliers. This enables single sign-on (SSO) to manage and secure access to your organization's data and resources on-premises and in the cloud. Connect all your users, applications, and devices to Azure AD for seamless, secure access and greater visibility and control.
+
+
+- [Understand Application SSO with Azure AD](../manage-apps/what-is-single-sign-on.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-4: Use strong authentication controls for all Azure Active Directory based access
+
+**Guidance**: Use Azure Active Directory to support strong authentication controls through multi-factor authentication (MFA), and strong passwordless methods.
+- Multi-factor authentication - Enable Azure AD MFA and follow Azure Security Center Identity and Access Management recommendations for some best practices in your MFA setup. MFA can be enforced on all, select users or at the per-user level based on sign-in conditions and risk factors.
+- Passwordless authentication - Three passwordless authentication options are available: Windows Hello for Business, Microsoft Authenticator app, and on-premises authentication methods such as smart cards.
+
+For administrator and privileged users, ensure the highest level of the strong authentication method is used, followed by rolling out the appropriate strong authentication policy to other users.
+
+Azure AD supports legacy password-based authentication for cloud-only accounts (user accounts created directly in Azure AD), which have a baseline password policy, and for hybrid accounts (user accounts that come from on-premises Active Directory), which follow the on-premises password policies. When using password-based authentication, Azure AD provides a password protection capability that prevents users from setting passwords that are easy to guess. Microsoft provides a global list of banned passwords that is updated based on telemetry, and customers can augment the list based on their needs (for example, branding or cultural references). This password protection can be used for cloud-only and hybrid accounts.
+
+Authentication based on password credentials alone is susceptible to popular attack methods. For higher security, use strong authentication such as MFA and a strong password policy. For third-party applications and marketplace services that may have default passwords, you should change them during initial setup of the service.
+
+
+- [How to deploy Azure AD MFA](../authentication/howto-mfa-getstarted.md)
+
+
+
+
+- [Introduction to passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md)
+
+
+
+
+- [Azure AD default password policy](https://docs.microsoft.com/azure/active-directory/authentication/concept-sspr-policy#password-policies-that-only-apply-to-cloud-user-accounts)
+
+- [Eliminate bad passwords using Azure AD Password Protection](../authentication/concept-password-ban-bad.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-5: Monitor and alert on account anomalies
+
+**Guidance**: Azure Active Directory provides the following data sources:
+
+
+- Sign-ins - The sign-ins report provides information about the usage of managed applications and user sign-in activities.
+
+
+- Audit logs - Provides traceability through logs for all changes done by various features within Azure AD. Examples of audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles and policies.
+
+
+- Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+
+
+- Users flagged for risk - A risky user is an indicator for a user account that might have been compromised.
+
+
+
+These data sources can be integrated with Azure Monitor, Azure Sentinel or third-party SIEM systems.
+
+
+
+
+Azure Security Center can also alert on certain suspicious activities such as excessive number of failed authentication attempts, deprecated accounts in the subscription.
+
+
+
+
+Azure Advanced Threat Protection (ATP) is a security solution that can use Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions.
+
+
+
+
+- [Audit activity reports in the Azure Active Directory](../reports-monitoring/concept-audit-logs.md)
+
+
+
+
+- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+
+
+
+
+- [How to identify Azure AD users flagged for risky activity](/azure/active-directory/reports-monitoring/concept-user-at-risk)
+
+
+
+
+- [How to monitor users' identity and access activity in Azure Security Center](../../security-center/security-center-identity-access.md)
+
+
+
+
+- [Alerts in Azure Security Center's threat intelligence protection module](../../security-center/alerts-reference.md)
+
+
+
+
+- [How to integrate Azure Activity Logs into Azure Monitor](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
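For example, assuming sign-in logs are already routed to a Log Analytics workspace, a minimal sketch of querying them for repeated failed sign-ins with the `@azure/monitor-query` package (the workspace ID and threshold are placeholders):

```JavaScript
// Hypothetical sketch: look for accounts with repeated failed sign-ins over the last day.
const { DefaultAzureCredential } = require("@azure/identity");
const { LogsQueryClient } = require("@azure/monitor-query");

const workspaceId = "YOUR_LOG_ANALYTICS_WORKSPACE_ID"; // placeholder

async function findFailedSignIns() {
  const client = new LogsQueryClient(new DefaultAzureCredential());
  const kql = `
    SigninLogs
    | where ResultType != 0
    | summarize FailedAttempts = count() by UserPrincipalName
    | where FailedAttempts > 10
    | order by FailedAttempts desc`;

  const result = await client.queryWorkspace(workspaceId, kql, { duration: "P1D" });
  for (const table of result.tables ?? []) {
    for (const row of table.rows) {
      console.log(row);
    }
  }
}

findFailedSignIns().catch((error) => console.error(error));
```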
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-6: Restrict Azure resource access based on conditions
+
+**Guidance**: Use Azure Active Directory (Azure AD) Conditional Access for a more granular access control based on user-defined conditions, such as user logins from certain IP ranges will need to use MFA for login. Granular authentication session management policy can also be used for different use cases.
+
+
+- [Azure AD Conditional Access overview](../conditional-access/overview.md)
+
+
+
+
+- [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+
+
+
+
+- [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md)
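For example, a minimal sketch of creating such a policy through Microsoft Graph, assuming an `accessToken` with the `Policy.ReadWrite.ConditionalAccess` permission; the policy below requires MFA outside trusted locations and starts in report-only mode:

```JavaScript
// Hypothetical sketch: create a Conditional Access policy via Microsoft Graph.
async function createMfaPolicy(accessToken) {
  const policy = {
    displayName: "Require MFA outside trusted locations (sample)",
    state: "enabledForReportingButNotEnforced", // report-only while testing
    conditions: {
      users: { includeUsers: ["All"] },
      applications: { includeApplications: ["All"] },
      locations: { includeLocations: ["All"], excludeLocations: ["AllTrusted"] }
    },
    grantControls: { operator: "OR", builtInControls: ["mfa"] }
  };

  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
      body: JSON.stringify(policy)
    }
  );
  return response.json();
}
```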
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-8: Secure user access to legacy applications
+
+**Guidance**: Ensure you have modern access controls and session monitoring for legacy applications and the data they store and process. While VPNs are commonly used to access legacy applications, they often have only basic access control and limited session monitoring.
+
+
+Azure AD Application Proxy enables you to publish legacy on-premises applications to remote users with SSO while explicitly validating trustworthiness of both remote users and devices with Azure AD Conditional Access.
+
+
+
+
+Alternatively, Microsoft Cloud App Security is a Cloud Access Security Broker (CASB) service that can provide controls for monitoring a user's application sessions and blocking actions (for both legacy on-premises applications and cloud software as a service (SaaS) applications).
+
+
+
+
+- [Azure Active Directory Application Proxy](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy#what-is-application-proxy)
+
+
+
+
+- [Microsoft Cloud App Security best practices](/cloud-app-security/best-practices)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Privileged Access
+
+*For more information, see the [Azure Security Benchmark: Privileged Access](/azure/security/benchmarks/security-controls-v2-privileged-access).*
+
+### PA-1: Protect and limit highly privileged users
+
+**Guidance**: The most critical built-in roles in Azure AD are Global Administrator and Privileged Role Administrator, because users assigned to these two roles can delegate administrator roles:
+
+- Global Administrator: Users with this role have access to all administrative features in Azure AD, as well as services that use Azure AD identities.
+
+- Privileged Role Administrator: Users with this role can manage role assignments in Azure AD, as well as within Azure AD Privileged Identity Management (PIM). In addition, this role allows management of all aspects of PIM and administrative units.
+
+You may have other critical roles that need to be governed if you use custom roles with certain privileged permissions assigned. And you may also want to apply similar controls to the administrator account of critical business assets.
+
+Azure AD has highly privileged accounts: the users and service principals that are directly or indirectly assigned to, or eligible for, the Global Administrator or Privileged Role Administrator roles, and other highly privileged roles in Azure AD and Azure.
+
+Limit the number of highly privileged accounts and protect these accounts at an elevated level because users with this privilege can directly or indirectly read and modify every resource in your Azure environment.
+
+You can enable just-in-time (JIT) privileged access to Azure resources and Azure AD using Azure AD Privileged Identity Management (PIM). JIT grants temporary permissions to perform privileged tasks only when users need it. PIM can also generate security alerts when there is suspicious or unsafe activity in your Azure AD organization.
+
+- [Administrator role permissions in Azure AD](/azure/active-directory/users-groups-roles/directory-assign-admin-roles)
+
+- [Use Azure Privileged Identity Management security alerts](../privileged-identity-management/pim-how-to-configure-security-alerts.md)
+
+- [Securing privileged access for hybrid and cloud deployments in Azure AD](/azure/active-directory/users-groups-roles/directory-admin-roles-secure)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-2: Restrict administrative access to business-critical systems
+
+**Guidance**: Use Azure Active Directory Privileged Identity Management and Multi-factor authentication to restrict administrative access to business critical systems.
+
+- [Privileged Identity Management approval of role activation requests](../privileged-identity-management/azure-ad-pim-approval-workflow.md)
+
+- [Multi-factor authentication and Conditional Access](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-3: Review and reconcile user access regularly
+
+**Guidance**: Review user account access assignments regularly to ensure the accounts and their access are valid, especially focused on the highly privileged roles including Global Administrator and Privileged Role Administrator. You can use Azure Active Directory (Azure AD) access reviews to review group memberships, access to enterprise applications, and role assignments, both for Azure AD roles and Azure roles. Azure AD reporting can provide logs to help discover stale accounts. You can also use Azure AD Privileged Identity Management to create access review report workflow to facilitate the review process.
+
+In addition, Azure Privileged Identity Management can also be configured to alert when an excessive number of administrator accounts are created, and to identify administrator accounts that are stale or improperly configured.
+
+- [Create an access review of Azure AD roles in Privileged Identity Management (PIM)](../privileged-identity-management/pim-how-to-start-security-review.md)
+
+- [Create an access review of Azure resource roles in Privileged Identity Management (PIM)](../privileged-identity-management/pim-resource-roles-start-access-review.md)
+
+- [How to use Azure AD identity and access reviews](../governance/access-reviews-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-4: Set up emergency access in Azure AD
+
+**Guidance**: To prevent being accidentally locked out of your Azure AD
+organization, set up an emergency access account for access when normal
+administrative accounts cannot be used. Emergency access accounts are usually
+highly privileged, and they should not be assigned to specific individuals.
+Emergency access accounts are limited to emergency or "break glass"
+scenarios where normal administrative accounts can't be used.
+
+You should ensure that the credentials (such as password,
+certificate, or smart card) for emergency access accounts are kept secure and
+known only to individuals who are authorized to use them only in an emergency.
+
+- [Manage emergency access accounts in Azure AD](/azure/active-directory/users-groups-roles/directory-emergency-access)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-5: Automate entitlement management
+
+**Guidance**: Use Azure AD entitlement management features to automate
+access request workflows, including access assignments, reviews, and
+expiration. Dual or multi-stage approval is also supported.
+
+- [What is Azure AD entitlement management](../governance/entitlement-management-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-6: Use privileged access workstations
+
+**Guidance**: Secured, isolated workstations are critically important for
+the security of sensitive roles like administrators, developers, and critical
+service operators. Use highly secured user workstations and/or Azure Bastion
+for administrative tasks. Use Azure Active Directory, Microsoft Defender
+Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure
+and managed user workstation for administrative tasks. The secured workstations
+can be centrally managed to enforce secured configuration including strong
+authentication, software and hardware baselines, restricted logical and network
+access.
+
+- [Securing devices as part of privileged access](/security/compass/privileged-access-devices)
+
+- [Privileged access implementation](/security/compass/privileged-access-deployment)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-7: Follow just enough administration (least privilege principle)
+
+**Guidance**: Customers can configure their Azure Active Directory (Azure AD) deployment for least
+privilege by assigning users to the roles with the minimum permissions they need
+to complete their administrative tasks. See the sketch after the links below.
+
+- [Administrator role permissions in Azure AD](../roles/permissions-reference.md)
+
+- [Assign administrative roles in Azure AD](../roles/manage-roles-portal.md)
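For example, a minimal sketch of assigning a narrowly scoped built-in role through the Microsoft Graph role management API instead of a broad role such as Global Administrator, assuming an `accessToken` with the `RoleManagement.ReadWrite.Directory` permission (the role template ID shown is understood to be the User Administrator template and should be treated as an example):

```JavaScript
// Hypothetical sketch: assign the User Administrator role to a single user.
async function assignLeastPrivilegedRole(accessToken, userObjectId) {
  const assignment = {
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    principalId: userObjectId,
    // Assumed template ID of the built-in "User Administrator" role.
    roleDefinitionId: "fe930be7-5e62-47db-91af-98c3a49a38b1",
    directoryScopeId: "/" // tenant-wide; could be narrowed to an administrative unit
  };

  const response = await fetch(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
      body: JSON.stringify(assignment)
    }
  );
  return response.json();
}
```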
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-8: Choose approval process for Microsoft support
+
+**Guidance**: Azure Active Directory doesn't support customer lockbox. Microsoft may
+work with customers through non-lockbox methods for approval to access customer
+data.
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+## Data Protection
+
+*For more information, see the [Azure Security Benchmark: Data Protection](/azure/security/benchmarks/security-controls-v2-data-protection).*
+
+### DP-2: Protect sensitive data
+
+**Guidance**: Consider the following guidance for implementing protection of your sensitive data:
+
+- **Isolation:** A directory is the data boundary by which the Azure Active Directory (Azure AD) services
+store and process data for a customer.
+Customers should determine where they want most of their Azure AD Customer
+Data to reside by setting the Country property in their directory.
+
+- **Segmentation:** The global administrator's role has full
+control of all directory data, and the rules that govern access and processing
+instructions. A directory may be segmented into administrative units, and
+provisioned with users and groups to be managed by administrators of those
+units. Global administrators may delegate responsibility to other principals in
+their organization by assigning them to pre-defined roles or custom roles they
+can create.
+
+
+- **Access:** Permissions can be applied at a user,
+group, role, application, or device. The
+assignment may be permanent or temporal per Privileged Identity Management
+configuration.
+
+- **Encryption:** Azure AD encrypts
+all data at rest or in transit. The
+offering does not allow customers to encrypt directory data with their own
+encryption key.
+
+To determine how their selected country maps to the physical location of their directory, see the 'Where is your data located' article.
+
+- [Where your data is located article](https://www.microsoft.com/trust-center/privacy/data-location)
+
+As the customer uses various Azure AD tools, features, and applications that interact with their directory, use the article Azure Active Directory - Where is your data located?
+
+- [Where is your data located dashboard](https://msit.powerbi.com/view?r=eyJrIjoiYzEyZTc5OTgtNTdlZS00ZTVkLWExN2ItOTM0OWU4NjljOGVjIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9)
+
+- [Azure AD roles documentation](/azure/active-directory/roles/)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### DP-4: Encrypt sensitive information in transit
+
+**Guidance**: To complement access controls,
+data in transit should be protected against 'out of band' attacks (e.g. traffic
+capture) using encryption to ensure that attackers cannot easily read or modify
+the data.
+
+Azure AD supports data encryption in
+transit with TLS v1.2 or greater.
+
+While this is optional for traffic on private networks,
+this is critical for traffic on external and public networks. For HTTP traffic,
+ensure that any clients connecting to your Azure resources can negotiate TLS
+v1.2 or greater. For remote management, use SSH (for Linux) or RDP/TLS (for
+Windows) instead of an unencrypted protocol. Obsoleted SSL, TLS, and SSH
+versions and protocols, and weak ciphers should be disabled.
+
+By default, Azure provides encryption for data in
+transit between Azure data centers.
+
+- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)
+
+- [Information on TLS Security](/security/engineering/solving-tls1-problem)
+
+- [Double encryption for Azure data in transit](https://docs.microsoft.com/azure/security/fundamentals/double-encryption#data-in-transit)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Asset Management
+
+*For more information, see the [Azure Security Benchmark: Asset Management](/azure/security/benchmarks/security-controls-v2-asset-management).*
+
+### AM-1: Ensure security team has visibility into risks for assets
+
+**Guidance**: Ensure security teams are granted Security Reader permissions in your Azure tenant and subscriptions so they can monitor for security risks using Azure Security Center.
+
+Depending on how security team responsibilities are structured, monitoring for security risks could be the responsibility of a central security team or a local team. That said, security insights and risks must always be aggregated centrally within an organization.
+
+Security Reader permissions can be applied broadly to an entire tenant (Root Management Group) or scoped to management groups or specific subscriptions.
+
+Additional permissions might be required to get visibility into workloads and services.
+
+- [Overview of Security Reader Role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#security-reader)
+
+- [Overview of Azure Management Groups](../../governance/management-groups/overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### AM-5: Limit users' ability to interact with Azure Resource Manager
+
+**Guidance**: Use Azure Active Directory (Azure AD) Conditional Access to limit users' ability to interact with Azure AD via the Azure portal by configuring "Block access" for the "Microsoft Azure Management" App.
+
+- [How to configure Conditional Access to block access to Azure Resources Manager](../../role-based-access-control/conditional-access-azure-management.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Logging and Threat Detection
+
+*For more information, see the [Azure Security Benchmark: Logging and Threat Detection](/azure/security/benchmarks/security-controls-v2-logging-threat-protection).*
+
+### LT-1: Enable threat detection for Azure resources
+
+**Guidance**: Use the Azure Active Directory (Azure AD) Identity Protection built-in threat detection capability for your Azure AD resources.
+
+
+
+
+
+Azure AD produces activity logs that are often used for threat detection and threat hunting. Azure AD sign-in logs provide a record of authentication and authorization activity for users, services, and apps. Azure AD audit logs track changes made to an Azure AD tenant, including changes that improve or diminish security posture.
+
+- [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
+
+- [Connect Azure AD Identity Protection data to Azure Sentinel](../../sentinel/connect-azure-ad-identity-protection.md)
+
+- [Connect Azure Active Directory data to Azure Sentinel](../../sentinel/connect-azure-active-directory.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-2: Enable threat detection for Azure identity and access management
+
+**Guidance**: Azure Active Directory (Azure AD) provides the following user logs that can be viewed in Azure AD reporting or integrated with Azure Monitor, Azure Sentinel or other SIEM/monitoring tools for more sophisticated monitoring and analytics use cases:
+- Sign-ins - The sign-ins report provides information about the usage of managed applications and user sign-in activities.
+- Audit logs - Provides traceability through logs for all changes done by various features within Azure AD. Examples of audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles and policies.
+- Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+- Risky users - A risky user is an indicator for a user account that might have been compromised.
+
+Identity Protection risk detections are enabled by default and require no onboarding process. The granularity of risk data is determined by the license SKU.
+
+- [Audit activity reports in the Azure Active Directory](../reports-monitoring/concept-audit-logs.md)
+
+- [Enable Azure Identity Protection](../identity-protection/overview-identity-protection.md)
+
+- [Threat protection in Azure Security Center](/azure/security-center/threat-protection)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-4: Enable logging for Azure resources
+
+**Guidance**: Azure Active Directory (Azure AD) produces activity logs. Unlike some Azure services, Azure AD does not make a distinction between activity and resource logs. Activity logs are automatically available in the Azure AD section of the Azure portal, and can be exported to Azure Monitor, Azure Event Hubs, Azure Storage, SIEMs, and other locations.
+
+- Sign-ins – The sign-ins report provides information about the usage of managed applications and user sign-in activities.
+
+- Audit logs - Provides traceability through logs for all changes done by various features within Azure AD. Examples of audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles and policies.
+
+
+
+Azure AD also provides security-related logs that can be viewed in Azure AD reporting or integrated with Azure Monitor, Azure Sentinel or other SIEM/monitoring tools for more sophisticated monitoring and analytics use cases:
+- Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+- Risky users - A risky user is an indicator for a user account that might have been compromised.
+
+Identity Protection risk detections are enabled by default and require no onboarding process. The granularity of the risk data is determined by the license SKU.
+
+
+- [Activity and security reports in Azure Active Directory](../reports-monitoring/overview-reports.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-5: Centralize security log management and analysis
+
+**Guidance**: We recommend the following guidelines when customers want to centralize their security logs for easier threat hunting and security posture analysis:
+
+- Centralize logging storage and analysis to enable correlation. For each log source within Azure AD, ensure you have assigned a data owner, access guidance, storage location, what tools are used to process and access the data, and data retention requirements.
+
+- Ensure you are integrating Azure activity logs into your central logging. Ingest logs via Azure Monitor to aggregate security data generated by endpoint devices, network resources, and other security systems. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use Azure Storage accounts for long term and archival storage.
+
+- In addition, enable and onboard data to Azure Sentinel or a third-party SIEM.
+
+
+Many organizations choose to use Azure Sentinel for "hot" data that is used frequently and Azure Storage for "cold" data that is used less frequently.
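+
+As a minimal sketch of that hot/cold split (all resource names are hypothetical), you might provision a Log Analytics workspace for frequently queried data and a storage account for long-term archival:
+
+```azurecli
+# Hypothetical names - substitute your own resource group, workspace, and storage account.
+az group create --name rg-seclogs --location eastus
+
+# Log Analytics workspace for "hot" data that analysts query frequently (for example, through Azure Sentinel).
+az monitor log-analytics workspace create \
+  --resource-group rg-seclogs \
+  --workspace-name law-seclogs-central
+
+# Storage account for "cold", long-term archival of security logs.
+az storage account create \
+  --resource-group rg-seclogs \
+  --name stseclogsarchive001 \
+  --sku Standard_LRS \
+  --kind StorageV2
+```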
+
+
+- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+
+
+- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-6: Configure log storage retention
+
+**Guidance**: Ensure that any storage accounts or Log Analytics workspaces used for storing Azure Active Directory sign-in logs, audit logs, and risk data logs have the log retention period set according to your organization's compliance regulations.
+
+In Azure Monitor, you can set your Log Analytics workspace retention period according to your organization's compliance regulations. Use Azure Storage, Data Lake or Log Analytics workspace accounts for long-term and archival storage.
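+
+For example, assuming the hypothetical workspace names used earlier, the workspace retention period can be adjusted with the Azure CLI; the value itself should come from your compliance requirements:
+
+```azurecli
+# Set workspace retention to 365 days - replace the names and value per your compliance needs.
+az monitor log-analytics workspace update \
+  --resource-group rg-seclogs \
+  --workspace-name law-seclogs-central \
+  --retention-time 365
+```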
+
+- [How to configure Log Analytics Workspace Retention Period](/azure/azure-monitor/platform/manage-cost-storage)
+
+- [Storing resource logs in an Azure Storage Account](/azure/azure-monitor/platform/resource-logs-collect-storage)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Incident Response
+
+*For more information, see the [Azure Security Benchmark: Incident Response](/azure/security/benchmarks/security-controls-v2-incident-response).*
+
+### IR-1: Preparation – update incident response process for Azure
+
+**Guidance**: Ensure your organization has processes to respond to security incidents, has updated these processes for Azure, and is regularly exercising them to ensure readiness.
+
+- [Implement security across the enterprise environment](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+- [Incident response reference guide](/microsoft-365/downloads/IR-Reference-Guide.pdf)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-2: Preparation – setup incident notification
+
+**Guidance**: Set up security incident contact information in Azure Security Center. This contact information is used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. You also have options to customize incident alert and notification in different Azure services based on your incident response needs.
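+
+A minimal Azure CLI sketch for setting this contact follows; the contact details are placeholders, and the exact parameter set should be confirmed against your CLI version:
+
+```azurecli
+# Placeholder contact details - use a monitored security distribution list and phone number.
+az security contact create \
+  --name "default1" \
+  --email "secops@contoso.com" \
+  --phone "2145551234" \
+  --alert-notifications "on" \
+  --alerts-admins "on"
+```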
+
+- [How to set the Azure Security Center security contact](../../security-center/security-center-provide-security-contact-details.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-3: Detection and analysis – create incidents based on high-quality alerts
+
+**Guidance**: Ensure you have a process to create high-quality alerts and measure the quality of alerts. This allows you to learn lessons from past incidents and prioritize alerts for analysts, so they don't waste time on false positives.
+
+High-quality alerts can be built based on experience from past incidents, validated community sources, and tools designed to generate and clean up alerts by fusing and correlating diverse signal sources.
+
+Azure Security Center provides high-quality alerts across many Azure assets. You can use the ASC data connector to stream the alerts to Azure Sentinel. Azure Sentinel lets you create advanced alert rules to generate incidents automatically for an investigation.
+
+Export your Azure Security Center alerts and recommendations using the export feature to help identify risks to Azure resources. Export alerts and recommendations either manually or in an ongoing, continuous fashion.
+
+- [How to configure export](../../security-center/continuous-export.md)
+
+- [How to stream alerts into Azure Sentinel](../../sentinel/connect-azure-security-center.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-4: Detection and analysis – investigate an incident
+
+**Guidance**: Ensure analysts can query and use diverse data sources as they investigate potential incidents, to build a full view of what happened. Diverse logs should be collected to track the activities of a potential attacker across the kill chain to avoid blind spots. You should also ensure insights and learnings are captured for other analysts and for future historical reference.
+
+The data sources for investigation include the centralized logging sources that are already being collected from the in-scope services and running systems, but can also include:
+
+- Network data – use network security groups' flow logs, Azure Network Watcher, and Azure Monitor to capture network flow logs and other analytics information.
+
+- Snapshots of running systems:
+
+ - Use Azure virtual machine's snapshot capability to create a snapshot of the running system's disk.
+
+ - Use the operating system's native memory dump capability to create a snapshot of the running system's memory.
+
+ - Use the snapshot feature of the Azure services or your software's own capability to create snapshots of the running systems.
+
+Azure Sentinel provides extensive data analytics across virtually any log source and a case management portal to manage the full lifecycle of incidents. Intelligence information during an investigation can be associated with an incident for tracking and reporting purposes.
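+
+For the disk-snapshot step listed above, a minimal Azure CLI sketch (the VM and resource group names are hypothetical) looks like this:
+
+```azurecli
+# Hypothetical VM under investigation.
+resourceGroup="rg-incident-42"
+vmName="vm-suspect"
+
+# Look up the OS disk ID of the running VM, then capture a point-in-time snapshot for forensics.
+osDiskId=$(az vm show --resource-group "$resourceGroup" --name "$vmName" \
+  --query "storageProfile.osDisk.managedDisk.id" --output tsv)
+
+az snapshot create \
+  --resource-group "$resourceGroup" \
+  --name "${vmName}-osdisk-snapshot" \
+  --source "$osDiskId"
+```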
+
+- [Snapshot a Windows machine's disk](../../virtual-machines/windows/snapshot-copy-managed-disk.md)
+
+- [Snapshot a Linux machine's disk](../../virtual-machines/linux/snapshot-copy-managed-disk.md)
+
+- [Microsoft Azure Support diagnostic information and memory dump collection](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
+
+- [Investigate incidents with Azure Sentinel](../../sentinel/tutorial-investigate-cases.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-5: Detection and analysis – prioritize incidents
+
+**Guidance**: Provide context to analysts on which incidents to focus on first based on alert severity and asset sensitivity.
+
+Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytics used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.
+
+Additionally, mark resources using tags and create a naming system to identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the incident occurred.
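+
+One way to apply such tags with the Azure CLI is sketched below; the tag names, values, and resource ID are examples only:
+
+```azurecli
+# Example resource ID and tag values - adapt the classification scheme to your organization.
+resourceId="/subscriptions/<subscription-id>/resourceGroups/rg-payments/providers/Microsoft.KeyVault/vaults/kv-payments-prod"
+
+# Merge criticality and data-sensitivity tags onto the resource without removing its existing tags.
+az tag update --resource-id "$resourceId" --operation Merge \
+  --tags Criticality=High DataSensitivity=Confidential
+```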
+
+- [Security alerts in Azure Security Center](../../security-center/security-center-alerts-overview.md)
+
+- [Use tags to organize your Azure resources](/azure/azure-resource-manager/resource-group-using-tags)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-6: Containment, eradication and recovery – automate the incident handling
+
+**Guidance**: Automate manual repetitive tasks to speed up response time and reduce the burden on analysts. Manual tasks take longer to execute, slowing each incident and reducing how many incidents an analyst can handle. Manual tasks also increase analyst fatigue, which increases the risk of human error that causes delays, and degrades the ability of analysts to focus effectively on complex tasks.
+Use workflow automation features in Azure Security Center and Azure Sentinel to automatically trigger actions or run a playbook to respond to incoming security alerts. The playbook takes actions, such as sending notifications, disabling accounts, and isolating problematic networks.
+
+- [Configure workflow automation in Security Center](../../security-center/workflow-automation.md)
+
+- [Set up automated threat responses in Azure Security Center](https://docs.microsoft.com/azure/security-center/tutorial-security-incident#triage-security-alerts)
+
+- [Set up automated threat responses in Azure Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Posture and Vulnerability Management
+
+*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](/azure/security/benchmarks/security-controls-v2-vulnerability-management).*
+
+### PV-1: Establish secure configurations for Azure services
+
+**Guidance**: Microsoft identity and access management solutions help IT protect access to applications and resources across on-premises and cloud environments. It is important that organizations follow security best practices to ensure that their identity and access management implementation is secure and more resilient to attacks.
+
+Based on your identity and access management implementation strategy, your organization should follow Microsoft's best practice guidance to secure your identity infrastructure.
+
+Organizations that collaborate with external partners should additionally assess and implement appropriate governance, security, and compliance configurations to reduce security risk and protect sensitive resources.
+
+- [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md)
+
+- [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md)
+
+- [Securing external collaboration in Azure Active Directory and Microsoft 365](secure-external-access-resources.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PV-2: Sustain secure configurations for Azure services
+
+**Guidance**: Microsoft Secure Score gives organizations a measurement of their security posture, along with recommendations that can help protect them from threats. It is recommended that organizations routinely review their Secure Score and act on the suggested improvement actions to strengthen their identity security posture.
+
+- [What is the identity secure score in Azure Active Directory?](identity-secure-score.md)
+
+- [Microsoft Secure Score](/microsoft-365/security/mtp/microsoft-secure-score)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PV-8: Conduct regular attack simulation
+
+**Guidance**: As required, conduct penetration testing or red team activities on your Azure resources and ensure remediation of all critical security findings.
+Follow the Microsoft Cloud Penetration Testing Rules of Engagement to ensure your penetration tests are not in violation of Microsoft policies. Use Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications.
+
+- [Penetration testing in Azure](../../security/fundamentals/pen-testing.md)
+
+- [Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
+
+- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+## Governance and Strategy
+
+*For more information, see the [Azure Security Benchmark: Governance and Strategy](/azure/security/benchmarks/security-controls-v2-governance-strategy).*
+
+### GS-1: Define asset management and data protection strategy
+
+**Guidance**: Ensure you document and communicate a clear strategy for continuous monitoring and protection of systems and data. Prioritize discovery, assessment, protection, and monitoring of business-critical data and systems.
+
+
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- Data classification standard in accordance with the business risks
+
+- Security organization visibility into risks and asset inventory
+
+- Security organization approval of Azure services for use
+
+- Security of assets through their lifecycle
+
+- Required access control strategy in accordance with organizational data classification
+
+- Use of Azure native and third-party data protection capabilities
+
+- Data encryption requirements for in-transit and at-rest use cases
+
+- Appropriate cryptographic standards
+
+
+
+For more information, see the following references:
+- [Azure Security Architecture Recommendation - Storage, data, and encryption](/azure/architecture/framework/security/storage-data-encryption)
+
+
+
+- [Azure Security Fundamentals - Azure Data security, encryption, and storage](../../security/fundamentals/encryption-overview.md)
+
+
+- [Cloud Adoption Framework - Azure data security and encryption best practices](../../security/fundamentals/data-encryption-best-practices.md)
+
+
+- [Azure Security Benchmark - Asset management](/azure/security/benchmarks/security-benchmark-v2-asset-management)
+
+
+- [Azure Security Benchmark - Data Protection](/azure/security/benchmarks/security-benchmark-v2-data-protection)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-2: Define enterprise segmentation strategy
+
+**Guidance**: Establish an enterprise-wide strategy for segmenting access to assets using a combination of identity, network, application, subscription, management group, and other controls.
+
+Carefully balance the need for security separation with the need to enable daily operation of the systems that need to communicate with each other and access data.
+
+Ensure that the segmentation strategy is implemented consistently across control types, including network security, identity and access models, application permission/access models, and human process controls.
+
+- [Guidance on segmentation strategy in Azure (video)](/security/compass/microsoft-security-compass-introduction#azure-components-and-reference-model-2151)
+
+- [Guidance on segmentation strategy in Azure (document)](/security/compass/governance#enterprise-segmentation-strategy)
+
+- [Align network segmentation with enterprise segmentation strategy](/security/compass/network-security-containment#align-network-segmentation-with-enterprise-segmentation-strategy)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-3: Define security posture management strategy
+
+**Guidance**: Continuously measure and mitigate risks to your individual assets and the environment they are hosted in. Prioritize high-value assets and highly exposed attack surfaces, such as published applications, network ingress and egress points, and user and administrator endpoints.
+
+- [Azure Security Benchmark - Posture and vulnerability management](/azure/security/benchmarks/security-benchmark-v2-posture-vulnerability-management)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-4: Align organization roles, responsibilities, and accountabilities
+
+**Guidance**: Ensure you document and communicate a clear strategy for roles and responsibilities in your security organization. Prioritize providing clear accountability for security decisions, educating everyone on the shared responsibility model, and educating technical teams on the technology used to secure the cloud.
+
+- [Azure Security Best Practice 1 – People: Educate Teams on Cloud Security Journey](/azure/cloud-adoption-framework/security/security-top-10#1-people-educate-teams-about-the-cloud-security-journey)
+
+- [Azure Security Best Practice 2 - People: Educate Teams on Cloud Security Technology](/azure/cloud-adoption-framework/security/security-top-10#2-people-educate-teams-on-cloud-security-technology)
+
+- [Azure Security Best Practice 3 - Process: Assign Accountability for Cloud Security Decisions](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-5: Define network security strategy
+
+**Guidance**: Establish an Azure network security approach as part of your organization's overall security access control strategy.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- Centralized network management and security responsibility
+
+- Virtual network segmentation model aligned with the enterprise segmentation strategy
+
+- Remediation strategy in different threat and attack scenarios
+
+- Internet edge and ingress and egress strategy
+
+- Hybrid cloud and on-premises interconnectivity strategy
+
+- Up-to-date network security artifacts (e.g. network diagrams, reference network architecture)
+
+For more information, see the following references:
+- [Azure Security Best Practice 11 - Architecture: Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
+
+- [Azure Security Benchmark - Network Security](/azure/security/benchmarks/security-benchmark-v2-network-security)
+
+- [Azure network security overview](../../security/fundamentals/network-overview.md)
+
+- [Enterprise network architecture strategy](/azure/cloud-adoption-framework/ready/enterprise-scale/architecture)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-6: Define identity and privileged access strategy
+
+**Guidance**: Establish Azure identity and privileged access approaches as part of your organization's overall security access control strategy.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- A centralized identity and authentication system and its interconnectivity with other internal and external identity systems
+
+- Strong authentication methods in different use cases and conditions
+
+- Protection of highly privileged users
+
+- Monitoring and handling of anomalous user activities
+
+- User identity and access review and reconciliation process
+
+For more information, see the following references:
+
+- [Azure Security Benchmark - Identity management](/azure/security/benchmarks/security-benchmark-v2-identity-management)
+
+- [Azure Security Benchmark - Privileged access](/azure/security/benchmarks/security-benchmark-v2-privileged-access)
+
+- [Azure Security Best Practice 11 - Architecture: Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
+
+- [Azure identity management security overview](../../security/fundamentals/identity-management-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-7: Define logging and threat response strategy
+
+**Guidance**: Establish a logging and threat response strategy to rapidly detect and remediate threats while meeting compliance requirements. Prioritize providing analysts with high-quality alerts and seamless experiences so that they can focus on threats rather than integration and manual steps.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- The security operations (SecOps) organization's role and responsibilities
+
+- A well-defined incident response process aligning with NIST or another industry framework
+
+- Log capture and retention to support threat detection, incident response, and compliance needs
+
+- Centralized visibility of and correlation information about threats, using SIEM, native Azure capabilities, and other sources
+
+- Communication and notification plan with your customers, suppliers, and public parties of interest
+
+- Use of Azure native and third-party platforms for incident handling, such as logging and threat detection, forensics, and attack remediation and eradication
+
+- Processes for handling incidents and post-incident activities, such as lessons learned and evidence retention
+
+For more information, see the following references:
+
+- [Azure Security Benchmark - Logging and threat detection](/azure/security/benchmarks/security-benchmark-v2-logging-threat-detection)
+
+- [Azure Security Benchmark - Incident response](/azure/security/benchmarks/security-benchmark-v2-incident-response)
+
+- [Azure Security Best Practice 4 - Process. Update Incident Response Processes for Cloud](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+- [Azure Adoption Framework, logging, and reporting decision guide](/azure/cloud-adoption-framework/decision-guides/logging-and-reporting/)
+
+- [Azure enterprise scale, management, and monitoring](/azure/cloud-adoption-framework/ready/enterprise-scale/management-and-monitoring)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Next steps
+
+- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
+- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
Title: Tutorial: Use a managed identity to access Azure Resource Manager - Windows - Azure AD
+ Title: "Tutorial: Use a managed identity to access Azure Resource Manager - Windows - Azure AD"
description: A tutorial that walks you through the process of using a user-assigned managed identity on a Windows VM, to access Azure Resource Manager. documentationcenter: ''
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
The following scenarios are not supported right now:
- *Azure AD P2 licensed customers only*: Don't assign a group as Active to a role through both Azure AD and Privileged Identity Management (PIM). Specifically, don't assign a role to a role-assignable group when it's being created *and* assign a role to the group using PIM later. This will lead to issues where users can't see their active role assignments in PIM, as well as the inability to remove that PIM assignment. Eligible assignments are not affected in this scenario. If you do attempt to make this assignment, you might see unexpected behavior such as:
  - End time for the role assignment might display incorrectly.
  - In the PIM portal, **My Roles** can show only one role assignment regardless of how many methods by which the assignment is granted (through one or more groups and directly).
-- The **Enable staged rollout for managed user sign-in** feature doesn't support assignment via group.
- *Azure AD P2 licensed customers only*: Even after deleting the group, it is still shown as an eligible member of the role in the PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal.
- Use the new [Exchange Admin Center](https://admin.exchange.microsoft.com/) for role assignments via group membership. The old Exchange Admin Center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected.
- Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.
aks Concepts Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-scale.md
This article introduces the core concepts that help you scale applications in AK
You can manually scale replicas (pods) and nodes to test how your application responds to a change in available resources and state. Manually scaling resources also lets you define a set amount of resources to use to maintain a fixed cost, such as the number of nodes. To manually scale, you define the replica or node count. The Kubernetes API then schedules creating additional pods or draining nodes based on that replica or node count.
-When scaling down nodes, the Kubernetes API calls the relevant Azure Compute API tied to the compute type used by your cluster. For example, for clusters built on VM Scale Sets the logic for selecting which nodes to remove is determined by the VM Scale Sets API. To learn more about how nodes are selected for removal on scale down, see the [VMSS FAQ](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.md#if-i-reduce-my-scale-set-capacity-from-20-to-15-which-vms-are-removed).
+When scaling down nodes, the Kubernetes API calls the relevant Azure Compute API tied to the compute type used by your cluster. For example, for clusters built on VM Scale Sets the logic for selecting which nodes to remove is determined by the VM Scale Sets API. To learn more about how nodes are selected for removal on scale down, see the [VMSS FAQ](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#if-i-reduce-my-scale-set-capacity-from-20-to-15--which-vms-are-removed-).
To get started with manually scaling pods and nodes see [Scale applications in AKS][aks-scale].
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-storage-drivers.md
Create the AKS cluster with support for CSI storage drivers:
```azurecli-interactive
# Create an AKS-managed Azure AD cluster
-az aks create -g MyResourceGroup -n MyManagedCluster --network-plugin azure -k 1.17.9 --aks-custom-headers EnableAzureDiskFileCSIDriver=true
+az aks create -g MyResourceGroup -n MyManagedCluster --network-plugin azure --aks-custom-headers EnableAzureDiskFileCSIDriver=true
```
If you want to create clusters with in-tree storage drivers instead of CSI storage drivers, you can do so by omitting the custom `--aks-custom-headers` parameter.
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
description: Learn about regions and Availability Zones in Azure to meet your te
Previously updated : 02/23/2021 Last updated : 04/09/2021
As mentioned previously, Azure classifies services into three categories: founda
> || > | Azure API for FHIR | > | Azure Analysis Services |
+> | Azure Blockchain Service |
> | Azure Cognitive > | Azure Cognitive > | Azure Cognitive
+> | Azure Cognitive
+> | Azure Cognitive
> | Azure Cognitive > | Azure Cognitive
+> | Azure Cognitive
> | Azure Data Share |
+> | Azure Databricks |
> | Azure Database for MariaDB | > | Azure Database Migration Service | > | Azure Dedicated HSM |
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Health Bot | > | Azure HPC Cache | > | Azure Lab Services |
-> | Azure Machine Learning Studio (classic) |
> | Azure NetApp Files | > | Azure Red Hat OpenShift | > | Azure SignalR Service |
-> | Azure Spring Cloud Service |
+> | Azure Spring Cloud |
+> | Azure Stream Analytics |
> | Azure Time Series Insights | > | Azure VMware Solution | > | Azure VMware Solution by CloudSimple |
-> | Data Lake Analytics |
> | Spatial Anchors | > | Storage: Archive Storage | > | Ultra Disk Storage |
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 04/06/2021 Last updated : 04/09/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently releas
## March 2021
-The March 2021 release is introduced on April 6, 2021.
+The March 2021 release was initially introduced on April 5, 2021, and the final stages of the release were completed on April 9, 2021.
Review limitations of this release in [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
Both custom resource definitions (CRD) for PostgreSQL have been consolidated int
You will delete the previous CRDs as you cleanup past installations. See [Cleanup from past installations](create-data-controller-using-kubernetes-native-tools.md#cleanup-from-past-installations).
-### Azure Arc enabled managed instance
+### Azure Arc enabled SQL Managed Instance
+
+- You can now create a SQL managed instance from the Azure portal in the direct connected mode.
- You can now restore a database to SQL Managed Instance with 3 replicas and it will be automatically added to the availability group.
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
There are several things related to memory usage within your Redis server instan
* [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md) * [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md) * [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md)
+ * [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md)
* [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
azure-functions Python Memory Profiler Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/python-memory-profiler-reference.md
Before you start developing a Python function app, you must meet these requireme
3. Apply the following decorator above any functions that need memory profiling. This does not work directly on the trigger entrypoint `main()` method. You need to create subfunctions and decorate them. Also, due to a memory-profiler known issue, when applying to an async coroutine, the coroutine return value will always be None. ```python
- @memory_profiler.profile(stream=memory_logger)
+ @memory_profiler.profile(stream=profiler_logstream)
4. Test the memory profiler on your local machine by using the Azure Functions Core Tools command `func host start`. This should generate a memory usage report with the file name, line of code, memory usage, memory increment, and line content in it.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Hendrix Corporation](https://www.hendrixcorp.com/)|
|[Hewlett Packard Enterprise](https://www.hpe.com)|
|[Hiscomp](http://www.hiscompllc.com/)|
-|[Hitachi Vantara](https://www.hitachivantarafederal.com/rean-cloud/)|
+|[Hitachi Vantara](https://www.hitachivantarafederal.com/services/cloud-services/)|
|[HTS Voice & Data Systems, Inc.](https://www.hts-tx.com/)|
|[HumanTouch LLC](https://www.humantouchllc.com/)|
|[Hyertek Inc.](https://www.hyertek.com)|
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/about-azure-maps.md
The Traffic service is a suite of web services that developers can use for web o
For more information, see the [Traffic service documentation](/rest/api/maps/traffic).
-### Weather services (Preview)
+### Weather services
Weather services offer APIs that developers can use to retrieve weather information for a particular location. The information contains details such as observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, temperature, and wind speed information. Additional details such as RealFeel™ Temperature and UV index are also returned.
azure-maps Choose Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/choose-pricing-tier.md
If the core geospatial APIs meet your service requirements, choose the S0 pricin
| Geofencing | |✓ |
| Azure Maps Data (Preview) | | ✓ |
| Mobility (Preview) | | ✓ |
-| Weather (Preview) |✓ |✓ |
+| Weather |✓ |✓ |
| Creator (Preview) | |✓ |
| Elevation (Preview) | |✓ |
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
Title: Request real-time and forecasted weather data using Azure Maps Weather services (Preview)
-description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services (Preview)
+ Title: Request real-time and forecasted weather data using Azure Maps Weather services
+description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services
Last updated 12/07/2020
-# Request real-time and forecasted weather data using Azure Maps Weather services (Preview)
-
-> [!IMPORTANT]
-> Azure Maps Weather services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Request real-time and forecasted weather data using Azure Maps Weather services
Azure Maps [Weather services](/rest/api/maps/weather) are a set of RESTful APIs that allows developers to integrate highly dynamic historical, real-time, and forecasted weather data and visualizations into their solutions. In this article, we'll show you how to request both real-time and forecasted weather data.
In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather
## Next steps > [!div class="nextstepaction"]
-> [Azure Maps Weather services (Preview) concepts](./weather-services-concepts.md)
+> [Azure Maps Weather services concepts](./weather-services-concepts.md)
> [!div class="nextstepaction"]
-> [Azure Maps Weather services (Preview) REST API](/rest/api/maps/weather)
+> [Azure Maps Weather services REST API](/rest/api/maps/weather)
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services (Preview) coverage
-description: Learn about Microsoft Azure Maps Weather services (Preview) coverage
+ Title: Microsoft Azure Maps Weather services coverage
+description: Learn about Microsoft Azure Maps Weather services coverage
Last updated 12/07/2020
-# Azure Maps Weather services (Preview) coverage
-
-> [!IMPORTANT]
-> Azure Maps Weather services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
+# Azure Maps Weather services coverage
This article provides coverage information for Azure Maps [Weather services](/rest/api/maps/weather). Azure Maps Weather data services returns details such as radar tiles, current weather conditions, weather forecasts, and the weather along a route.
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-service-tutorial.md
# Tutorial: Join sensor data with weather forecast data by using Azure Notebooks (Python)
-> [!IMPORTANT]
-> Azure Maps Weather services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Wind power is one alternative energy source for fossil fuels to combat against climate change. Because wind isn't consistent by nature, wind power operators need to build machine learning (ML) models to predict the wind power capacity. This prediction is necessary to meet electricity demand and ensure the grid stability. In this tutorial, we walk through how Azure Maps weather forecast data is combined with demo data for weather readings. Weather forecast data is requested by calling Azure Maps Weather services (Preview).
+Wind power is an alternative energy source to fossil fuels for combating climate change. Because wind isn't consistent by nature, wind power operators need to build machine learning (ML) models to predict wind power capacity. This prediction is necessary to meet electricity demand and ensure grid stability. In this tutorial, we walk through how Azure Maps weather forecast data is combined with demo data for weather readings. The weather forecast data is requested by calling Azure Maps Weather services.
In this tutorial, you will:
df = pd.read_csv("./data/weather_dataset_demo.csv")
## Request daily forecast data
-In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast API](/rest/api/maps/weather/getdailyforecast) of the Azure Maps Weather services (Preview). This API returns weather forecast for each wind turbine, for the next 15 days from the current date.
-
+In our scenario, we would like to request the daily forecast for each sensor location. The following script calls the [Daily Forecast API](/rest/api/maps/weather/getdailyforecast) of the Azure Maps Weather services. This API returns the weather forecast for each wind turbine for the next 15 days from the current date.
```python subscription_key = "Your Azure Maps key"
years,months,days = [],[],[]
dates_check=set() wind_speeds, wind_direction = [], []
-# Call azure maps Weather services (Preview) to get daily forecast data for 15 days from current date
+# Call azure maps Weather services to get daily forecast data for 15 days from current date
session = aiohttp.ClientSession() j=-1 for i in range(0, len(coords), 2):
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-concepts.md
Title: Weather services (Preview) concepts in Microsoft Azure Maps
-description: Learn about the concepts that apply to Microsoft Azure Maps Weather services (Preview).
+ Title: Weather services concepts in Microsoft Azure Maps
+description: Learn about the concepts that apply to Microsoft Azure Maps Weather services.
Last updated 09/10/2020
-# Weather services (Preview) in Azure Maps
-
-> [!IMPORTANT]
-> Azure Maps Weather services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Weather services in Azure Maps
This article introduces concepts that apply to Azure Maps [Weather services](/rest/api/maps/weather). We recommend going through this article before starting out with the weather APIs.
azure-maps Weather Services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-faq.md
Title: Microsoft Azure Maps Weather services (Preview) frequently asked questions (FAQ)
-description: Find answer to common questions about Azure Maps Weather services (Preview) data and features.
+ Title: Microsoft Azure Maps Weather services frequently asked questions (FAQ)
+description: Find answer to common questions about Azure Maps Weather services data and features.
Last updated 12/07/2020
-# Azure Maps Weather services (Preview) frequently asked questions (FAQ)
-
-> [!IMPORTANT]
-> Azure Maps Weather services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Azure Maps Weather services frequently asked questions (FAQ)
This article answers common questions about Azure Maps [Weather services](/rest/api/maps/weather) data and features. The following topics are covered:
These datasets are reviewed in real time for accuracy for the Digital Forecast S
Numerous weather forecast guidance systems are utilized to formulate global forecasts. Over 150 numerical forecast models, both external and internal datasets, are used each day. This includes government models such as the European Centre ECMWF and the U.S. Global Forecast System (GFS). Additionally, AccuWeather incorporates proprietary high-resolution models that downscale forecasts to specific locations and strategic regional domains to predict weather with further accuracy. AccuWeather's unique blending and weighting algorithms have been developed over the last several decades. These algorithms optimally leverage the numerous forecast inputs to provide highly accurate forecasts.
-## Weather services (Preview) coverage and availability
+## Weather services coverage and availability
**What kind of coverage can I expect for different countries/regions?**
Azure Maps Forecast APIs are cached for up to 30 mins. To see when the cached re
## Developing with Azure Maps SDKs
-**Does Azure Maps Web SDK natively support Weather services (Preview) integration?**
+**Does Azure Maps Web SDK natively support Weather services integration?**
The Azure Maps Web SDK provides a services module. The services module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript. To get started, see our [documentation](./how-to-use-services-module.md).
-**Does Azure Maps Android SDK natively support Weather services (Preview) integration?**
+**Does Azure Maps Android SDK natively support Weather services integration?**
The Azure Maps Android SDKs supports Mercator tile layers, which can have x/y/zoom notation, quad key notation, or EPSG 3857 bounding box notation.
If this FAQ doesn't answer your question, you can contact us through the following channels:
* Microsoft Support. To create a new support request, in the [Azure portal](https://portal.azure.com/), on the Help tab, select the **Help +** support button, and then select **New support request**. * [Azure Maps UserVoice](https://feedback.azure.com/forums/909172-azure-maps) to submit feature requests.
-Learn how to request real-time and forecasted weather data using Azure Maps Weather services (Preview):
+Learn how to request real-time and forecasted weather data using Azure Maps Weather services:
> [!div class="nextstepaction"] > [Request Real-time weather data ](how-to-request-weather-data.md)
-Azure Maps Weather services (Preview) concepts article:
+Azure Maps Weather services concepts article:
> [!div class="nextstepaction"] > [Weather services concepts](weather-coverage.md)
-Explore the Azure Maps Weather services (Preview) API documentation:
+Explore the Azure Maps Weather services API documentation:
> [!div class="nextstepaction"] > [Azure Maps Weather services](/rest/api/maps/weather)
azure-monitor Diagnostics Extension Stream Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/diagnostics-extension-stream-event-hubs.md
You can use a variety of methods to validate that data is being sent to the even
* [Event Hubs overview](../../event-hubs/event-hubs-about.md) * [Create an event hub](../../event-hubs/event-hubs-create.md)
-* [Event Hubs FAQ](../../event-hubs/event-hubs-faq.md)
+* [Event Hubs FAQ](../../event-hubs/event-hubs-faq.yml)
<!-- Images. --> [0]: ../../event-hubs/media/event-hubs-streaming-azure-diags-data/dashboard.png
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-api-switch.md
With request body containing the below JSON:
Here is an example of using [ARMClient](https://github.com/projectkudu/ARMClient), an open-source command-line tool, that simplifies invoking the above API call: ```powershell
-$switchJSON = '{"scheduledQueryRulesEnabled": "true"}'
+$switchJSON = '{"scheduledQueryRulesEnabled": true}'
armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview $switchJSON ```
If the Log Analytics workspace wasn't switched, the response is:
- Learn about the [Azure Monitor - Log Alerts](./alerts-unified-log.md). - Learn how to [manage your log alerts using the API](alerts-log-create-templates.md). - Learn how to [manage log alerts using PowerShell](./alerts-log.md#managing-log-alerts-using-powershell).-- Learn more about the [Azure Alerts experience](./alerts-overview.md).
+- Learn more about the [Azure Alerts experience](./alerts-overview.md).
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/app-expression.md
Last updated 05/09/2019
The `app` expression is used in an Azure Monitor query to retrieve data from a specific Application Insights app in the same resource group, another resource group, or another subscription. This is useful to include application data in an Azure Monitor log query and to query data across multiple applications in an Application Insights query. > [!IMPORTANT]
-> The app() expression is not used if you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md) since log data is stored in a Log Analytics workspace. Use the log() expression to write a query that includes application in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query.
+> The app() expression is not used if you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md) since log data is stored in a Log Analytics workspace. Use the workspace() expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross-workspace query.
## Syntax
union
- See the [workspace expression](../logs/workspace-expression.md) to refer to a Log Analytics workspace. - Read about how [Azure Monitor data](./log-query-overview.md) is stored.-- Access full documentation for the [Kusto query language](/azure/kusto/query/).
+- Access full documentation for the [Kusto query language](/azure/kusto/query/).
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-security.md
Log Analytics has an incident management process that all Microsoft services adh
* Operators working on the Microsoft Azure service have addition training obligations surrounding their access to sensitive systems hosting customer data. * Microsoft security response personnel receive specialized training for their roles
-If loss of any customer data occurs, we notify each customer within one day. However, customer data loss has never occurred with the service.
+Although such an event is very rare, Microsoft will notify each customer within one day if a significant loss of any customer data occurs.
For more information about how Microsoft responds to security incidents, see [Microsoft Azure Security Response in the Cloud](https://gallery.technet.microsoft.com/Azure-Security-Response-in-dd18c678/file/150826/4/Microsoft%20Azure%20Security%20Response%20in%20the%20cloud.pdf).
You can use these additional security features to further secure your Azure Moni
## Next steps * Learn how to collect data with Log Analytics for your Azure VMs following the [Azure VM quickstart](../vm/quick-collect-azurevm.md).
-* If you are looking to collect data from physical or virtual Windows or Linux computers in your environment, see the [Quickstart for Linux computers](../vm/quick-collect-linux-computer.md) or [Quickstart for Windows computers](../vm/quick-collect-windows-computer.md)
+* If you are looking to collect data from physical or virtual Windows or Linux computers in your environment, see the [Quickstart for Linux computers](../vm/quick-collect-linux-computer.md) or [Quickstart for Windows computers](../vm/quick-collect-windows-computer.md)
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
# Create an SMB volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. This article shows you how to create an SMB3 volume.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
+
+This article shows you how to create an SMB3 volume. For NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volumes.md). For dual-protocol volumes, see [Create a dual-protocol volume](create-volumes-dual-protocol.md).
## Before you begin
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
# Create an NFS volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. This article shows you how to create an NFS volume.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol (NFSv3 and SMB). A volume's capacity consumption counts against its pool's provisioned capacity.
+
+This article shows you how to create an NFS volume. For SMB volumes, see [Create an SMB volume](azure-netapp-files-create-volumes-smb.md). For dual-protocol volumes, see [Create a dual-protocol volume](create-volumes-dual-protocol.md).
## Before you begin * You must have already set up a capacity pool.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na ms.devlang: na Previously updated : 09/08/2020 Last updated : 04/09/2021 # Guidelines for Azure NetApp Files network planning
The features below are currently unsupported for Azure NetApp Files:
The following network restrictions apply to Azure NetApp Files:
-* The number of IPs in use in a VNet with Azure NetApp Files (including peered VNets) cannot exceed 1000. We are working towards increasing this limit to meet customer scale demands.
+* The number of IPs in use in a VNet with Azure NetApp Files (including *immediately* peered VNets) cannot exceed 1000. We are working towards increasing this limit to meet customer scale demands.
* In each Azure Virtual Network (VNet), only one subnet can be delegated to Azure NetApp Files.
In the topology illustrated above, the on-premises network is connected to a hub
## Next steps
-[Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md)
+[Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md)
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na ms.devlang: na Previously updated : 04/05/2021 Last updated : 04/08/2021 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
2. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
-3. Ensure that the Active Directory LDAP server is up and running on the Active Directory. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
+3. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
-4. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Follow [Manage LDAP POSIX Attributes](create-volumes-dual-protocol.md#manage-ldap-posix-attributes) to set the required attributes.
+4. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Set the attributes for LDAP users and LDAP groups as follows:
+
+ * Required attributes for LDAP users:
+ `uid: Alice`, `uidNumber: 139`, `gidNumber: 555`, `objectClass: user`
+ * Required attributes for LDAP groups:
+ `objectClass: group`, `gidNumber: 555`
+
+   You can manage POSIX attributes by using the Active Directory Users and Computers MMC snap-in, or set them from PowerShell as shown in the sketch after this list. The following example shows the Active Directory Attribute Editor:
+
+ ![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
5. If you want to configure an LDAP-integrated Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
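A minimal sketch of step 4 done from PowerShell instead of the MMC snap-in, assuming the ActiveDirectory module (RSAT) is installed on a domain-joined machine. The user `Alice` and the numeric IDs match the example values above; the group name `NfsGroup` is a placeholder to adjust for your environment.

```powershell
# Requires the ActiveDirectory module (RSAT) on a domain-joined machine.
Import-Module ActiveDirectory

# Set the POSIX attributes required for an LDAP NFS user (example values from step 4).
Set-ADUser -Identity 'Alice' -Replace @{ uid = 'Alice'; uidNumber = 139; gidNumber = 555 }

# Set the POSIX attribute required for the LDAP group. 'NfsGroup' is a placeholder name.
Set-ADGroup -Identity 'NfsGroup' -Replace @{ gidNumber = 555 }

# Verify the values before mounting the volume from an LDAP-enabled NFS client.
Get-ADUser -Identity 'Alice' -Properties uid, uidNumber, gidNumber |
    Select-Object SamAccountName, uid, uidNumber, gidNumber
```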
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
# Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
-Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol. This article shows you how to create a volume that uses the dual protocol of NFSv3 and SMB with support for LDAP user mapping.
+Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3, or dual protocol. This article shows you how to create a volume that uses the dual protocol of NFSv3 and SMB with support for LDAP user mapping.
+To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volumes.md). To create SMB volumes, see [Create an SMB volume](azure-netapp-files-create-volumes-smb.md).
## Before you begin
You can manage POSIX attributes such as UID, Home Directory, and other values by
You need to set the following attributes for LDAP users and LDAP groups: * Required attributes for LDAP users:
- `uid`: Alice, `uidNumber`: 139, `gidNumber`: 555, `objectClass`: posixAccount
+ `uid: Alice`, `uidNumber: 139`, `gidNumber: 555`, `objectClass: posixAccount`
* Required attributes for LDAP groups:
- `objectClass`: "posixGroup", `gidNumber`: 555
+ `objectClass: posixGroup`, `gidNumber: 555`
## Configure the NFS client
azure-relay Relay Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-api-overview.md
repository.
To learn more about Azure Relay, visit these links: * [What is Azure Relay?](relay-what-is-it.md)
-* [Relay FAQ](relay-faq.md)
+* [Relay FAQ](relay-faq.yml)
azure-relay Relay Create Namespace Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-create-namespace-portal.md
Congratulations! You have now created a relay namespace.
## Next steps
-* [Relay FAQ](relay-faq.md)
+* [Relay FAQ](relay-faq.yml)
* [Get started with .NET](relay-hybrid-connections-dotnet-get-started.md) * [Get started with Node](relay-hybrid-connections-node-get-started.md)
azure-relay Relay Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-exceptions.md
There are two common causes for this error:
Occasionally, the Relay service might experience delays in processing requests. This might happen, for example, during periods of high traffic. If this occurs, retry your operation after a delay, until the operation is successful. If the same operation continues to fail after multiple attempts, check the [Azure service status site](https://azure.microsoft.com/status/) to see if there are known service outages. ## Next steps
-* [Azure Relay FAQs](relay-faq.md)
+* [Azure Relay FAQs](relay-faq.yml)
* [Create a relay namespace](relay-create-namespace-portal.md) * [Get started with Azure Relay and .NET](relay-hybrid-connections-dotnet-get-started.md) * [Get started with Azure Relay and Node](relay-hybrid-connections-node-get-started.md)
azure-relay Relay Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-faq.md
- Title: Azure Relay FAQs | Microsoft Docs
-description: This article provides answers to some of the frequently asked questions about the Azure Relay service.
- Previously updated : 06/23/2020--
-# Azure Relay FAQs
-
-This article answers some frequently asked questions (FAQs) about [Azure Relay](https://azure.microsoft.com/services/service-bus/). For general Azure pricing and support information, see the [Azure Support FAQs](https://azure.microsoft.com/support/faq/).
---
-## General questions
-### What is Azure Relay?
-The [Azure Relay service](relay-what-is-it.md) facilitates your hybrid applications by helping you more securely expose services that reside within a corporate enterprise network to the public cloud. You can expose the services without opening a firewall connection, and without requiring intrusive changes to a corporate network infrastructure.
-
-### What is a Relay namespace?
-A [namespace](relay-create-namespace-portal.md) is a scoping container that you can use to address Relay resources within your application. You must create a namespace to use Relay. This is one of the first steps in getting started.
-
-### What happened to Service Bus Relay service?
-The previously named Service Bus Relay service is now called [Azure Relay](service-bus-relay-tutorial.md). You can continue to use this service as usual. The Hybrid Connections feature is an updated version of a service that's been transplanted from Azure BizTalk Services. WCF Relay and Hybrid Connections both continue to be supported.
-
-## Pricing
-This section answers some frequently asked questions about the Relay pricing structure. You also can see the [Azure Support FAQs](https://azure.microsoft.com/support/faq/) for general Azure pricing information. For complete information about Relay pricing, see [Service Bus pricing details][Pricing overview].
-
-### How do you charge for Hybrid Connections and WCF Relay?
-For complete information about Relay pricing, see the [Hybrid Connections and WCF Relays][Pricing overview] table on the Service Bus pricing details page. In addition to the prices noted on that page, you are charged for associated data transfers for egress outside of the datacenter in which your application is provisioned.
-
-### How am I billed for Hybrid Connections?
-Here are three example billing scenarios for Hybrid Connections:
-
-* Scenario 1:
- * You have a single listener, such as an instance of the Hybrid Connections Manager installed and continuously running for the entire month.
- * You send 3 GB of data across the connection during the month.
- * Your total charge is $5.
-* Scenario 2:
- * You have a single listener, such as an instance of the Hybrid Connections Manager installed and continuously running for the entire month.
- * You send 10 GB of data across the connection during the month.
- * Your total charge is $7.50. That's $5 for the connection and first 5 GB + $2.50 for the additional 5 GB of data.
-* Scenario 3:
- * You have two instances, A and B, of the Hybrid Connections Manager installed and continuously running for the entire month.
- * You send 3 GB of data across connection A during the month.
- * You send 6 GB of data across connection B during the month.
- * Your total charge is $10.50. That's $5 for connection A + $5 for connection B + $0.50 (for the sixth gigabyte on connection B).
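The three scenarios above follow a single pattern: each listener connection carries a flat monthly charge that includes the first 5 GB, and data beyond that is charged per additional gigabyte. The sketch below reproduces that arithmetic; the $5 and $0.50 figures are taken from the scenarios and may not reflect current pricing, so check the Service Bus pricing details page for actual rates.

```powershell
# Reproduce the Hybrid Connections billing arithmetic from the scenarios above.
# Assumed rates: $5 per listener connection per month (includes the first 5 GB),
# plus $0.50 per additional GB. Check the Service Bus pricing page for current rates.
function Get-HybridConnectionCharge {
    param(
        # Gigabytes sent across each listener connection during the month.
        [double[]] $GbPerConnection
    )

    $total = 0.0
    foreach ($gb in $GbPerConnection) {
        $overageGb = [Math]::Max(0.0, $gb - 5)
        $total += 5 + (0.5 * $overageGb)
    }
    return $total
}

Get-HybridConnectionCharge -GbPerConnection 3      # Scenario 1: 5
Get-HybridConnectionCharge -GbPerConnection 10     # Scenario 2: 7.5
Get-HybridConnectionCharge -GbPerConnection 3, 6   # Scenario 3: 10.5
```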
--
-### How are hours calculated for Relay?
-
-WCF Relay is available only in Standard tier namespaces. Pricing and [connection quotas](../service-bus-messaging/service-bus-quotas.md) for relays otherwise have not changed. This means that relays continue to be charged based on the number of messages (not operations) and relay hours. For more information, see the ["Hybrid Connections and WCF Relays"](https://azure.microsoft.com/pricing/details/service-bus/) table on the pricing details page.
-
-### What if I have more than one listener connected to a specific relay?
-In some cases, a single relay might have multiple connected listeners. A relay is considered open when at least one relay listener is connected to it. Adding listeners to an open relay results in additional relay hours. The number of relay senders (clients that invoke or send messages to relays) that are connected to a relay does not affect the calculation of relay hours.
-
-### How is the messages meter calculated for WCF Relays?
-(**This applies only to WCF relays. Messages are not a cost for Hybrid Connections.**)
-
-In general, billable messages for relays are calculated by using the same method that is used for brokered entities (queues, topics, and subscriptions), described previously. However, there are some notable differences.
-
-Sending a message to an Azure Relay is treated as a "full through" send to the relay listener that receives the message. It is not treated as a send operation to the Azure Relay, followed by a delivery to the relay listener. A request-reply style service invocation (of up to 64 KB) against a relay listener results in two billable messages: one billable message for the request and one billable message for the response (assuming the response is also 64 KB or smaller). This is different from using a queue to mediate between a client and a service. If you use a queue to mediate between a client and a service, the same request-reply pattern requires a request send to the queue, followed by a dequeue/delivery from the queue to the service. This is followed by a response send to another queue, and a dequeue/delivery from that queue to the client. Using the same size assumptions throughout (up to 64 KB), the mediated queue pattern results in 4 billable messages. You'd be billed for twice the number of messages to implement the same pattern that you accomplish by using relay. Of course, there are benefits to using queues to achieve this pattern, such as durability and load leveling. These benefits might justify the additional expense.
-
-Relays that are opened by using the **netTCPRelay** WCF binding treat messages not as individual messages, but as a stream of data flowing through the system. When you use this binding, only the sender and listener have visibility into the framing of the individual messages sent and received. For relays that use the **netTCPRelay** binding, all data is treated as a stream for calculating billable messages. In this case, Service Bus calculates the total amount of data sent or received via each individual relay on a 5-minute basis. Then, it divides that total amount of data by 64 KB to determine the number of billable messages for that relay during that time period.
-
-## Quotas
-| Quota name | Scope | Notes | Value |
-| | | | |
-| Concurrent listeners on a relay |Entity (hybrid connection or WCF relay) |Subsequent requests for additional connections are rejected and an exception is received by the calling code. |25 |
-| Concurrent relay connections per all relay endpoints in a service namespace |Namespace |- |5,000 |
-| Relay endpoints per service namespace |Namespace |- |10,000 |
-| Message size for [NetOnewayRelayBinding](/dotnet/api/microsoft.servicebus.netonewayrelaybinding) and [NetEventRelayBinding](/dotnet/api/microsoft.servicebus.neteventrelaybinding) relays |Namespace |Incoming messages that exceed these quotas are rejected and an exception is received by the calling code. |64 KB |
-| Message size for [HttpRelayTransportBindingElement](/dotnet/api/microsoft.servicebus.httprelaytransportbindingelement) and [NetTcpRelayBinding](/dotnet/api/microsoft.servicebus.nettcprelaybinding) relays |Namespace |No limit on message size. |Unlimited |
-
-### Does Relay have any usage quotas?
-By default, for any cloud service, Microsoft sets an aggregate monthly usage quota that is calculated across all of a customer's subscriptions. We understand that at times your needs might exceed these limits. You can contact customer service at any time, so we can understand your needs and adjust these limits appropriately. For Service Bus, the aggregate usage quotas are as follows:
-
-* 5 billion messages
-* 2 million relay hours
-
-Although we reserve the right to disable an account that exceeds its monthly usage quotas, we provide e-mail notification, and we make multiple attempts to contact the customer before taking any action. Customers that exceed these quotas are still responsible for excess charges.
-
-### Naming restrictions
-A Relay namespace name must be between 6 and 50 characters in length.
-
-## Subscription and namespace management
-### How do I migrate a namespace to another Azure subscription?
-
-To move a namespace from one Azure subscription to another subscription, you can either use the [Azure portal](https://portal.azure.com) or use PowerShell commands. To move a namespace to another subscription, the namespace must already be active. The user running the commands must be an Administrator user on both the source and target subscriptions.
-
-#### Azure portal
-
-To use the Azure portal to migrate Azure Relay namespaces from one subscription to another subscription, see [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md#use-the-portal).
-
-#### PowerShell
-
-To use PowerShell to move a namespace from one Azure subscription to another subscription, use the following sequence of commands. To execute this operation, the namespace must already be active, and the user running the PowerShell commands must be an Administrator user on both the source and target subscriptions.
-
-```azurepowershell-interactive
-# Create a new resource group in the target subscription.
-Select-AzSubscription -SubscriptionId 'ffffffff-ffff-ffff-ffff-ffffffffffff'
-New-AzResourceGroup -Name 'targetRG' -Location 'East US'
-
-# Move the namespace from the source subscription to the target subscription.
-Select-AzSubscription -SubscriptionId 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
-$res = Find-AzResource -ResourceNameContains mynamespace -ResourceType 'Microsoft.ServiceBus/namespaces'
-Move-AzResource -DestinationResourceGroupName 'targetRG' -DestinationSubscriptionId 'ffffffff-ffff-ffff-ffff-ffffffffffff' -ResourceId $res.ResourceId
-```
-
-## Troubleshooting
-### What are some of the exceptions generated by Azure Relay APIs, and suggested actions you can take?
-For a description of common exceptions and suggested actions you can take, see [Relay exceptions][Relay exceptions].
-
-### What is a shared access signature, and which languages can I use to generate a signature?
-Shared Access Signatures (SAS) are an authentication mechanism based on SHA-256 secure hashes or URIs. For information about how to generate your own signatures in Node.js, PHP, Python, Java, C, and C#, see [Service Bus authentication with shared access signatures][Shared Access Signatures].
-
-### Is it possible to allow only some relay endpoints?
-Yes. The relay client makes connections to the Azure Relay service by using fully qualified domain names. Customers can add an entry for `*.servicebus.windows.net` on firewalls that support DNS approval listing.
-
-## Next steps
-* [Create a namespace](relay-create-namespace-portal.md)
-* [Get started with .NET](relay-hybrid-connections-dotnet-get-started.md)
-* [Get started with Node](relay-hybrid-connections-node-get-started.md)
-
-[Pricing overview]: https://azure.microsoft.com/pricing/details/service-bus/
-[Relay exceptions]: relay-exceptions.md
-[Shared Access Signatures]: ../service-bus-messaging/service-bus-sas.md
azure-relay Relay Hybrid Connections Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-hybrid-connections-protocol.md
header. If the header is present, the response is from the listener.
## Next steps
-* [Relay FAQ](relay-faq.md)
+* [Relay FAQ](relay-faq.yml)
* [Create a namespace](relay-create-namespace-portal.md) * [Get started with .NET](relay-hybrid-connections-dotnet-get-started.md) * [Get started with Node](relay-hybrid-connections-node-get-started.md)
azure-relay Relay Port Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-port-settings.md
Hybrid Connections uses WebSockets on port 443 with TLS as the underlying transp
## Next steps To learn more about Azure Relay, visit these links: * [What is Azure Relay?](relay-what-is-it.md)
-* [Relay FAQ](relay-faq.md)
+* [Relay FAQ](relay-faq.yml)
azure-relay Relay What Is It https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-what-is-it.md
The following diagram shows you how incoming relay requests are handled by the A
* [Get started with .NET HTTP Requests](relay-hybrid-connections-http-requests-dotnet-get-started.md) * [Get started with Node WebSockets](relay-hybrid-connections-node-get-started.md) * [Get started with Node HTTP Requests](relay-hybrid-connections-http-requests-node-get-started.md)
-* [Relay FAQ](relay-faq.md)
+* [Relay FAQ](relay-faq.yml)
azure-resource-manager Microsoft Solutions Armapicontrol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md
-# Microsoft.Common.ArmApiControl UI element
+# Microsoft.Solutions.ArmApiControl UI element
ArmApiControl lets you get results from an Azure Resource Manager API operation. Use the results to populate dynamic content in other controls.
azure-resource-manager Tutorial Create Managed App With Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/tutorial-create-managed-app-with-custom-provider.md
You can go to managed application instance and perform **custom action** in "Ove
## Looking for help
-If you have questions about Azure Managed Applications, try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managedapps). A similar question may have already been asked and answered, so check first before posting. Add the tag `azure-managedapps` to get a fast response!
+If you have questions about Azure Managed Applications, you can try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managed-app) with the tag azure-managed-app, or on [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-managed-applications.html) with the tag azure-managed-application. A similar question may have already been asked and answered, so check first before posting. Use the respective tag to get a faster response.
## Next steps
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-content-reference-guide.md
The following document includes links to Azure examples showing how to connect a
|[PHP](connect-query-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use Transact-SQL statements to query data.| |[Python](connect-query-python.md)|This quickstart demonstrates how to use Python to connect to a database and use Transact-SQL statements to query data. | |[Ruby](connect-query-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use Transact-SQL statements to query data.|
-|[R](connect-query-r.md)|This quickstart demonstrates how to use R with Azure SQL Database Machine Learning Services to create a program to connect to a database in Azure SQL Database and use Transact-SQL statements to query data.|
||| ## Get server connection information
azure-sql Connect Query R https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-r.md
- Title: Use R with Azure SQL Database Machine Learning Services (preview) to query a database -
-description: This article shows you how to use an R script with Azure SQL Database Machine Learning Services to connect to a database in Azure SQL Database and query it using Transact-SQL statements.
--------- Previously updated : 05/29/2019---
-# Quickstart: Use R with Azure SQL Database Machine Learning Services (preview) to query a database
--
-In this quickstart, you use R with Azure SQL Database Machine Learning Services to connect to a database in Azure SQL Database and use T-SQL statements to query data.
--
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An [Azure SQL Database](single-database-create-quickstart.md)-- [Machine Learning Services](../managed-instance/machine-learning-services-overview.md) with R enabled.-- [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms) (SSMS)-
-> [!IMPORTANT]
-> The scripts in this article are written to use the **Adventure Works** database.
-
-Machine Learning Services with R is a feature of Azure SQL Database used for executing in-database R scripts. For more information, see the [R Project](https://www.r-project.org/).
-
-## Get the SQL Server connection information
-
-Get the connection information you need to connect to the database in Azure SQL Database. You'll need the fully qualified server name or host name, database name, and login information for the upcoming procedures.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Navigate to the **SQL Databases** or **SQL Managed Instances** page.
-
-3. On the **Overview** page, review the fully qualified server name next to **Server name** for a database in Azure SQL Database or the fully qualified server name next to **Host** for a managed instance in Azure SQL Managed Instance. To copy the server name or host name, hover over it and select the **Copy** icon.
-
-## Create code to query your database
-
-1. Open **SQL Server Management Studio** and connect to your database.
-
- If you need help connecting, see [Quickstart: Use SQL Server Management Studio to connect and query a database in Azure SQL Database](connect-query-ssms.md).
-
-1. Pass the complete R script to the [sp_execute_external_script](/sql/relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql) stored procedure.
-
- The script is passed through the `@script` argument. Everything inside the `@script` argument must be valid R code.
-
- >[!IMPORTANT]
- >The code in this example uses the sample AdventureWorksLT data, which you can choose as source when creating your database. If your database has different data, use tables from your own database in the SELECT query.
-
- ```sql
- EXECUTE sp_execute_external_script
- @language = N'R'
- , @script = N'OutputDataSet <- InputDataSet;'
- , @input_data_1 = N'SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName FROM [SalesLT].[ProductCategory] pc JOIN [SalesLT].[Product] p ON pc.productcategoryid = p.productcategoryid'
- ```
-
- > [!NOTE]
- > If you get any errors, it might be because the public preview of Machine Learning Services (with R) is not enabled for your database. See [Prerequisites](#prerequisites) above.
-
-## Run the code
-
-1. Execute the [sp_execute_external_script](/sql/relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql) stored procedure.
-
-1. Verify that the top 20 Category/Product rows are returned in the **Messages** window.
-
-## Next steps
--- [Design your first database in Azure SQL Database](design-first-database-tutorial.md)-- [Azure SQL Database Machine Learning Services (with R)](../managed-instance/machine-learning-services-overview.md)-- [Create and run simple R scripts in Azure SQL Database Machine Learning Services (preview)](/sql/machine-learning/tutorials/quickstart-r-create-script?context=%2fazure%2fazure-sql%2fmanaged-instance%2fcontext%2fml-context)
azure-sql Sql Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-vulnerability-assessment.md
Previously updated : 02/11/2021 Last updated : 04/09/2021 tags: azure-synapse # SQL vulnerability assessment helps you identify database vulnerabilities
Select **Export Scan Results** to create a downloadable Excel report of your sca
Select **Scan History** in the vulnerability assessment pane to view a history of all scans previously run on this database. Select a particular scan in the list to view the detailed results of that scan.
+### Disable specific findings from Azure Security Center (preview)
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings from benchmarks that aren't of interest for a defined scope
+
+> [!IMPORTANT]
+> To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](../../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+To create a rule:
+
+1. From the recommendations detail page for **Vulnerability assessment findings on your SQL servers on machines should be remediated**, select **Disable rule**.
+
+1. Select the relevant scope.
+
+1. Define your criteria. You can use any of the following criteria:
+ - Finding ID
+ - Severity
+ - Benchmarks
+
+ :::image type="content" source="../../security-center/media/defender-for-sql-on-machines-vulnerability-assessment/disable-rule-vulnerability-findings-sql.png" alt-text="Create a disable rule for VA findings on SQL servers on machines":::
+
+1. Select **Apply rule**. Changes might take up to 24 hours to take effect.
+
+1. To view, override, or delete a rule:
+
+ 1. Select **Disable rule**.
+
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+
+ :::image type="content" source="../../security-center/media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule":::
+
+ 1. To view or delete the rule, select the ellipsis menu ("...").
+ ## Manage vulnerability assessments programmatically ### Using Azure PowerShell
To handle Boolean types as true/false, set the baseline result with binary input
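For the PowerShell note above, the following is a minimal sketch of approving a Boolean-type finding as a baseline with binary input. It assumes the Az.Sql cmdlet `Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline` and its `-BaselineResult` parameter; the resource names and rule ID are placeholders.

```azurepowershell-interactive
# Minimal sketch: approve the current state of a Boolean-type rule as a baseline,
# using binary input ("1" = true, "0" = false). All names and the rule ID are placeholders.
Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline `
    -ResourceGroupName 'myResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'mydatabase' `
    -RuleId 'VA1143' `
    -BaselineResult @(, @('1'))
```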
- Learn more about [Azure Defender for SQL](azure-defender-for-sql.md). - Learn more about [data discovery and classification](data-discovery-and-classification-overview.md).-- Learn more about [Storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
+- Learn more about [Storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
azure-sql Instance Pools Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/instance-pools-configure.md
$databases = Get-AzSqlInstanceDatabase -InstanceName "pool-mi-001" -ResourceGrou
> [!NOTE]
-> There is a limit of 100 databases per pool (not per instance).
+> To check the limits on the number of databases per instance pool and per managed instance deployed inside the pool, see the [Instance pool resource limits](instance-pools-overview.md#resource-limitations) section.
## Scale
If there are multiple databases, repeat the process for each database.
- For a quickstart that creates a managed instance and restores a database from a backup file, see [Create a managed instance](instance-create-quickstart.md). - For a tutorial about using Azure Database Migration Service for migration, see [SQL Managed Instance migration using Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md). - For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting intelligence, see [Monitor Azure SQL Managed Instance using Azure SQL Analytics](../../azure-monitor/insights/azure-sql.md).-- For pricing information, see [SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/sql-database/managed/).
+- For pricing information, see [SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/sql-database/managed/).
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/management-operations-overview.md
The following tables summarize operations and typical overall durations, based o
|Operation |Long-running segment |Estimated duration | |||| |Instance property change (admin password, Azure AD login, Azure Hybrid Benefit flag)|N/A|Up to 1 minute.|
-|Instance storage scaling up/down (General Purpose service tier)|Attaching database files|90% of operations finish in 5 minutes.|
+|Instance storage scaling up/down (General Purpose service tier)|No long-running segment<sup>1</sup>|99% of operations finish in 5 minutes.|
|Instance storage scaling up/down (Business Critical service tier)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance compute (vCores) scaling up and down (General Purpose)|- Virtual cluster resizing<br>- Attaching database files|90% of operations finish in 2.5 hours.| |Instance compute (vCores) scaling up and down (Business Critical)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| | | |
+<sup>1</sup> Scaling General Purpose managed instance storage will not cause a failover at the end of the operation. In this case, the operation consists of updating metadata and propagating the response for the submitted request.
+ **Category: Delete** |Operation |Long-running segment |Estimated duration |
The following tables summarize operations and typical overall durations, based o
SQL Managed Instance **is available during update operations**, except a short downtime caused by the failover that happens at the end of the update. It typically lasts up to 10 seconds even in case of interrupted long-running transactions, thanks to [accelerated database recovery](../accelerated-database-recovery.md).
+> [!NOTE]
+> Scaling General Purpose managed instance storage will not cause a failover at the end of the update.
+ SQL Managed Instance is not available to client applications during deployment and deletion operations. > [!IMPORTANT]
azure-sql Replication Transactional Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/replication-transactional-overview.md
In this configuration, a database in Azure SQL Database or Azure SQL Managed Ins
[Active geo-replication](../database/active-geo-replication-overview.md) is not supported with a SQL Managed Instance using transactional replication. Instead of active geo-replication, use [Auto-failover groups](../database/auto-failover-group-overview.md), but note that the publication has to be [manually deleted](transact-sql-tsql-differences-sql-server.md#replication) from the primary managed instance and re-created on the secondary SQL Managed Instance after failover.
-If geo-replication is enabled on a **publisher** or **distributor** SQL Managed Instance in a [failover group](../database/auto-failover-group-overview.md), the SQL Managed Instance administrator must clean up all publications on the old primary and reconfigure them on the new primary after a failover occurs. The following activities are needed in this scenario:
+If a **publisher** or **distributor** SQL Managed Instance is in a [failover group](../database/auto-failover-group-overview.md), the SQL Managed Instance administrator must clean up all publications on the old primary and reconfigure them on the new primary after a failover occurs. The following activities are needed in this scenario:
1. Stop all replication jobs running on the database, if there are any. 1. Drop subscription metadata from publisher by running the following script on publisher database:
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: Access to Azure SQL Database
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Microsoft Access database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Access (SSMA for Access).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Microsoft Access database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Access (SSMA for Access).
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Last updated 11/06/2020
# Migration guide: IBM Db2 to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your IBM Db2 databases to Azure SQL Database, by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Db2.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your IBM Db2 databases to Azure SQL Database, by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Db2.
For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: MySQL to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your MySQL database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for MySQL (SSMA for MySQL).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your MySQL database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for MySQL (SSMA for MySQL).
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Last updated 08/25/2020
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Oracle (SSMA for Oracle).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for Oracle (SSMA for Oracle).
For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
Last updated 03/19/2021
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SAP Adapter Server Enterprise (ASE) databases to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for SAP Adapter Server Enterprise.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/en-us/migration/sql-server/) Assistant for SAP Adaptive Server Enterprise.
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
For more migration information, see the [migration overview](sql-server-to-sql-d
## Prerequisites
-For your [SQL Server migration](https://azure.microsoft.com/migration/migration-journey) to Azure SQL Database, make sure you have the following prerequisites:
+For your [SQL Server migration](https://azure.microsoft.com/en-us/migration/sql-server/) to Azure SQL Database, make sure you have the following prerequisites:
- A chosen [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools . - [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) installed on a machine that can connect to your source SQL Server.
azure-vmware Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/windows-server-failover-cluster.md
Title: Windows Server Failover Cluster on Azure VMware Solution vSAN with native shared disks
-description: Set up Windows Server Failover Cluster (WSFC) on Azure VMware Solution and take advantage of solutions requiring WSFC capability.
+ Title: Configure Windows Server Failover Cluster on Azure VMware Solution vSAN
+description: Set up Windows Server Failover Cluster (WSFC) on Azure VMware Solution vSAN with native shared disks.
Previously updated : 03/09/2021 Last updated : 04/09/2021
-# Windows Server Failover Cluster on Azure VMware Solution vSAN with native shared disks
+# Configure Windows Server Failover Cluster on Azure VMware Solution vSAN
-In this article, we'll walk through setting up Windows Server Failover Cluster on Azure VMware Solution. The implementation in this article is for proof of concept and pilot purposes. We recommend using a Cluster-in-a-Box (CIB) configuration until placement policies are available.
+In this article, you'll learn how to set up Windows Server Failover Cluster on Azure VMware Solution vSAN with native shared disks.
+
+>[!IMPORTANT]
+>The implementation in this article is for proof of concept and pilot purposes. We recommend using a Cluster-in-a-Box (CIB) configuration until placement policies become available.
Windows Server Failover Cluster (WSFC), previously known as Microsoft Service Cluster Service (MSCS), is a feature of the Windows Server Operating System (OS). WSFC is a business-critical feature, and for many applications is required. For example, WSFC is required for the following configurations:
Windows Server Failover Cluster (WSFC), previously known as Microsoft Service Cl
You can host the WSFC cluster on different Azure VMware Solution instances, known as Cluster-Across-Box (CAB). You can also place the WSFC cluster on a single Azure VMware Solution node. This configuration is known as Cluster-in-a-Box (CIB). We don't recommend using a CIB solution for a production implementation. Were the single Azure VMware Solution node to fail, all WSFC cluster nodes would be powered off, and the application would experience downtime. Azure VMware Solution requires a minimum of three nodes in a private cloud cluster.
-It's important to deploy a supported WSFC configuration. You'll want your solution to be supported on vSphere and with Azure VMware Solution. VMware provides a detailed document about WSFC on vSphere 6.7, [Setup for Failover
-Clustering and Microsoft
-Cluster Service](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-vcenter-server-67-setup-mscs.pdf).
+It's important to deploy a supported WSFC configuration. You'll want your solution to be supported on vSphere and with Azure VMware Solution. VMware provides a detailed document about WSFC on vSphere 6.7, [Setup for Failover Clustering and Microsoft Cluster Service](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-vcenter-server-67-setup-mscs.pdf).
This article focuses on WSFC on Windows Server 2016 and Windows Server 2019. Older Windows Server versions are out of [mainstream support](https://support.microsoft.com/lifecycle/search?alpha=windows%20server) and so we don't consider them here.
Azure VMware Solution provides native support for virtualized WSFC. It supports
The following diagram illustrates the architecture of WSFC virtual nodes on an Azure VMware Solution private cloud. It shows where Azure VMware Solution resides, including the WSFC virtual servers (red box), in relation to the broader Azure platform. This diagram illustrates a typical hub-spoke architecture, but a similar setup is possible with the use of Azure Virtual WAN. Both offer all the value other Azure services can bring you.
-[![Diagram showing the architecture of WSFC virtual nodes on an Azure VMware Solution private cloud.](media/windows-server-failover-cluster/windows-server-failover-architecture.png)](media/windows-server-failover-cluster/windows-server-failover-architecture.png#lightbox)
## Supported configurations
Currently, the following configurations are supported:
- Up to four PVSCSI adapters per VM - Up to 64 disks per PVSCSI adapter
-## Virtual Machine configuration requirements
+## Virtual machine configuration requirements
### WSFC node configuration parameters
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 09/11/2019 Last updated : 04/07/2021 # Back up multiple SQL Server VMs from the Recovery Services vault
In this article, you'll learn how to:
> * Discover databases and set up backups. > * Set up auto-protection for databases.
->[!NOTE]
->**Soft delete for SQL server in Azure VM and soft delete for SAP HANA in Azure VM workloads** is now available in preview.<br>
->To sign up for the preview, write to us at AskAzureBackupTeam@microsoft.com
- ## Prerequisites Before you back up a SQL Server database, check the following criteria:
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| Backup Management type | Supported | Supported Regions | | - | | -- |
-| Azure VM | Supported for Azure VMs with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
-| SQL /SAP HANA | In preview | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
+| Azure VM | Supported for Azure VMs with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East, UG IOWA, and UG Virginia. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
+| SQL /SAP HANA | In preview | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East, UG IOWA, and UG Virginia. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
| MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A |
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-overview.md
Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 01/07/2021 Last updated : 04/09/2021 # Overview of Azure Disk Backup
Azure Disk backup is an agentless and crash consistent solution that uses [incre
Azure Disk backup solution is useful in the following scenarios: -- Need for frequent backups per day without application being quiescent-- Apps running in a cluster scenario: both Windows Server Failover Cluster and Linux clusters are writing to shared disks-- Specific need for agentless backup because of security or performance concerns on the application
+- Need for frequent backups per day without application being quiescent.
+- Apps running in a cluster scenario: both Windows Server Failover Cluster and Linux clusters are writing to shared disks.
+- Specific need for agentless backup because of security or performance concerns on the application.
- Application consistent backup of VM isn't feasible since line-of-business apps don't support Volume Shadow Copy Service (VSS). Consider Azure Disk Backup in scenarios where: -- a mission-critical application is running on an Azure Virtual machine that demands multiple backups per day to meet the recovery point objective, but without impacting the production environment or application performance-- your organization or industry regulation restricts installing agents because of security concerns-- executing custom pre or post scripts and invoking freeze and thaw on Linux virtual machines to get application-consistent backup puts undue overhead on production workload availability-- containerized applications running on Azure Kubernetes Service (AKS cluster) are using managed disks as persistent storage. Today you have to back up the managed disk via automation scripts that are hard to manage.-- a managed disk is holding critical business data, used as a file-share, or contains database backup files, and you want to optimize backup cost by not investing in Azure VM backup-- You have many Linux and Windows single-disk virtual machines (that is, a virtual machine with just an OS disk and no data disks attached) that host webserver or state-less machines or serves as a staging environment with application configuration settings and you need a cost efficient backup solution to protect the OS disk. For example, to trigger a quick on-demand backup before upgrading or patching the virtual machine-- a virtual machine is running an OS configuration that is unsupported by Azure VM backup solution (for example, Windows 2008 32-bit Server)
+- A mission-critical application is running on an Azure Virtual machine that demands multiple backups per day to meet the recovery point objective, but without impacting the production environment or application performance.
+- Your organization or industry regulation restricts installing agents because of security concerns.
+- Executing custom pre or post scripts and invoking freeze and thaw on Linux virtual machines to get application-consistent backup puts undue overhead on production workload availability.
+- Containerized applications running on Azure Kubernetes Service (AKS cluster) are using managed disks as persistent storage. Today, you must back up the managed disk via automation scripts that are hard to manage.
+- A managed disk is holding critical business data, used as a file-share, or contains database backup files, and you want to optimize backup cost by not investing in Azure VM backup.
+- You have many Linux and Windows single-disk virtual machines (that is, virtual machines with just an OS disk and no data disks attached) that host web servers, stateless machines, or staging environments with application configuration settings, and you need a cost-efficient backup solution to protect the OS disk. For example, to trigger a quick on-demand backup before upgrading or patching the virtual machine.
+- A virtual machine is running an OS configuration that is unsupported by Azure VM backup solution (for example, Windows 2008 32-bit Server).
## How the backup and restore process works
Consider Azure Disk Backup in scenarios where:
- To configure backup, go to the Backup vault, assign a backup policy, select the managed disk that needs to be backed up and provide a resource group where the snapshots are to be stored and managed. Azure Backup automatically triggers scheduled backup jobs that create an incremental snapshot of the disk according to the backup frequency. Older snapshots are deleted according to the retention duration specified by the backup policy. -- Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) of the managed disk. Incremental snapshots are a cost-effective, point-in-time backup of managed disks that are billed for the delta changes to the disk since the last snapshot. They're always stored on the most cost-effective storage, standard HDD storage regardless of the storage type of the parent disks. The first snapshot of the disk will occupy the used size of the disk, and consecutive incremental snapshots store delta changes to the disk since the last snapshot.
+- Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) of the managed disk. Incremental snapshots are a cost-effective, point-in-time backup of managed disks that are billed for the delta changes to the disk since the last snapshot. These are always stored on the most cost-effective storage, standard HDD storage regardless of the storage type of the parent disks. The first snapshot of the disk will occupy the used size of the disk, and consecutive incremental snapshots store delta changes to the disk since the last snapshot.
- Once you configure the backup of a managed disk, a backup instance will be created within the backup vault. Using the backup instance, you can find health of backup operations, trigger on-demand backups, and perform restore operations. You can also view health of backups across multiple vaults and backup instances using Backup Center, which provides a single pane of glass view.
Consider Azure Disk Backup in scenarios where:
- Backup Vault uses Managed Identity to access other Azure resources. To configure backup of a managed disk and to restore from past backup, Backup Vault's managed identity requires a set of permissions on the source disk, the snapshot resource group where snapshots are created and managed, and the target resource group where you want to restore the backup. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that may only be used with Azure resources. Learn more about [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md). A sketch of granting these permissions with Azure PowerShell follows this list.
+- Currently Azure Disk Backup supports operational backup of managed disks and doesn't copy the backups to Backup Vault storage. Refer to the [support matrix](disk-backup-support-matrix.md) for a detailed list of supported and unsupported scenarios, and region availability.
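As referenced above, granting the Backup vault's managed identity the permissions it needs might look like the following sketch. The role names (`Disk Backup Reader`, `Disk Snapshot Contributor`) and all resource names are assumptions to validate against the Azure Disk Backup documentation for your scenario.

```azurepowershell-interactive
# Sketch: grant the Backup vault's managed identity access with Azure RBAC.
# Role names and resource names are assumptions; replace with your own values.
$vaultPrincipalId = '<object ID of the Backup vault managed identity>'

# Read access on the source managed disk.
$diskId = (Get-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myDataDisk').Id
New-AzRoleAssignment -ObjectId $vaultPrincipalId -RoleDefinitionName 'Disk Backup Reader' -Scope $diskId

# Snapshot management in the snapshot resource group.
$snapshotRgId = (Get-AzResourceGroup -Name 'mySnapshotRG').ResourceId
New-AzRoleAssignment -ObjectId $vaultPrincipalId -RoleDefinitionName 'Disk Snapshot Contributor' -Scope $snapshotRgId
```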
## Pricing
-Azure Backup offers a snapshot lifecycle management solution for protecting Azure Disks. The disk snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur **Snapshot Storage** charges. You can visit [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for more details about the snapshot pricing. Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a **Protected Instance** fee and **Backup Storage** cost doesn't apply. Additionally, incremental snapshots occupy delta changes since the last snapshot and are always stored on standard storage regardless of the storage type of the parent-managed disks and are charged according to the pricing of standard storage. This makes Azure Disk Backup a cost-effective solution.
+Azure Backup offers a snapshot lifecycle management solution for protecting Azure Disks. The disk snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur **Snapshot Storage** charges. You can visit [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for more details about the snapshot pricing.<br></br>Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a **Protected Instance** fee and **Backup Storage** cost doesn't apply. Additionally, incremental snapshots occupy delta changes since the last snapshot and are always stored on standard storage regardless of the storage type of the parent-managed disks and are charged according to the pricing of standard storage. This makes Azure Disk Backup a cost-effective solution.
## Next steps -- [Azure Disk Backup support matrix](disk-backup-support-matrix.md)
+[Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 03/05/2020 Last updated : 04/07/2021
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
|Setting |Maximum limit | ||| |Number of databases that can be protected in a server (and in a vault) | 2000 |
-|Database size supported (beyond this, performance issues may come up) | 2 TB |
+|Database size supported (beyond this, performance issues may come up) | 6 TB* |
|Number of files supported in a database | 1000 |
->[!NOTE]
-> [Download the detailed Resource Planner](https://download.microsoft.com/download/A/B/5/AB5D86F0-DCB7-4DC3-9872-6155C96DE500/SQL%20Server%20in%20Azure%20VM%20Backup%20Scale%20Calculator.xlsx) to calculate the approximate number of protected databases that are recommended per server based on the VM resources, bandwidth and the backup policy.
+_*The database size limit depends on the data transfer rate that we support and the backup time limit configuration. It's not a hard limit. [Learn more](#backup-throughput-performance) about backup throughput performance._
* SQL Server backup can be configured in the Azure portal or **PowerShell**. CLI isn't supported. * The solution is supported on both kinds of [deployments](../azure-resource-manager/management/deployment-models.md) - Azure Resource Manager VMs and classic VMs.
Differential | Primary
Log | Secondary Copy-Only Full | Secondary
+## Backup throughput performance
+
+Azure Backup supports a consistent data transfer rate of 200 Mbps for full and differential backups of large SQL databases (of 500 GB). To utilize the optimum performance, ensure that:
+
+- The underlying VM (containing the SQL Server instance, which hosts the database) is configured with the required network throughput. If the maximum throughput of the VM is less than 200 Mbps, Azure Backup canΓÇÖt transfer data at the optimum speed.<br></br>Also, the disk that contains the database files must have enough throughput provisioned. [Learn more](../virtual-machines/disks-performance.md) about disk throughput and performance in Azure VMs.
+- Processes, which are running in the VM, are not consuming the VM bandwidth.
+- The backup schedules are spread across a subset of databases. Multiple backups running concurrently on a VM share the network bandwidth between them. [Learn more](faq-backup-sql-server.md#can-i-control-how-many-concurrent-backups-run-on-the-sql-server) about how to control the number of concurrent backups.
+
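As a rough, illustrative calculation (assuming the full 200-Mbps rate is sustained and ignoring protocol overhead): a 500-GB full backup transfers about 500 × 8,000 = 4,000,000 megabits, which at 200 Mbps takes roughly 4,000,000 ÷ 200 = 20,000 seconds, or about 5.5 hours.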
+>[!NOTE]
+> [Download the detailed Resource Planner](https://download.microsoft.com/download/A/B/5/AB5D86F0-DCB7-4DC3-9872-6155C96DE500/SQL%20Server%20in%20Azure%20VM%20Backup%20Scale%20Calculator.xlsx) to calculate the approximate number of protected databases that are recommended per server based on the VM resources, bandwidth and the backup policy.
+ ## Next steps Learn how to [back up a SQL Server database](backup-azure-sql-database.md) that's running on an Azure VM.
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
description: Provides an overview of the BareMetal Infrastructure on Azure.
Previously updated : 04/06/2021 Last updated : 04/08/2021 # What is BareMetal Infrastructure on Azure? Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access, and control over the operating system (OS). To meet such a need, Azure offers BareMetal Infrastructure for several high-value and mission-critical applications.
-BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances), high-performance and application-appropriate storage (NFS, dNFS, ISCSI, and Fiber Channel), as well as a set of function-specific virtual LANs (VLANs) in an isolated environment. Storage can be shared across BareMetal instances to enable features like scale-out clusters or for creating high availability pairs with STONITH.
+BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances), high-performance and application-appropriate storage (NFS, ISCSI, and Fiber Channel), as well as a set of function-specific virtual LANs (VLANs) in an isolated environment. Storage can be shared across BareMetal instances to enable features like scale-out clusters or for creating high availability pairs with STONITH.
This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription.
BareMetal Infrastructure offers these benefits:
- Up to 1 PB/tenant - IOPS up to 1.2 million/tenant - 40/100-GB network bandwidth
- - Accessible via NFS, dNFS, ISCSI, and FC
+ - Accessible via NFS, ISCSI, and FC
- Redundant power, power supplies, NICs, TORs, ports, WANs, storage, and management - Hot spares for replacement on a failure (without the need for reconfiguring) - Customer coordinated maintenance windows
bastion Howto Metrics Monitor Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/howto-metrics-monitor-alert.md
You can view the total memory of Azure Bastion, split across each bastion instan
#### <a name="used-cpu"></a>Used CPU
-You can view the CPU utilization of Azure Bastion, split across each bastion instance. Monitoring this metric will help gauge the availability and capacity of the instances that comprise Azure Bastion.
+You can view the CPU utilization of Azure Bastion, split across each bastion instance. Monitoring this metric will help gauge the availability and capacity of the instances that comprise Azure Bastion.
:::image type="content" source="./media/metrics-monitor-alert/used-cpu.png" alt-text="Screenshot showing CPU used.":::
You can view memory utilization across each bastion instance, split across each
You can view the count of active sessions per bastion instance, aggregated across each session type (RDP and SSH). Each Azure Bastion can support a range of active RDP and SSH sessions. Monitoring this metric will help you to understand if you need to adjust the number of instances running the bastion service. For more information about the session count Azure Bastion can support, refer to the [Azure Bastion FAQ](bastion-faq.md).
+The recommended values for this metric's configuration are:
+
+* **Aggregation:** Avg
+* **Granularity:** 5 or 15 minutes
* Splitting by instances is recommended to get a more accurate count.
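As an illustrative reading of these settings (not guidance from the article): with **Avg** aggregation at 5-minute granularity and splitting by instance, an alert rule on this metric would fire when any single bastion instance averages more than your chosen threshold of concurrent sessions over a 5-minute window.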
+ :::image type="content" source="./media/metrics-monitor-alert/session-count.png" alt-text="Screenshot showing session count."::: ## <a name="metrics"></a>How to view metrics
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-app-dev-node.md
na ms.devlang: na Previously updated : 01/23/2017 Last updated : 04/02/2021
> >
-You can use the [Azure CDN SDK for Node.js](https://www.npmjs.com/package/azure-arm-cdn) to automate creation and management of CDN profiles and endpoints. This tutorial walks through the creation of a simple Node.js console application that demonstrates several of the available operations. This tutorial is not intended to describe all aspects of the Azure CDN SDK for Node.js in detail.
+You can use the [Azure CDN SDK for JavaScript](https://www.npmjs.com/package/@azure/arm-cdn) to automate creation and management of CDN profiles and endpoints. This tutorial walks through the creation of a simple Node.js console application that demonstrates several of the available operations. This tutorial is not intended to describe all aspects of the Azure CDN SDK for JavaScript in detail.
-To complete this tutorial, you should already have [Node.js](https://www.nodejs.org) **4.x.x** or higher installed and configured. You can use any text editor you want to create your Node.js application. To write this tutorial, I used [Visual Studio Code](https://code.visualstudio.com).
+To complete this tutorial, you should already have [Node.js](https://www.nodejs.org) **6.x.x** or higher installed and configured. You can use any text editor you want to create your Node.js application. To write this tutorial, I used [Visual Studio Code](https://code.visualstudio.com).
-> [!TIP]
-> The [completed project from this tutorial](https://code.msdn.microsoft.com/Azure-CDN-SDK-for-Nodejs-c712bc74) is available for download on MSDN.
->
->
[!INCLUDE [cdn-app-dev-prep](../../includes/cdn-app-dev-prep.md)]
You will then be presented a series of questions to initialize your project. Fo
![NPM init output](./media/cdn-app-dev-node/cdn-npm-init.png)
-Our project is now initialized with a *packages.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the Azure Client Runtime for Node.js (ms-rest-azure) and the Azure CDN Client Library for Node.js (azure-arm-cd). Let's add those to the project as dependencies.
+Our project is now initialized with a *package.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the library for Azure Active Directory authentication in Node.js (@azure/ms-rest-nodeauth) and the Azure CDN Client Library for JavaScript (@azure/arm-cdn). Let's add those to the project as dependencies.
```console
-npm install --save ms-rest-azure
-npm install --save azure-arm-cdn
+npm install --save @azure/ms-rest-nodeauth
+npm install --save @azure/arm-cdn
``` After the packages are done installing, the *package.json* file should look similar to this example (version numbers may differ):
After the packages are done installing, the *package.json* file should look simi
"author": "Cam Soper", "license": "MIT", "dependencies": {
- "azure-arm-cdn": "^0.2.1",
- "ms-rest-azure": "^1.14.4"
+ "@azure/arm-cdn": "^5.2.0",
+ "@azure/ms-rest-nodeauth": "^3.0.0"
} } ```
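For orientation, here is a minimal sketch of how the two packages fit together once installed. It uses the promise-based calls exposed by the `@azure/arm-cdn` 5.x client rather than the callback style used in the rest of this tutorial, and the placeholder values in angle brackets are assumptions you would replace with your own:

```javascript
// Minimal sketch: authenticate with a service principal and list CDN profiles.
// The <angle bracket> values are placeholders, not real credentials.
const msRestNodeAuth = require('@azure/ms-rest-nodeauth');
const { CdnManagementClient } = require('@azure/arm-cdn');

const credentials = new msRestNodeAuth.ApplicationTokenCredentials(
    '<client ID>', '<tenant ID>', '<client secret>');
const cdnClient = new CdnManagementClient(credentials, '<subscription ID>');

// List every CDN profile in the subscription and print its name.
cdnClient.profiles.list()
    .then(profiles => profiles.forEach(profile => console.log(profile.name)))
    .catch(err => console.error(err));
```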
With *app.js* open in our editor, let's get the basic structure of our program w
1. Add the "requires" for our NPM packages at the top with the following: ``` javascript
- var msRestAzure = require('ms-rest-azure');
- var cdnManagementClient = require('azure-arm-cdn');
+ var msRestAzure = require('@azure/ms-rest-nodeauth');
+ const { CdnManagementClient } = require('@azure/arm-cdn');
``` 2. We need to define some constants our methods will use. Add the following. Be sure to replace the placeholders, including the **&lt;angle brackets&gt;**, with your own values as needed.
With *app.js* open in our editor, let's get the basic structure of our program w
``` javascript var credentials = new msRestAzure.ApplicationTokenCredentials(clientId, tenantId, clientSecret);
- var cdnClient = new cdnManagementClient(credentials, subscriptionId);
+ var cdnClient = new CdnManagementClient(credentials, subscriptionId);
```
-
- If you are using individual user authentication, these two lines will look slightly different.
-
- > [!IMPORTANT]
- > Only use this code sample if you are choosing to have individual user authentication instead of a service principal. Be careful to guard your individual user credentials and keep them secret.
- >
- >
-
- ``` javascript
- var credentials = new msRestAzure.UserTokenCredentials(clientId,
- tenantId, '<username>', '<password>', '<redirect URI>');
- var cdnClient = new cdnManagementClient(credentials, subscriptionId);
- ```
-
- Be sure to replace the items in **&lt;angle brackets&gt;** with the correct information. For `<redirect URI>`, use the redirect URI you entered when you registered the application in Azure AD.
+ 4. Our Node.js console application is going to take some command-line parameters. Let's validate that at least one parameter was passed. ```javascript
function cdnList(){
case "endpoints": requireParms(3); console.log("Listing endpoints...");
- cdnClient.endpoints.listByProfile(parms[2], resourceGroupName, callback);
+ cdnClient.endpoints.listByProfile(resourceGroupName, parms[2], callback);
break; default:
function cdnCreateProfile() {
} };
- cdnClient.profiles.create(parms[2], standardCreateParameters, resourceGroupName, callback);
+ cdnClient.profiles.create( resourceGroupName, parms[2], standardCreateParameters, callback);
} // create endpoint <profile name> <endpoint name> <origin hostname>
function cdnCreateEndpoint() {
}] };
- cdnClient.endpoints.create(parms[3], endpointProperties, parms[2], resourceGroupName, callback);
+ cdnClient.endpoints.create(resourceGroupName, parms[2], parms[3], endpointProperties, callback);
} ```
function cdnPurge() {
requireParms(4); console.log("Purging endpoint..."); var purgeContentPaths = [ parms[3] ];
- cdnClient.endpoints.purgeContent(parms[2], parms[1], resourceGroupName, purgeContentPaths, callback);
+ cdnClient.endpoints.purgeContent(resourceGroupName, parms[2], parms[3], purgeContentPaths, callback);
} ```
function cdnDelete() {
case "profile": requireParms(3); console.log("Deleting profile...");
- cdnClient.profiles.deleteIfExists(parms[2], resourceGroupName, callback);
+ cdnClient.profiles.deleteMethod(resourceGroupName, parms[2], callback);
break; // delete endpoint <profile name> <endpoint name> case "endpoint": requireParms(4); console.log("Deleting endpoint...");
- cdnClient.endpoints.deleteIfExists(parms[3], parms[2], resourceGroupName, callback);
+ cdnClient.endpoints.deleteMethod(resourceGroupName, parms[2], parms[3], callback);
break; default:
Finally, let's delete our profile.
![Delete profile](./media/cdn-app-dev-node/cdn-delete-profile.png) ## Next Steps
-To see the completed project from this walkthrough, [download the sample](https://code.msdn.microsoft.com/Azure-CDN-SDK-for-Nodejs-c712bc74).
-To see the reference for the Azure CDN SDK for Node.js, view the [reference](https://azure.github.io/azure-sdk-for-node/azure-arm-cdn/latest/).
+To see the reference for the Azure CDN SDK for JavaScript, view the [reference](https://docs.microsoft.com/javascript/api/@azure/arm-cdn).
-To find additional documentation on the Azure SDK for Node.js, view the [full reference](https://azure.github.io/azure-sdk-for-node/).
+To find additional documentation on the Azure SDK for JavaScript, view the [full reference](https://docs.microsoft.com/javascript/api/?view=azure-node-latest).
Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).-
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
This documentation contains the following types of articles:
## Face detection
-The Face service detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes, such as head pose, gender, age, emotion, facial hair, and glasses.
+The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes, such as head pose, gender, age, emotion, facial hair, and glasses. These attributes are general predictions, not definitive classifications.
> [!NOTE]
-> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to do further operations with face data, you should use this service instead.
+> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to do further Face operations like Identify, Verify, Find Similar, or Group, you should use this Face service instead.
![An image of a woman and a man, with rectangles drawn around their faces and age and gender displayed](./Images/Face.detection.jpg)
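As an illustrative sketch (not part of this article), the following Node.js snippet calls the Detect REST endpoint directly. The endpoint and key are placeholders for your own Face resource, and it assumes a Node.js version with a built-in `fetch` (18 or later):

```javascript
// Minimal sketch: detect faces in an image URL and request a few attributes.
const endpoint = 'https://<resource-name>.cognitiveservices.azure.com';
const key = '<your-key>';

async function detectFaces(imageUrl) {
    const response = await fetch(
        `${endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceAttributes=age,glasses,headPose`,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Ocp-Apim-Subscription-Key': key
            },
            body: JSON.stringify({ url: imageUrl })
        });
    return response.json(); // Array of faces, each with a faceId, faceRectangle, and faceAttributes.
}
```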
For more information on face detection, see the [Face detection](concepts/face-d
## Face verification
-The Verify API does an authentication against two detected faces or from one detected face to one person object. Practically, it evaluates whether two faces belong to the same person. This capability is potentially useful in security scenarios. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) reference documentation.
+The Verify API builds on Detection and addresses the question, "Are these two images the same person?" Verification is also called "one-to-one" matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify that a picture matches a previously captured image (such as a photo from a government-issued ID card). For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) reference documentation.
+
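Continuing the sketch above (same placeholder endpoint, key, and `fetch` assumptions), a verification call compares two face IDs returned by Detect:

```javascript
// Minimal sketch: one-to-one verification of two previously detected faces.
async function verifyFaces(faceId1, faceId2) {
    const response = await fetch(`${endpoint}/face/v1.0/verify`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Ocp-Apim-Subscription-Key': key
        },
        body: JSON.stringify({ faceId1, faceId2 })
    });
    return response.json(); // { isIdentical: boolean, confidence: number }
}
```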
+## Face identification
+
+The Identify API also starts with Detection and answers the question, "Can this detected face be matched to any enrolled face in a database?" Because it works like a facial recognition search, it's also called "one-to-many" matching. Candidate matches are returned based on how closely the probe template of the detected face matches each of the enrolled templates.
+
+The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
+
+![A grid with three columns for different people, each with three rows of face images](./Images/person.group.clare.jpg)
+
+After you create and train a database, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+
+For more information about person identification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) reference documentation.
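Under the same assumptions as the detection sketch earlier, an identification call submits detected face IDs against a trained person group (for example, the `myfriends` group described above):

```javascript
// Minimal sketch: one-to-many identification against a trained person group.
async function identifyFaces(faceIds) {
    const response = await fetch(`${endpoint}/face/v1.0/identify`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Ocp-Apim-Subscription-Key': key
        },
        body: JSON.stringify({
            personGroupId: 'myfriends',
            faceIds,                         // up to 10 face IDs from Detect
            maxNumOfCandidatesReturned: 1
        })
    });
    return response.json(); // One result per face ID, each with candidate persons and confidence scores.
}
```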
## Find similar faces
To find four similar faces, the **matchPerson** mode returns a and b, which show
The Group API divides a set of unknown faces into several groups based on similarity. Each group is a disjoint proper subset of the original set of faces. All of the faces in a group are likely to belong to the same person. There can be several different groups for a single person. The groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
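A similar sketch for grouping, again reusing the placeholder endpoint, key, and `fetch` assumptions from the detection example:

```javascript
// Minimal sketch: divide a set of detected face IDs into similarity groups.
async function groupFaces(faceIds) {
    const response = await fetch(`${endpoint}/face/v1.0/group`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Ocp-Apim-Subscription-Key': key
        },
        body: JSON.stringify({ faceIds })
    });
    return response.json(); // { groups: [[...faceIds]], messyGroup: [...faceIds] }
}
```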
-## Person identification
-
-The Identify API is used to identify a detected face against a database of people (facial recognition search). This feature might be useful for automatic image tagging in photo management software. You create the database in advance, and you can edit it over time.
-
-The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
-
-![A grid with three columns for different people, each with three rows of face images](./Images/person.group.clare.jpg)
-
-After you create and train a database, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
-
-For more information about person identification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) reference documentation.
-
-## Deploy on premises using Docker containers
-
-[Use the Face container (preview)](face-how-to-install-containers.md) to deploy API features on-premises. This Docker container enables you to bring the service closer to your data for compliance, security or other operational reasons.
## Sample apps
As with all of the Cognitive Services resources, developers who use the Face ser
Follow a quickstart to code the basic components of a face recognition app in the language of your choice. -- [Client library quickstart](quickstarts/client-libraries.md).
+- [Client library quickstart](quickstarts/client-libraries.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Language | Locale (BCP-47) | Customizations | [Language detection](how-to-automatic-language-detection.md) | ||--||-|
-| Arabic (Bahrain), modern standard | `ar-BH` | Text | Yes |
+| Arabic (Bahrain), modern standard | `ar-BH` | Text | |
| Arabic (Egypt) | `ar-EG` | Text | Yes | | Arabic (Iraq) | `ar-IQ` | Text | | | Arabic (Israel) | `ar-IL` | Text | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Arabic (Lebanon) | `ar-LB` | Text | | | Arabic (Oman) | `ar-OM` | Text | | | Arabic (Qatar) | `ar-QA` | Text | |
-| Arabic (Saudi Arabia) | `ar-SA` | Text | Yes |
+| Arabic (Saudi Arabia) | `ar-SA` | Text | |
| Arabic (State of Palestine) | `ar-PS` | Text | |
-| Arabic (Syria) | `ar-SY` | Text | Yes |
+| Arabic (Syria) | `ar-SY` | Text | |
| Arabic (United Arab Emirates) | `ar-AE` | Text | | | Bulgarian (Bulgaria) | `bg-BG` | Text | | | Catalan (Spain) | `ca-ES` | Text | Yes |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes | | English (Ghana) | `en-GH` | Text | | | English (Hong Kong) | `en-HK` | Text | |
-| English (India) | `en-IN` | Audio (20200923)<br>Text | Yes |
+| English (India) | `en-IN` | Audio (20200923)<br>Text | |
| English (Ireland) | `en-IE` | Text | | | English (Kenya) | `en-KE` | Text | |
-| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | Yes |
+| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | |
| English (Nigeria) | `en-NG` | Text | | | English (Philippines) | `en-PH` | Text | | | English (Singapore) | `en-SG` | Text | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| French (Switzerland) | `fr-CH` | Text<br>Pronunciation | | | German (Austria) | `de-AT` | Text<br>Pronunciation | | | German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes |
-| Greek (Greece) | `el-GR` | Text | |
+| Greek (Greece) | `el-GR` | Text | Yes |
| Gujarati (Indian) | `gu-IN` | Text | | | Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes | | Hungarian (Hungary) | `hu-HU` | Text | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Polish (Poland) | `pl-PL` | Text | Yes | | Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes | | Portuguese (Portugal) | `pt-PT` | Text<br>Pronunciation | Yes |
-| Romanian (Romania) | `ro-RO` | Text | |
+| Romanian (Romania) | `ro-RO` | Text | Yes |
| Russian (Russia) | `ru-RU` | Audio (20200907)<br>Text | Yes | | Slovak (Slovakia) | `sk-SK` | Text | | | Slovenian (Slovenia) | `sl-SI` | Text | |
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
# Speech Service release notes
+## Text-to-speech 2021-April release
+
+**Neural TTS is available across 21 regions**
+
+- **Twelve new regions added** - Neural TTS is now available in these 12 new regions: `Japan East`, `Japan West`, `Korea Central`, `North Central US`, `North Europe`, `South Central US`, `Southeast Asia`, `UK South`, `West Central US`, `West Europe`, `West US`, `West US 2`. Check [here](regions.md#text-to-speech) for the full list of 21 supported regions.
+ ## Text-to-speech 2021-March release **New languages and voices added for neural TTS**
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
chat_client = ChatClient(
## Access your call ID
-When filing a support request through the Azure portal related to calling issues, you may be asked to provide ID of the call you're referring to. This can be accessed through the Calling SDK:
+When troubleshooting voice or video calls, you may be asked to provide a `call ID`. This can be accessed via the `id` property of the `call` object:
# [JavaScript](#tab/javascript) ```javascript
-// `call` is an instance of a call created by `callAgent.call` or `callAgent.join` methods
+// `call` is an instance of a call created by `callAgent.startCall` or `callAgent.join` methods
console.log(call.id) ```
print(call.callId)
# [Android](#tab/android) ```java // The `call id` property can be retrieved by calling the `call.getCallId()` method on a call object after a call ends
-// `call` is an instance of a call created by `callAgent.call(…)` or `callAgent.join(…)` methods
+// `call` is an instance of a call created by `callAgent.startCall(…)` or `callAgent.join(…)` methods
Log.d(call.getCallId()) ```
console.log(result); // your message ID will be in the result
# [JavaScript](#tab/javascript)
-The following code can be used to configure `AzureLogger` to output logs to the console using the JavaScript SDK:
+The Azure Communication Services Calling SDK relies internally on the [@azure/logger](https://www.npmjs.com/package/@azure/logger) library to control logging.
+Use the `setLogLevel` method from the `@azure/logger` package to configure the log output:
```javascript
-import { AzureLogger } from '@azure/logger';
+import { setLogLevel } from '@azure/logger';
+setLogLevel('verbose');
+const callClient = new CallClient();
+```
-AzureLogger.verbose = (...args) => { console.info(...args); }
-AzureLogger.info = (...args) => { console.info(...args); }
-AzureLogger.warning = (...args) => { console.info(...args); }
-AzureLogger.error = (...args) => { console.info(...args); }
+You can use `AzureLogger` to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method.
+This may be useful if you want to redirect logs to a location other than the console:
-callClient = new CallClient({logger: AzureLogger});
+```javascript
+import { AzureLogger } from '@azure/logger';
+// redirect log output
+AzureLogger.log = (...args) => {
+ console.log(...args); // to console, file, buffer, REST API..
+};
``` # [iOS](#tab/ios)
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
Previously updated : 03/18/2020 Last updated : 04/08/2021
Features of confidential computing nodes include:
- Linux worker nodes supporting Linux containers. - Generation 2 virtual machine (VM) with Ubuntu 18.04 VM nodes.-- Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).-- Support for Kubernetes version 1.16+.-- Intel SGX DCAP Driver preinstalled on the AKS nodes. For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
+- Intel SGX-capable CPU to help run your containers in a confidentiality-protected enclave using Encrypted Page Cache Memory (EPC). For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
+- Intel SGX DCAP Driver preinstalled on the confidential computing nodes. For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
> [!NOTE] > DCsv2 VMs use specialized hardware that's subject to higher pricing and region availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions.md).
spec:
image: oeciteam/sgx-test:1.0 resources: limits:
- kubernetes.azure.com/sgx_epc_mem_in_MiB: 5 # This limit will automatically place the job into a confidential computing node. Alternatively, you can target deployment to node pools
+ sgx.intel.com/epc: 5Mi # This limit will automatically place the job into a confidential computing node and mount the required driver volumes. Alternatively, you can target deployment to node pools with node selector.
restartPolicy: Never backoffLimit: 0 ```
Enclave called into host to print: Hello World!
To remove the confidential computing node pool that you created in this quickstart, use the following command: ```azurecli-interactive
-az aks nodepool delete --cluster-name myAKSCluster --name myNodePoolName --resource-group myResourceGroup
+az aks nodepool delete --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup
``` To delete the AKS cluster, use the following command:
az aks delete --resource-group myResourceGroup --name myAKSCluster
<!-- LINKS --> [az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
cosmos-db Sql Query Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getting-started.md
Previously updated : 02/02/2021 Last updated : 04/09/2021
The remainder of this doc shows how to get started writing SQL queries in Azure
## Upload sample data
-In your SQL API Cosmos DB account, open the [Data Explorer](./data-explorer.md) to create a container called `Families`. After the it is created, use the data structures browser, to find and open it. In your `Families` container, you will see the `Items` option right below the name of the container. Open this option and you'll see a button, in the menu bar in center of the screen, to create a 'New Item'. You will use this feature to create the JSON items below.
+In your SQL API Cosmos DB account, open the [Data Explorer](./data-explorer.md) to create a container called `Families`. After the container is created, use the data structures browser to find and open it. In your `Families` container, you will see the `Items` option right below the name of the container. Open this option and you'll see a button in the menu bar, in the center of the screen, to create a 'New Item'. You will use this feature to create the JSON items below.
### Create JSON items
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-administration.md
Title: Azure EA portal administration
description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 11/13/2020 Last updated : 03/19/2021
Only existing Azure enterprise administrators can create other enterprise admini
### Create another enterprise administrator
-To add another enterprise administrator:
+Use one of the following options, based on your situation.
+
+#### If you're already an enterprise administrator
1. Sign in to the [Azure Enterprise portal](https://ea.azure.com). 1. Go to **Manage** > **Enrollment Detail**.
To add another enterprise administrator:
Make sure that you have the user's email address and preferred authentication method, such as a work, school, or Microsoft account.
-If you're not the enterprise administrator, contact an enterprise administrator to request that they add you to an enrollment. After you're added to an enrollment, you receive an activation email.
+#### If you're not an enterprise administrator
+
+If you're not an enterprise administrator, contact an enterprise administrator to request that they add you to an enrollment. The enterprise administrator uses the preceding steps to add you as an enterprise administrator. After you're added to an enrollment, you receive an activation email.
+
+#### If your enterprise administrator can't help you
If your enterprise administrator can't assist you, create an [Azure Enterprise portal support request](https://support.microsoft.com/supportrequestform/cf791efa-485b-95a3-6fad-3daf9cd4027c). Provide the following information:
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mca-setup-account.md
Title: Set up billing for Microsoft Customer Agreement - Azure
-description: Learn how to set up your billing account for a Microsoft Customer Agreement. See prerequisites for the setup and view additional available resources.
+description: Learn how to set up your billing account for a Microsoft Customer Agreement. See prerequisites for the setup and view other available resources.
tags: billing Previously updated : 10/20/2020 Last updated : 03/19/2021
The renewal includes the following steps:
1. Accept the new Microsoft Customer Agreement. Work with your Microsoft field representative to understand the details and accept the new agreement. 2. Set up the new billing account that's created for the new Microsoft Customer Agreement.
-To set up the billing account, you must transition the billing of Azure subscriptions from your Enterprise Agreement enrollment to the new account. The setup doesn't impact Azure services that are running in your subscriptions. However, it changes the way you'll manage the billing for your subscriptions.
+To set up the billing account, you must transition the billing of Azure subscriptions from your Enterprise Agreement enrollment to the new account. The setup doesn't affect Azure services that are running in your subscriptions. However, it changes the way you'll manage the billing for your subscriptions.
- Instead of the [EA portal](https://ea.azure.com), you'll manage your Azure services and billing, in the [Azure portal](https://portal.azure.com). - You'll get a monthly, digital invoice for your charges. You can view and analyze the invoice in the Azure Cost Management + Billing page.
Before you start the setup, we recommend you do the following actions:
To complete the setup, you need the following access: -- Owner of the billing profile that was created when the Microsoft Customer Agreement was signed. To learn more about billing profiles, see [understand billing profiles](../understand/mca-overview.md#billing-profiles).-
+- Owner of the billing profile that was created when the Microsoft Customer Agreement was signed. To learn more about billing profiles, see [understand billing profiles](../understand/mca-overview.md#billing-profiles).
+&mdash; And &mdash;
- Enterprise administrator on the enrollment that is renewed.
-### If you're not an enterprise administrator on the enrollment
+### Start migration and get permission needed to complete setup
-You can request the enterprise administrators of the enrollment to complete the setup of your billing account.
+You can use the following options to start the migration experience from your EA enrollment to your Microsoft Customer Agreement.
-1. Sign in to the Azure portal using the link in the email that was sent to you when you signed the Microsoft Customer Agreement.
-2. If you don't have the email, sign in using the following link. Replace `<enrollmentNumber>` with the enrollment number of your enterprise agreement that was renewed.
+- Sign in to the Azure portal using the link in the email that was sent to you when you signed the Microsoft Customer Agreement.
- `https://portal.azure.com/#blade/Microsoft_Azure_EA/EATransitionToMCA/enrollmentId/<enrollmentNumber>`
+- If you don't have the email, sign in using the following link. Replace `enrollmentNumber` with the enrollment number of your enterprise agreement that was renewed.
-3. Select the enterprise administrators that you want to send the request.
+ `https://portal.azure.com/#blade/Microsoft_Azure_EA/EATransitionToMCA/enrollmentId/<enrollmentNumber>`
- ![Screenshot that shows inviting the enterprise administrators](./media/mca-setup-account/ea-mca-invite-admins.png)
+If you have both the enterprise administrator and billing account owner roles or billing profile role, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition.
++
+If you don't have the enterprise administrator role for the enterprise agreement or the billing profile owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
+
+### If you're not an enterprise administrator on the enrollment
-4. Select **Send request**.
+You see the following page in the Azure portal if you have a billing account or billing profile owner role but you're not an enterprise administrator.
- The administrators will receive an email with instructions to complete the setup.
+
+You have two options:
+
+- Ask the enterprise administrator of the enrollment to give you the enterprise administrator role. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).
+- You can give an enterprise administrator the billing account owner or billing profile owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+
+If you're given the enterprise administrator role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send it to the enterprise administrator.
### If you're not an owner of the billing profile
-The user in your organization, who signed the Microsoft Customer Agreement is added as the owner on the billing profile. Request the user to add you as an owner so that you can complete the setup.
+If you're an enterprise administrator but you don't have a billing account or billing profile owner role for your Microsoft Customer Agreement, you see the following page in the Azure portal.
+
+If you believe that you have billing profile owner access to the correct Microsoft Customer Agreement and you see the following message, make sure that you are in the correct tenant for your organization. You might need to change directories.
++
+You have two options:
+
+- Ask an existing billing account owner to give you the billing account owner or billing profile owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+- Give the enterprise administrator role to an existing billing account owner. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).
+
+If you're given the billing account owner or billing profile owner role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send the link to the billing account owner.
## Understand changes to your billing hierarchy
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/understand-ea-roles.md
# Managing Azure Enterprise Agreement roles
-To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign five distinct administrative roles:
+To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign six distinct administrative roles:
- Enterprise Administrator - Enterprise Administrator (read only)<sup>1</sup>
The following diagram illustrates simple Azure EA hierarchies.
The following administrative user roles are part of your enterprise enrollment: - Enterprise administrator-- Ea purchaser
+- EA purchaser
- Department administrator - Account owner - Service administrator
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
Title: Create an Azure-SSIS integration runtime in Azure Data Factory
description: Learn how to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure. Previously updated : 02/22/2021 Last updated : 04/09/2021
After your data factory is created, open its overview page in the Azure portal.
### Provision an Azure-SSIS integration runtime
-On the **Let's get started** page, select the **Configure SSIS Integration Runtime** tile to open the **Integration runtime setup** pane.
+On the **Let's get started** page, select the **Configure SSIS Integration** tile to open the **Integration runtime setup** pane.
![Configure SSIS Integration Runtime tile](./media/tutorial-create-azure-ssis-runtime-portal/configure-ssis-integration-runtime-tile.png)
On the **General settings** page of **Integration runtime setup** pane, complete
7. For **Save Money**, select the Azure Hybrid Benefit option for your integration runtime: **Yes** or **No**. Select **Yes** if you want to bring your own SQL Server license with Software Assurance to benefit from cost savings with hybrid use.
- 8. Select **Next**.
+ 8. Select **Continue**.
#### Deployment settings page
If you select the check box, complete the following steps to bring your own data
1. For **Catalog Database Service Tier**, select the service tier for your database server to host SSISDB. Select the Basic, Standard, or Premium tier, or select an elastic pool name.
-Select **Test connection** when applicable and if it's successful, select **Next**.
+Select **Test connection** when applicable and if it's successful, select **Continue**.
> [!NOTE] > If you use Azure SQL Database server to host SSISDB, your data will be stored in geo-redundant storage for backups by default. If you don't want your data to be replicated in other regions, please follow the instructions to [Configure backup storage redundancy by using PowerShell](../azure-sql/database/automated-backups-overview.md?tabs=single-database#configure-backup-storage-redundancy-by-using-powershell).
On the **Add package store** pane, complete the following steps.
1. For **Package store linked service**, select your existing linked service that stores the access information for file system/Azure Files/Azure SQL Managed Instance where your packages are deployed or create a new one by selecting **New**. On the **New linked service** pane, complete the following steps. > [!NOTE]
- > You can use either **Azure File Storage** or **File System** linked services to access Azure Files. If you use **Azure File Storage** linked service, Azure-SSIS IR package store supports only **Basic** (not **Account key** nor **SAS URI**) authentication method for now. To use **Basic** authentication on **Azure File Storage** linked service, you can append `?feature.upgradeAzureFileStorage=false` to the ADF portal URL in your browser. Alternatively, you can use **File System** linked service to access Azure Files instead.
+ > You can use either **Azure File Storage** or **File System** linked services to access Azure Files. If you use **Azure File Storage** linked service, Azure-SSIS IR package store supports only **Basic** (not **Account key** nor **SAS URI**) authentication method for now.
![Deployment settings for linked services](./media/tutorial-create-azure-ssis-runtime-portal/deployment-settings-linked-service.png)
On the **Add package store** pane, complete the following steps.
1. You can ignore **Connect via integration runtime**, since we always use your Azure-SSIS IR to fetch the access information for package stores.
- 1. If you select **Azure File Storage**, complete the following steps.
+ 1. If you select **Azure File Storage**, for **Authentication method**, select **Basic**, and then complete the following steps.
1. For **Account selection method**, select **From Azure subscription** or **Enter manually**.
On the **Add package store** pane, complete the following steps.
1. If you select **Azure SQL Managed Instance**, complete the following steps.
- 1. Select **Connection string** to enter it manually or your **Azure Key Vault** where it's stored as a secret.
-
+ 1. Select **Connection string** or your **Azure Key Vault** where it's stored as a secret.
+ 1. If you select **Connection string**, complete the following steps.
+ 1. For **Account selection method**, if you choose **From Azure subscription**, select the relevant **Azure subscription**, **Server name**, **Endpoint type** and **Database name**. If you choose **Enter manually**, complete the following steps.
+ 1. For **Fully qualified domain name**, enter `<server name>.<dns prefix>.database.windows.net` or `<server name>.public.<dns prefix>.database.windows.net,3342` as the private or public endpoint of your Azure SQL Managed Instance, respectively. If you enter the private endpoint, **Test connection** isn't applicable, since ADF UI can't reach it.
- 1. For **Fully qualified domain name**, enter `<server name>.<dns prefix>.database.windows.net` or `<server name>.public.<dns prefix>.database.windows.net,3342` as the private or public endpoint of your Azure SQL Managed Instance, respectively. If you enter the private endpoint, **Test connection** isn't applicable, since ADF UI can't reach it.
+ 1. For **Database name**, enter `msdb`.
- 1. For **Database name**, enter `msdb`.
-
1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, or **Service Principal**.
- 1. If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
+ - If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
- 1. If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
+ - If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
- 1. If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
+ - If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
1. If you select **File system**, enter the UNC path of folder where your packages are deployed for **Host**, as well as the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
On the **Add package store** pane, complete the following steps.
1. Your added package stores will appear on the **Deployment settings** page. To remove them, select their check boxes, and then select **Delete**.
-Select **Test connection** when applicable and if it's successful, select **Next**.
+Select **Test connection** when applicable and if it's successful, select **Continue**.
#### Advanced settings page
On the **Connections** pane of **Manage** hub, switch to the **Integration runti
### Azure SSIS integration runtimes in the portal
-1. In the Azure Data Factory UI, switch to the **Edit** tab and select **Connections**. Then switch to the **Integration Runtimes** tab to view existing integration runtimes in your data factory.
+1. In the Azure Data Factory UI, switch to the **Manage** tab and then switch to the **Integration runtimes** tab on the **Connections** pane to view existing integration runtimes in your data factory.
![View existing IRs](./media/tutorial-create-azure-ssis-runtime-portal/view-azure-ssis-integration-runtimes.png)
On the **Connections** pane of **Manage** hub, switch to the **Integration runti
![Integration runtime via menu](./media/tutorial-create-azure-ssis-runtime-portal/edit-connections-new-integration-runtime-button.png)
-1. In the **Integration runtime setup** pane, select the **Lift-and-shift existing SSIS packages to execute in Azure** tile, and then select **Next**.
+1. In the **Integration runtime setup** pane, select the **Lift-and-shift existing SSIS packages to execute in Azure** tile, and then select **Continue**.
![Specify the type of integration runtime](./media/tutorial-create-azure-ssis-runtime-portal/integration-runtime-setup-options.png)
data-factory Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/frequently-asked-questions.md
Use the Copy activity to stage data from any of the other connectors, and then e
### Is the self-hosted integration runtime available for data flows?
-Self-hosted IR is an ADF pipeline construct that you can use with the Copy Activity to acquire or move data to and from on-prem or VM-based data sources and sinks. Stage the data first with a Copy, then Data Flow for transformation, and then a subsequent copy if you need to move that transformed data back to the on-prem store.
+Self-hosted IR is an ADF pipeline construct that you can use with the Copy Activity to acquire or move data to and from on-prem or VM-based data sources and sinks. The virtual machines that you use for a self-hosted IR can also be placed inside the same VNET as your protected data stores for access to those data stores from ADF. With data flows, you'll achieve these same end results using the Azure IR with managed VNET instead.
### Does the data flow compute engine serve multiple tenants? Clusters are never shared. We guarantee isolation for each job run in production runs. In a debug scenario, each user gets their own cluster, and all debug runs initiated by that user go to that cluster.
-## Wrangling data flows
-
-### What are the supported regions for wrangling data flow?
-
-Wrangling data flow is currently supported in data factories created in following regions:
-
-* Australia East
-* Canada Central
-* Central India
-* East US
-* East US 2
-* Japan East
-* North Europe
-* Southeast Asia
-* South Central US
-* UK South
-* West Central US
-* West Europe
-* West US
-* West US 2
-
-### What are the limitations and constraints with wrangling data flow?
-
-Dataset names can only contain alpha-numeric characters. The following data stores are supported:
-
-* DelimitedText dataset in Azure Blob Storage using account key authentication
-* DelimitedText dataset in Azure Data Lake Storage gen2 using account key or service principal authentication
-* DelimitedText dataset in Azure Data Lake Storage gen1 using service principal authentication
-* Azure SQL Database and Data Warehouse using sql authentication. See supported SQL types below. There is no PolyBase or staging support for data warehouse.
-
-At this time, linked service Key Vault integration is not supported in wrangling data flows.
-
-### What is the difference between mapping and wrangling data flows?
-
-Mapping data flows provide a way to transform data at scale without any coding required. You can design a data transformation job in the data flow canvas by constructing a series of transformations. Start with any number of source transformations followed by data transformation steps. Complete your data flow with a sink to land your results in a destination. Mapping data flow is great at mapping and transforming data with both known and unknown schemas in the sinks and sources.
-
-Wrangling data flows allow you to do agile data preparation and exploration using the Power Query Online mashup editor at scale via spark execution. With the rise of data lakes sometimes you just need to explore a data set or create a dataset in the lake. You aren't mapping to a known target. Wrangling data flows are used for less formal and model-based analytics scenarios.
-
-### What is the difference between Power Platform Dataflows and wrangling data flows?
-
-Power Platform Dataflows allow users to import and transform data from a wide range of data sources into the Common Data Service and Azure Data Lake to build PowerApps applications, Power BI reports or Flow automations. Power Platform Dataflows use the established Power Query data preparation experiences, similar to Power BI and Excel. Power Platform Dataflows also enable easy reuse within an organization and automatically handle orchestration (e.g. automatically refreshing dataflows that depend on another dataflow when the former one is refreshed).
-
-Azure Data Factory (ADF) is a managed data integration service that allows data engineers and citizen data integrator to create complex hybrid extract-transform-load (ETL) and extract-load-transform (ELT) workflows. Wrangling data flow in ADF empowers users with a code-free, serverless environment that simplifies data preparation in the cloud and scales to any data size with no infrastructure management required. It uses the Power Query data preparation technology (also used in Power Platform dataflows, Excel, Power BI) to prepare and shape the data. Built to handle all the complexities and scale challenges of big data integration, wrangling data flows allow users to quickly prepare data at scale via spark execution. Users can build resilient data pipelines in an accessible visual environment with our browser-based interface and let ADF handle the complexities of Spark execution. Build schedules for your pipelines and monitor your data flow executions from the ADF monitoring portal. Easily manage data availability SLAs with ADF's rich availability monitoring and alerts and leverage built-in continuous integration and deployment capabilities to save and manage your flows in a managed environment. Establish alerts and view execution plans to validate that your logic is performing as planned as you tune your data flows.
-
-### Supported SQL Types
-
-Wrangling data flow supports the following data types in SQL. You will get a validation error for using a data type that isn't supported.
-
-* short
-* double
-* real
-* float
-* char
-* nchar
-* varchar
-* nvarchar
-* integer
-* int
-* bit
-* boolean
-* smallint
-* tinyint
-* bigint
-* long
-* text
-* date
-* datetime
-* datetime2
-* smalldatetime
-* timestamp
-* uniqueidentifier
-* xml
-
-Other data types will be supported in the future.
- ## Next steps For step-by-step instructions to create a data factory, see the following tutorials:
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
description: Learn how to provision the Azure-SSIS integration runtime in Azure
Previously updated : 02/22/2021 Last updated : 04/02/2021
After your data factory is created, open its overview page in the Azure portal.
### From the Data Factory overview
-1. On the **Let's get started** page, select the **Configure SSIS Integration Runtime** tile.
+1. On the **Let's get started** page, select the **Configure SSIS Integration** tile.
!["Configure SSIS Integration Runtime" tile](./media/tutorial-create-azure-ssis-runtime-portal/configure-ssis-integration-runtime-tile.png)
After your data factory is created, open its overview page in the Azure portal.
### From the authoring UI
-1. In the Azure Data Factory UI, switch to the **Edit** tab and select **Connections**. Then switch to the **Integration Runtimes** tab to view existing integration runtimes in your data factory.
+1. In the Azure Data Factory UI, switch to the **Manage** tab, and then switch to the **Integration runtimes** tab to view existing integration runtimes in your data factory.
![Selections for viewing existing IRs](./media/tutorial-create-azure-ssis-runtime-portal/view-azure-ssis-integration-runtimes.png)
After your data factory is created, open its overview page in the Azure portal.
![Integration runtime via menu](./media/tutorial-create-azure-ssis-runtime-portal/edit-connections-new-integration-runtime-button.png)
-1. In the **Integration runtime setup** pane, select the **Lift-and-shift existing SSIS packages to execute in Azure** tile, and then select **Next**.
+1. In the **Integration runtime setup** pane, select the **Lift-and-shift existing SSIS packages to execute in Azure** tile, and then select **Continue**.
![Specify the type of integration runtime](./media/tutorial-create-azure-ssis-runtime-portal/integration-runtime-setup-options.png)
On the **General settings** page of **Integration runtime setup** pane, complete
1. For **Save Money**, select the Azure Hybrid Benefit option for your integration runtime: **Yes** or **No**. Select **Yes** if you want to bring your own SQL Server license with Software Assurance to benefit from cost savings with hybrid use.
- 1. Select **Next**.
+ 1. Select **Continue**.
### Deployment settings page
If you select the check box, complete the following steps to bring your own data
1. For **Catalog Database Service Tier**, select the service tier for your database server to host SSISDB. Select the Basic, Standard, or Premium tier, or select an elastic pool name.
-Select **Test connection** when applicable and if it's successful, select **Next**.
+Select **Test connection** when applicable and if it's successful, select **Continue**.
#### Creating Azure-SSIS IR package stores
On the **Add package store** pane, complete the following steps.
1. For **Package store linked service**, select your existing linked service that stores the access information for file system/Azure Files/Azure SQL Managed Instance where your packages are deployed or create a new one by selecting **New**. On the **New linked service** pane, complete the following steps. > [!NOTE]
- > You can use either **Azure File Storage** or **File System** linked services to access Azure Files. If you use **Azure File Storage** linked service, Azure-SSIS IR package store supports only **Basic** (not **Account key** nor **SAS URI**) authentication method for now. To use **Basic** authentication on **Azure File Storage** linked service, you can append `?feature.upgradeAzureFileStorage=false` to the ADF portal URL in your browser. Alternatively, you can use **File System** linked service to access Azure Files instead.
+ > You can use either **Azure File Storage** or **File System** linked services to access Azure Files. If you use **Azure File Storage** linked service, Azure-SSIS IR package store supports only **Basic** (not **Account key** nor **SAS URI**) authentication method for now.
![Deployment settings for linked services](./media/tutorial-create-azure-ssis-runtime-portal/deployment-settings-linked-service.png)
On the **Add package store** pane, complete the following steps.
1. For **Type**, select **Azure File Storage**, **Azure SQL Managed Instance**, or **File System**.
1. You can ignore **Connect via integration runtime**, since we always use your Azure-SSIS IR to fetch the access information for package stores.
- 1. If you select **Azure File Storage**, complete the following steps.
+
+ 1. If you select **Azure File Storage**, for **Authentication method**, select **Basic**, and then complete the following steps.
1. For **Account selection method**, select **From Azure subscription** or **Enter manually**.
On the **Add package store** pane, complete the following steps.
1. If you select **Azure SQL Managed Instance**, complete the following steps.
- 1. Select **Connection string** to enter it manually or your **Azure Key Vault** where it's stored as a secret.
+ 1. Select **Connection string** or your **Azure Key Vault** where it's stored as a secret.
1. If you select **Connection string**, complete the following steps.
+ 1. For **Account selection method**, if you choose **From Azure subscription**, select the relevant **Azure subscription**, **Server name**, **Endpoint type** and **Database name**. If you choose **Enter manually**, complete the following steps.
+ 1. For **Fully qualified domain name**, enter `<server name>.<dns prefix>.database.windows.net` or `<server name>.public.<dns prefix>.database.windows.net,3342` as the private or public endpoint of your Azure SQL Managed Instance, respectively. If you enter the private endpoint, **Test connection** isn't applicable, since ADF UI can't reach it.
- 1. For **Fully qualified domain name**, enter `<server name>.<dns prefix>.database.windows.net` or `<server name>.public.<dns prefix>.database.windows.net,3342` as the private or public endpoint of your Azure SQL Managed Instance, respectively. If you enter the private endpoint, **Test connection** isn't applicable, since ADF UI can't reach it.
-
- 1. For **Database name**, enter `msdb`.
+ 1. For **Database name**, enter `msdb`.
1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, or **Service Principal**.
- 1. If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
+ - If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
- 1. If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
+ - If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
- 1. If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
+ - If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
 1. If you select **File system**, enter the UNC path of the folder where your packages are deployed for **Host**, as well as the relevant **Username** and **Password**, or select your **Azure Key Vault** where it's stored as a secret.
On the **Add package store** pane, complete the following steps.
1. Your added package stores will appear on the **Deployment settings** page. To remove them, select their check boxes, and then select **Delete**.
-Select **Test connection** when applicable and if it's successful, select **Next**.
+Select **Test connection** when applicable and if it's successful, select **Continue**.
### Advanced settings page
On the **Advanced settings** page of **Integration runtime setup** pane, complet
1. Select **Continue**.
-On the **Summary** page of **Integration runtime setup** pane, review all provisioning settings, bookmark the recommended documentation links, and select **Finish** to start the creation of your integration runtime.
+On the **Summary** page of **Integration runtime setup** pane, review all provisioning settings, bookmark the recommended documentation links, and select **Create** to start the creation of your integration runtime.
> [!NOTE]
> Excluding any custom setup time, this process should finish within 5 minutes.
databox-online Azure Stack Edge Pro R Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-security.md
Previously updated: 10/14/2020
Last updated: 04/09/2021

# Security and data protection for Azure Stack Edge Pro R and Azure Stack Edge Mini R
Data on your disks is protected by two layers of encryption:
> [!NOTE] > The OS disk has single layer BitLocker XTS-AES-256 software encryption.
-When the device is activated, you are prompted to save a key file that contains recovery keys that help recover the data on the device if the device doesn't boot up. There are two keys in the file:
+Before you activate the device, you must configure encryption-at-rest on your device. This setting is required; until it's successfully configured, you can't activate the device.
-- One key recovers the device configuration on the OS volumes.
-<!-
-- Second key unlocks the hardware encryption in the data disks.
+At the factory, once the devices are imaged, volume-level BitLocker encryption is enabled. After you receive the device, you need to configure encryption-at-rest. The storage pool and volumes are recreated, and you can provide BitLocker keys to enable encryption-at-rest, creating another layer of encryption for your data at rest.
+
+The encryption-at-rest key is a 32-character, Base-64 encoded key that you provide; this key is used to protect the actual encryption key. Microsoft does not have access to this encryption-at-rest key that protects your data. The key is saved in a key file on the **Cloud details** page after the device is activated.
+
+When the device is activated, you are prompted to save the key file that contains recovery keys that help recover the data on the device if the device doesn't boot up. Certain recovery scenarios will prompt you for the key file that you have saved. The key file has the following recovery keys:
+
+- A key that unlocks the first layer of encryption.
+- A key that unlocks the hardware encryption in the data disks.
+- A key that helps recover the device configuration on the OS volumes.
+- A key that protects the data flowing through the Azure service.
> [!IMPORTANT]
> Save the key file in a secure location outside the device itself. If the device doesn't boot up, and you don't have the key, it could potentially result in data loss.

-- Certain recovery scenarios will prompt you for the key file that you have saved.
-<! If a node isn't booting up, you will need to perform a node replacement. You will have the option to swap the data disks from the failed node to the new node. For a 4-node device, you won't need a key file. For a 1-node device, you will be prompted to provide a key file.-->
+ #### Restricted access to data
When the device undergoes a hard reset, a secure wipe is performed on the device
### Protect data in storage accounts

[!INCLUDE [azure-stack-edge-gateway-data-rest](../../includes/azure-stack-edge-gateway-protect-data-storage-accounts.md)]

- Rotate and then [sync your storage account keys](azure-stack-edge-gpu-manage-storage-accounts.md) regularly to help protect your storage account from unauthorized users.

## Manage personal information
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-event-notifications.md
+
+# Mandatory fields.
+Title: Event notifications
+description: See how to interpret different event types and their different notification messages.
+Last updated: 4/8/2021
+
+# Optional fields. Don't forget to remove # if you need a field.
+
+# Event notifications
+
+Different events in Azure Digital Twins produce **notifications**, which allow the solution backend to be aware when different actions are happening. These are then [routed](concepts-route-events.md) to different locations inside and outside of Azure Digital Twins that can use this information to take action.
+
+There are several types of notifications that can be generated, and notification messages may look different depending on which type of event generated them. This article gives detail about different types of messages, and what they might look like.
+
+This chart shows the different notification types:
++
+## Notification structure
+
+In general, notifications are made up of two parts: the header and the body.
+
+### Event notification headers
+
+Notification message headers are represented with key-value pairs. Depending on the protocol used (MQTT, AMQP, or HTTP), message headers will be serialized differently. This section discusses general header information for notification messages, regardless of the specific protocol and serialization chosen.
+
+Some notifications conform to the [CloudEvents](https://cloudevents.io/) standard. CloudEvents conformance is as follows.
+* Notifications emitted from devices continue to follow the existing specifications for notifications
+* Notifications processed and emitted by IoT Hub continue to follow the existing specifications for notifications, except where IoT Hub chooses to support CloudEvents, such as through Event Grid
+* Notifications emitted from [digital twins](concepts-twins-graph.md) with a [model](concepts-models.md) conform to CloudEvents
+* Notifications processed and emitted by Azure Digital Twins conform to CloudEvents
+
+Services have to add a sequence number on all the notifications to indicate their order, or maintain their own ordering in some other way.
+
+Notifications emitted by Azure Digital Twins to Event Grid will be automatically formatted to either the CloudEvents schema or EventGridEvent schema, depending on the schema type defined in the event grid topic.
+
+Extension attributes on headers will be added as properties on the Event Grid schema inside of the payload.
+
+### Event notification bodies
+
+The bodies of notification messages are described here in JSON. Depending on the serialization desired for the message body (such as with JSON, CBOR, Protobuf, etc.), the message body may be serialized differently.
+
+The set of fields that the body contains varies with the notification type.
+
+The following sections go into more detail about the different types of notifications emitted by IoT Hub and Azure Digital Twins (or other Azure IoT services). You will read about the things that trigger each notification type, and the set of fields included with each type of notification body.
+
+## Digital twin change notifications
+
+**Digital twin change notifications** are triggered when a digital twin is being updated, like:
+* When property values or metadata changes.
+* When digital twin or component metadata changes. An example of this scenario is changing the model of a digital twin.
+
+### Properties
+
+Here are the fields in the body of a digital twin change notification.
+
+| Name | Value |
+| | |
+| `id` | Identifier of the notification, such as a UUID or a counter maintained by the service. `source` + `id` is unique for each distinct event |
+| `source` | Name of the IoT hub or Azure Digital Twins instance, like *myhub.azure-devices.net* or *mydigitaltwins.westus2.azuredigitaltwins.net* |
+| `data` | A JSON Patch document describing the update made to the twin. For details, see [Body details](#body-details) below. |
+| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
+| `type` | `Microsoft.DigitalTwins.Twin.Update` |
+| `datacontenttype` | `application/json` |
+| `subject` | ID of the digital twin |
+| `time` | Timestamp for when the operation occurred on the digital twin |
+| `traceparent` | A W3C trace context for the event |
+
+### Body details
+
+Inside the message, the `data` field contains a JSON Patch document containing the update to the digital twin.
+
+For example, say that a digital twin was updated using the following patch.
++
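A minimal sketch of such a patch, inferred from the notification body shown next (the values and paths mirror that body; the exact patch document isn't reproduced here), might be:

```json
[
  {
    "op": "replace",
    "path": "/Temperature",
    "value": 40
  },
  {
    "op": "add",
    "path": "/comp1/prop1",
    "value": 30
  }
]
```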
+The data in the corresponding notification (if synchronously executed by the service, such as Azure Digital Twins updating a digital twin) would have a body like:
+
+```json
+{
+ "modelId": "dtmi:example:com:floor4;2",
+ "patch": [
+ {
+ "value": 40,
+ "path": "/Temperature",
+ "op": "replace"
+ },
+ {
+ "value": 30,
+ "path": "/comp1/prop1",
+ "op": "add"
+ }
+ ]
+ }
+```
+
+This is the information that will go in the `data` field of the digital twin change notification message.
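For orientation, a complete change-notification envelope assembled from the fields in the table above might look like the following sketch. The instance name, twin ID, notification ID, and timestamp are illustrative values, not output captured from a live instance:

```json
{
  "specversion": "1.0",
  "id": "cb0ef9d5-7d7a-4c46-b5d2-1f8b3c58a2d3",
  "type": "Microsoft.DigitalTwins.Twin.Update",
  "source": "contoso-adt.api.wus2.digitaltwins.azure.net",
  "subject": "floor4",
  "time": "2020-06-23T19:03:48.9700792Z",
  "datacontenttype": "application/json",
  "traceparent": "00-18f4e34b3e4a784aadf5913917537e7d-691a71e0a220d642-01",
  "data": {
    "modelId": "dtmi:example:com:floor4;2",
    "patch": [
      { "value": 40, "path": "/Temperature", "op": "replace" },
      { "value": 30, "path": "/comp1/prop1", "op": "add" }
    ]
  }
}
```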
+
+## Digital twin lifecycle notifications
+
+All [digital twins](concepts-twins-graph.md) emit notifications, regardless of whether they represent [IoT Hub devices in Azure Digital Twins](how-to-ingest-iot-hub-data.md) or not. This is because of **lifecycle notifications**, which are about the digital twin itself.
+
+Lifecycle notifications are triggered when:
+* A digital twin is created
+* A digital twin is deleted
+
+### Properties
+
+Here are the fields in the body of a lifecycle notification.
+
+| Name | Value |
+| | |
+| `id` | Identifier of the notification, such as a UUID or a counter maintained by the service. `source` + `id` is unique for each distinct event. |
+| `source` | Name of the IoT hub or Azure Digital Twins instance, like *myhub.azure-devices.net* or *mydigitaltwins.westus2.azuredigitaltwins.net* |
+| `data` | The data of the twin experiencing the lifecycle event. For details, see [Body details](#body-details-1) below. |
+| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
+| `type` | `Microsoft.DigitalTwins.Twin.Create`<br>`Microsoft.DigitalTwins.Twin.Delete` |
+| `datacontenttype` | `application/json` |
+| `subject` | ID of the digital twin |
+| `time` | Timestamp for when the operation occurred on the twin |
+| `traceparent` | A W3C trace context for the event |
+
+### Body details
+
+Here is an example of a lifecycle notification message:
+
+```json
+{
+ "specversion": "1.0",
+ "id": "d047e992-dddc-4a5a-b0af-fa79832235f8",
+ "type": "Microsoft.DigitalTwins.Twin.Create",
+ "source": "contoso-adt.api.wus2.digitaltwins.azure.net",
+ "data": {
+ "$dtId": "floor1",
+ "$etag": "W/\"e398dbf4-8214-4483-9d52-880b61e491ec\"",
+ "$metadata": {
+ "$model": "dtmi:example:Floor;1"
+ }
+ },
+ "subject": "floor1",
+ "time": "2020-06-23T19:03:48.9700792Z",
+ "datacontenttype": "application/json",
+ "traceparent": "00-18f4e34b3e4a784aadf5913917537e7d-691a71e0a220d642-01"
+}
+```
+
+Inside the message, the `data` field contains the data of the affected digital twin, represented in JSON format. The schema for this is *Digital Twins Resource 7.1*.
+
+For creation events, the `data` payload reflects the state of the twin after the resource is created, so it should include all system-generated elements just like a `GET` call.
+
+Here is an example of the data for an [IoT Plug and Play (PnP)](../iot-pnp/overview-iot-plug-and-play.md) device, with components and no top-level properties. Properties that do not make sense for devices (such as reported properties) should be omitted. This is the information that will go in the `data` field of the lifecycle notification message.
+
+```json
+{
+ "$dtId": "device-digitaltwin-01",
+ "$etag": "W/\"e59ce8f5-03c0-4356-aea9-249ecbdc07f9\"",
+ "thermostat": {
+ "temperature": 80,
+ "humidity": 45,
+ "$metadata": {
+ "$model": "dtmi:com:contoso:Thermostat;1",
+ "temperature": {
+ "desiredValue": 85,
+ "desiredVersion": 3,
+ "ackVersion": 2,
+ "ackCode": 200,
+ "ackDescription": "OK"
+ },
+ "humidity": {
+ "desiredValue": 40,
+ "desiredVersion": 1,
+ "ackVersion": 1,
+ "ackCode": 200,
+ "ackDescription": "OK"
+ }
+ }
+ },
+ "$metadata": {
+ "$model": "dtmi:com:contoso:Thermostat_X500;1",
+ }
+}
+```
+
+Here is another example of digital twin data. This one is based on a [model](concepts-models.md), and does not support components:
+
+```json
+{
+ "$dtId": "logical-digitaltwin-01",
+ "$etag": "W/\"e59ce8f5-03c0-4356-aea9-249ecbdc07f9\"",
+ "avgTemperature": 70,
+ "comfortIndex": 85,
+ "$metadata": {
+ "$model": "dtmi:com:contoso:Building;1",
+ "avgTemperature": {
+ "desiredValue": 72,
+ "desiredVersion": 5,
+ "ackVersion": 4,
+ "ackCode": 200,
+ "ackDescription": "OK"
+ },
+ "comfortIndex": {
+ "desiredValue": 90,
+ "desiredVersion": 1,
+ "ackVersion": 3,
+ "ackCode": 200,
+ "ackDescription": "OK"
+ }
+ }
+}
+```
+
+## Digital twin relationship change notifications
+
+**Relationship change notifications** are triggered when any relationship of a digital twin is created, updated, or deleted.
+
+### Properties
+
+Here are the fields in the body of a relationship change notification.
+
+| Name | Value |
+| | |
+| `id` | Identifier of the notification, such as a UUID or a counter maintained by the service. `source` + `id` is unique for each distinct event |
+| `source` | Name of the Azure Digital Twins instance, like *mydigitaltwins.westus2.azuredigitaltwins.net* |
+| `data` | The payload of the relationship that was changed. For details, see [Body details](#body-details-2) below. |
+| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
+| `type` | `Microsoft.DigitalTwins.Relationship.Create`<br>`Microsoft.DigitalTwins.Relationship.Update`<br>`Microsoft.DigitalTwins.Relationship.Delete` |
+| `datacontenttype` | `application/json` |
+| `subject` | ID of the relationship, like `<twinID>/relationships/<relationshipID>` |
+| `time` | Timestamp for when the operation occurred on the relationship |
+| `traceparent` | A W3C trace context for the event |
+
+### Body details
+
+Inside the message, the `data` field contains the payload of a relationship, in JSON format. It uses the same format as a `GET` request for a relationship via the [DigitalTwins API](/rest/api/digital-twins/dataplane/twins).
+
+Here is an example of the data for an update relationship notification. "Updating a relationship" means properties of the relationship have changed, so the data shows the updated property and its new value. This is the information that will go in the `data` field of the digital twin relationship notification message.
+
+```json
+{
+ "modelId": "dtmi:example:Floor;1",
+ "patch": [
+ {
+ "value": "user3",
+ "path": "/ownershipUser",
+ "op": "replace"
+ }
+ ]
+ }
+```
+
+Here is an example of the data for a create or delete relationship notification. For `Relationship.Delete`, the body is the same as the `GET` request, and it gets the latest state before deletion.
+
+```json
+{
+ "$relationshipId": "device_to_device",
+ "$etag": "W/\"72479873-0083-41a8-83e2-caedb932d881\"",
+ "$relationshipName": "Connected",
+ "$targetId": "device2",
+ "connectionType": "WIFI"
+}
+```
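Wrapped in the envelope fields from the table above, a complete relationship create notification might look like the following sketch. The source twin ID (*device1*), notification ID, and timestamp are illustrative assumptions; the `data` payload mirrors the example just shown:

```json
{
  "specversion": "1.0",
  "id": "f1a0b8c2-3a84-4f1e-9a5e-0d6a3c2b7e11",
  "type": "Microsoft.DigitalTwins.Relationship.Create",
  "source": "contoso-adt.api.wus2.digitaltwins.azure.net",
  "subject": "device1/relationships/device_to_device",
  "time": "2020-06-23T19:03:48.9700792Z",
  "datacontenttype": "application/json",
  "traceparent": "00-7e3081c6d3edfb4eaf7d3244b2036baa-23d762f4d9f81741-01",
  "data": {
    "$relationshipId": "device_to_device",
    "$etag": "W/\"72479873-0083-41a8-83e2-caedb932d881\"",
    "$relationshipName": "Connected",
    "$targetId": "device2",
    "connectionType": "WIFI"
  }
}
```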
+
+## Digital twin telemetry messages
+
+**Telemetry messages** are received in Azure Digital Twins from connected devices that collect and send measurements.
+
+### Properties
+
+Here are the fields in the body of a telemetry message.
+
+| Name | Value |
+| | |
+| `id` | Identifier of the notification, which is provided by the customer when calling the telemetry API. |
+| `source` | Fully qualified name of the twin that the telemetry event was sent to. Uses the following format: `<yourDigitalTwinInstance>.api.<yourRegion>.digitaltwins.azure.net/<twinId>`. |
+| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
+| `type` | `microsoft.iot.telemetry` |
+| `data` | The telemetry message that has been sent to twins. The payload is unmodified and may not align with the schema of the twin that has been sent the telemetry. |
+| `dataschema` | The data schema is the model ID of the twin or the component that emits the telemetry. For example, `dtmi:example:com:floor4;2`. |
+| `datacontenttype` | `application/json` |
+| `traceparent` | A W3C trace context for the event. |
+
+### Body details
+
+The body contains the telemetry measurement along with some contextual information about the device.
+
+Here is an example telemetry message body:
+
+```json
+{
+ "specversion": "1.0",
+ "id": "df5a5992-817b-4e8a-b12c-e0b18d4bf8fb",
+ "type": "microsoft.iot.telemetry",
+ "source": "contoso-adt.api.wus2.digitaltwins.azure.net/digitaltwins/room1",
+ "data": {
+ "Temperature": 10
+ },
+ "dataschema": "dtmi:example:com:floor4;2",
+ "datacontenttype": "application/json",
+ "traceparent": "00-7e3081c6d3edfb4eaf7d3244b2036baa-23d762f4d9f81741-01"
+}
+```
+
+## Next steps
+
+Learn about delivering events to different destinations, using endpoints and routes:
+* [*Concepts: Event routes*](concepts-route-events.md)
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-iot-hub-data.md
Whenever a temperature telemetry event is sent by the thermostat device, a funct
In this section, you'll set up a [digital twin](concepts-twins-graph.md) in Azure Digital Twins that will represent the thermostat device and will be updated with information from IoT Hub.
-To create a thermostat-type twin, you'll first need to upload the thermostat [model](concepts-models.md) to your instance, which describes the properties of a thermostat and will be used later to create the twin.
+To create a thermostat-type twin, you'll first need to upload the thermostat [model](concepts-models.md) to your instance, which describes the properties of a thermostat and will be used later to create the twin.
[!INCLUDE [digital-twins-thermostat-model-upload.md](../../includes/digital-twins-thermostat-model-upload.md)]
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
description: See how to set up event routes from Azure Digital Twins to Azure Time Series Insights.
Previously updated: 1/19/2021
Last updated: 4/7/2021
In this article, you'll learn how to integrate Azure Digital Twins with [Azure Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md).
-The solution described in this article will allow you to gather and analyze historical data about your IoT solution. Azure Digital Twins is a great fit for feeding data into Time Series Insights, as it allows you to correlate multiple data streams and standardize your information before sending it to Time Series Insights.
+The solution described in this article will allow you to gather and analyze historical data about your IoT solution. Azure Digital Twins is a great fit for feeding data into Time Series Insights, as it allows you to correlate multiple data streams and standardize your information before sending it to Time Series Insights.
## Prerequisites
-Before you can set up a relationship with Time Series Insights, you need to have an **Azure Digital Twins instance**. This instance should be set up with the ability to update digital twin information based on data, as you'll need to update twin information a few times to see that data tracked in Time Series Insights.
+Before you can set up a relationship with Time Series Insights, you'll need to set up the following resources:
+* An **IoT hub**. For instructions, see the [*Create an IoT Hub*](../iot-hub/quickstart-send-telemetry-cli.md#create-an-iot-hub) section of the *IoT Hub's Send Telemetry* quickstart.
+* An **Azure Digital Twins instance**.
+For instructions, see [*How-to: Set up an Azure Digital Twins instance and authentication*](./how-to-set-up-instance-portal.md).
+* A **model and a twin in the Azure Digital Twins instance**.
+You'll need to update twin's information a few times to see that data tracked in Time Series Insights. For instructions, see the [*Add a model and twin*](how-to-ingest-iot-hub-data.md#add-a-model-and-twin) section of the *How to: Ingest IoT hub* article.
+
+> [!TIP]
+> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you'd like to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [*How to: Ingest IoT Hub data*](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
+>
+> Later, look for another TIP to show you where to start running the device simulator and have your Azure functions update the twins automatically, instead of sending manual digital twin update commands.
-If you do not have this set up already, you can create it by following the Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md). The tutorial will walk you through setting up an Azure Digital Twins instance that works with a virtual IoT device to trigger digital twin updates.
## Solution architecture
You will be attaching Time Series Insights to Azure Digital Twins through the pa
:::row::: :::column:::
- :::image type="content" source="media/how-to-integrate-time-series-insights/diagram-simple.png" alt-text="A view of Azure services in an end-to-end scenario, highlighting Time Series Insights" lightbox="media/how-to-integrate-time-series-insights/diagram-simple.png":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/diagram-simple.png" alt-text="Diagram of Azure services in an end-to-end scenario, highlighting Time Series Insights." lightbox="media/how-to-integrate-time-series-insights/diagram-simple.png":::
:::column-end::: :::column::: :::column-end::: :::row-end:::
-## Create a route and filter to twin update notifications
+## Create event hub namespace
-Azure Digital Twins instances can emit [twin update events](how-to-interpret-event-data.md) whenever a twin's state is updated. In this section, you will be creating an Azure Digital Twins [**event route**](concepts-route-events.md) that will direct these update events to Azure [Event Hubs](../event-hubs/event-hubs-about.md) for further processing.
+Before creating the event hubs, you'll first create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Create an event hub using Azure portal*](../event-hubs/event-hubs-create.md). To see what regions support event hubs, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
-The Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md) walks through a scenario where a thermometer is used to update a temperature attribute on a digital twin representing a room. This pattern relies on the twin updates, rather than forwarding telemetry from an IoT device, which gives you the flexibility to change the underlying data source without needing to update your Time Series Insights logic.
+```azurecli-interactive
+az eventhubs namespace create --name <name-for-your-event-hubs-namespace> --resource-group <your-resource-group> -l <region>
+```
-1. First, create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Create an event hub using Azure portal*](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
+> [!TIP]
+> If you get an error stating `BadRequest: The specified service namespace is invalid.`, make sure the name you've chosen for your namespace meets the naming requirements described in this reference document: [Create Namespace](/rest/api/servicebus/create-namespace).
- ```azurecli-interactive
- az eventhubs namespace create --name <name for your Event Hubs namespace> --resource-group <resource group name> -l <region>
- ```
+You'll be using this event hubs namespace to hold the two event hubs that are needed for this article:
-2. Create an event hub within the namespace to receive twin change events. Specify a name for the event hub.
+ 1. **Twins hub** - Event hub to receive twin change events
+ 2. **Time series hub** - Event hub to stream events to Time Series Insights
- ```azurecli-interactive
- az eventhubs eventhub create --name <name for your Twins event hub> --resource-group <resource group name> --namespace-name <Event Hubs namespace from above>
- ```
+The next sections will walk you through creating and configuring these hubs within your event hub namespace.
-3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule.
+## Create twins hub
- ```azurecli-interactive
- az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from above> --eventhub-name <Twins event hub name from above> --name <name for your Twins auth rule>
- ```
+The first event hub you'll create in this article is the **twins hub**. This event hub will receive twin change events from Azure Digital Twins.
+To set up the twins hub, you'll complete the following steps in this section:
-4. Create an Azure Digital Twins [endpoint](concepts-route-events.md#create-an-endpoint) that links your event hub to your Azure Digital Twins instance.
+1. Create the twins hub
+2. Create an authorization rule to control permissions to the hub
+3. Create an endpoint in Azure Digital Twins that uses the authorization rule to access the hub
+4. Create a route in Azure Digital Twins that sends twin update events to the endpoint and connected twins hub
+5. Get the twins hub connection string
- ```azurecli-interactive
- az dt endpoint create eventhub -n <your Azure Digital Twins instance name> --endpoint-name <name for your Event Hubs endpoint> --eventhub-resource-group <resource group name> --eventhub-namespace <Event Hubs namespace from above> --eventhub <Twins event hub name from above> --eventhub-policy <Twins auth rule from above>
- ```
+Create the **twins hub** with the following CLI command. Specify a name for your twins hub.
-5. Create a [route](concepts-route-events.md#create-an-event-route) in Azure Digital Twins to send twin update events to your endpoint. The filter in this route will only allow twin update messages to be passed to your endpoint.
+```azurecli-interactive
+az eventhubs eventhub create --name <name-for-your-twins-hub> --resource-group <your-resource-group> --namespace-name <your-event-hubs-namespace-from-above>
+```
- >[!NOTE]
- >There is currently a **known issue** in Cloud Shell affecting these command groups: `az dt route`, `az dt model`, `az dt twin`.
- >
- >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [*Troubleshooting: Known issues in Azure Digital Twins*](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
+### Create twins hub authorization rule
- ```azurecli-interactive
- az dt route create -n <your Azure Digital Twins instance name> --endpoint-name <Event Hub endpoint from above> --route-name <name for your route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
- ```
+Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule.
-Before moving on, take note of your *Event Hubs namespace* and *resource group*, as you will use them again to create another event hub later in this article.
+```azurecli-interactive
+az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-twins-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-event-hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-above>
+```
-## Create a function in Azure
+### Create twins hub endpoint
-Next, you'll use Azure Functions to create an **Event Hubs-triggered function** inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), or your own.
+Create an Azure Digital Twins [endpoint](concepts-route-events.md#create-an-endpoint) that links your event hub to your Azure Digital Twins instance. Specify a name for your twins hub endpoint.
-This function will convert those twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
+```azurecli-interactive
+az dt endpoint create eventhub -n <your-Azure-Digital-Twins-instance-name> --eventhub-resource-group <your-resource-group> --eventhub-namespace <your-event-hubs-namespace-from-earlier> --eventhub <your-twins-hub-name-from-above> --eventhub-policy <your-twins-hub-auth-rule-from-earlier> --endpoint-name <name-for-your-twins-hub-endpoint>
+```
-For more information about using Event Hubs with Azure Functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
+### Create twins hub event route
-Inside your published function app, add a new function called **ProcessDTUpdatetoTSI** with the following code.
+Azure Digital Twins instances can emit [twin update events](how-to-interpret-event-data.md) whenever a twin's state is updated. In this section, you'll create an Azure Digital Twins **event route** that will direct these update events to the twins hub for further processing.
+Create a [route](concepts-route-events.md#create-an-event-route) in Azure Digital Twins to send twin update events to your endpoint from above. The filter in this route will only allow twin update messages to be passed to your endpoint. Specify a name for the twins hub event route.
+
+```azurecli-interactive
+az dt route create -n <your-Azure-Digital-Twins-instance-name> --endpoint-name <your-twins-hub-endpoint-from-above> --route-name <name-for-your-twins-hub-event-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
+```
+
+### Get twins hub connection string
+
+Get the [twins event hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the twins hub.
+
+```azurecli-interactive
+az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-event-hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-above> --name <your-twins-hub-auth-rule-from-earlier>
+```
+Take note of the **primaryConnectionString** value from the result to configure the twins hub app setting later in this article.
+
+## Create time series hub
+
+The second event hub you'll create in this article is the **time series hub**. This is an event hub that will stream the Azure Digital Twins events to Time Series Insights.
+To set up the time series hub, you'll complete these steps:
+
+1. Create the time series hub
+2. Create an authorization rule to control permissions to the hub
+3. Get the time series hub connection string
->[!NOTE]
->You may need to add the packages to your project using the `dotnet add package` command or the Visual Studio NuGet package manager.
+Later, when you create the Time Series Insights instance, you'll connect this time series hub as the event source for the Time Series Insights instance.
-Next, **publish** the new Azure function. For instructions on how to do this, see [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md#publish-the-function-app-to-azure).
+Create the **time series hub** using the following command. Specify a name for the time series hub.
-Looking ahead, this function will send the JSON objects it creates to a second event hub, which you will connect to Time Series Insights. You'll create that event hub in the next section.
+```azurecli-interactive
+ az eventhubs eventhub create --name <name-for-your-time-series-hub> --resource-group <your-resource-group> --namespace-name <your-event-hub-namespace-from-earlier>
+```
-Later, you'll also set some environment variables that this function will use to connect to your own event hubs.
+### Create time series hub authorization rule
-## Send telemetry to an event hub
+Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the time series hub auth rule.
-You will now create a second event hub, and configure your function to stream its output to that event hub. This event hub will then be connected to Time Series Insights.
+```azurecli-interactive
+az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-time-series-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-event-hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-above>
+```
-### Create an event hub
+### Get time series hub connection string
-To create the second event hub, you can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Create an event hub using Azure portal*](../event-hubs/event-hubs-create.md).
+Get the [time series hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the time series hub:
-1. Prepare your *Event Hubs namespace* and *resource group* name from earlier in this article
+```azurecli-interactive
+az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-event-hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-earlier> --name <your-time-series-hub-auth-rule-from-earlier>
+```
+Take note of the **primaryConnectionString** value from the result to configure the time series hub app setting later in this article.
-2. Create a new event hub. Specify a name for the event hub.
+Also, take note of the following values to use them later to create a Time Series Insights instance.
+* Event hub namespace
+* Time series hub name
+* Time series hub auth rule
- ```azurecli-interactive
- az eventhubs eventhub create --name <name for your TSI event hub> --resource-group <resource group name from earlier> --namespace-name <Event Hubs namespace from earlier>
- ```
-3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule.
+## Create a function
- ```azurecli-interactive
- az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from earlier> --eventhub-name <TSI event hub name from above> --name <name for your TSI auth rule>
- ```
+In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
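As a rough illustration of that transformation (the exact output shape depends on the function code, which isn't reproduced here, and the model ID is a hypothetical example), a twin update event whose `data` contains a JSON Patch such as:

```json
{
  "modelId": "dtmi:contoso:Thermostat;1",
  "patch": [
    { "op": "replace", "path": "/Temperature", "value": 20.5 }
  ]
}
```

would be flattened into a plain object holding only the updated values, along the lines of:

```json
{
  "Temperature": 20.5
}
```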
-## Configure your function
+### Step 1: Create function app
+
+First, create a new function app project in Visual Studio. For instructions on how to do this, see the [**Create a function app in Visual Studio**](how-to-create-azure-function.md#create-a-function-app-in-visual-studio) section of the *How-to: Set up a function for processing data* article.
+
+### Step 2: Add a new function
+
+Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to send device telemetry events to Time Series Insights. The function type will be **Event Hub trigger**.
++
+### Step 3: Fill in function code
+
+Add the following packages to your project:
+* [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
+* [Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/)
+* [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/)
+
+Replace the code in the *ProcessDTUpdatetoTSI.cs* file with the following code:
+
-Next, you'll need to set environment variables in your function app from earlier, containing the connection strings for the event hubs you've created.
+Save your function code.
-### Set the Twins event hub connection string
+### Step 4: Publish the function app to Azure
-1. Get the Twins [event hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the Twins hub.
+Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure.
- ```azurecli-interactive
- az eventhubs eventhub authorization-rule keys list --resource-group <resource group name> --namespace-name <Event Hubs namespace> --eventhub-name <Twins event hub name from earlier> --name <Twins auth rule from earlier>
- ```
+For instructions on how to do this, see the section [**Publish the function app to Azure**](how-to-create-azure-function.md#publish-the-function-app-to-azure) of the *How-to: Set up a function for processing data* article.
-2. Use the *primaryConnectionString* value from the result to create an app setting in your function app that contains your connection string:
+Save the function app name to use later to configure app settings for the two event hubs.
- ```azurecli-interactive
- az functionapp config appsettings set --settings "EventHubAppSetting-Twins=<Twins event hub connection string>" -g <resource group> -n <your App Service (function app) name>
- ```
+### Step 5: Security access for the function app
-### Set the Time Series Insights event hub connection string
+Next, **assign an access role** for the function and **configure the application settings** so that it can access your Azure Digital Twins instance. For instructions on how to do this, see the section [**Set up security access for the function app**](how-to-create-azure-function.md#set-up-security-access-for-the-function-app) of the *How-to: Set up a function for processing data* article.
-1. Get the TSI [event hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the Time Series Insights hub:
+### Step 6: Configure app settings for the two event hubs
- ```azurecli-interactive
- az eventhubs eventhub authorization-rule keys list --resource-group <resource group name> --namespace-name <Event Hubs namespace> --eventhub-name <TSI event hub name> --name <TSI auth rule>
- ```
+Next, you'll add environment variables in the function app's settings that allow it to access the twins hub and time series hub.
-2. Use the *primaryConnectionString* value from the result to create an app setting in your function app that contains your connection string:
+Use the twins hub **primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the twins hub connection string:
- ```azurecli-interactive
- az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<TSI event hub connection string>" -g <resource group> -n <your App Service (function app) name>
- ```
+```azurecli-interactive
+az functionapp config appsettings set --settings "EventHubAppSetting-Twins=<your-twins-hub-primaryConnectionString>" -g <your-resource-group> -n <your-App-Service-(function-app)-name>
+```
+
+Use the time series hub **primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the time series hub connection string:
+
+```azurecli-interactive
+az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-time-series-hub-primaryConnectionString>" -g <your-resource-group> -n <your-App-Service-(function-app)-name>
+```
## Create and connect a Time Series Insights instance
-Next, you will set up a Time Series Insights instance to receive the data from your second (TSI) event hub. Follow the steps below, and for more details about this process, see [*Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment*](../time-series-insights/tutorial-set-up-environment.md).
+In this section, you'll set up a Time Series Insights instance to receive data from your time series hub. For more details about this process, see [*Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment*](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a Time Series Insights environment.
-1. In the Azure portal, begin creating a Time Series Insights environment.
- 1. Select the **Gen2(L1)** pricing tier.
- 2. You will need to choose a **time series ID** for this environment. Your time series ID can be up to three values that you will use to search for your data in Time Series Insights. For this tutorial, you can use **$dtId**. Read more about selecting an ID value in [*Best practices for choosing a Time Series ID*](../time-series-insights/how-to-select-tsid.md).
-
- :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png" alt-text="The creation portal UX for a Time Series Insights environment. Select your subscription, resource group, and location from the respective dropdowns and choose a name for your environment." lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png":::
-
- :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png" alt-text="The creation portal UX for a Time Series Insights environment. The Gen2(L1) pricing tier is selected and the time series ID property name is $dtId" lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png":::
+1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Add** button. Choose the following options to create the time series environment.
+
+ * **Subscription** - Choose your subscription.
+ - **Resource group** - Choose your resource group.
+ * **Environment name** - Specify a name for your time series environment.
+ * **Location** - Choose a location.
+ * **Tier** - Choose the **Gen2(L1)** pricing tier.
+ * **Property name** - Enter **$dtId** (Read more about selecting an ID value in [*Best practices for choosing a Time Series ID*](../time-series-insights/how-to-select-tsid.md)).
+ * **Storage account name** - Specify a storage account name.
+ * **Enable warm store** - Leave this field set to *Yes*.
-2. Select **Next: Event Source** and select your TSI event hub information from earlier. You will also need to create a new Event Hubs consumer group.
+ You can leave default values for other properties on this page. Select the **Next : Event Source >** button.
+
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png" alt-text="Screenshot of the Azure portal to create Time Series Insights environment. Select your subscription, resource group, and location from the respective dropdowns and choose a name for your environment." lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png":::
+
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png" alt-text="Screenshot of the Azure portal to create Time Series Insights environment. The Gen2(L1) pricing tier is selected and the time series ID property name is $dtId." lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png":::
+
+2. In the *Event Source* tab, choose the following fields:
+
+ * **Create an event source?** - Choose *Yes*.
+ * **Source type** - Choose *Event Hub*.
+ * **Name** - Specify a name for your event source.
+ * **Subscription** - Choose your Azure subscription.
+ * **Event Hub namespace** - Choose the namespace that you created earlier in this article.
+ * **Event Hub name** - Choose the **time series hub** name that you created earlier in this article.
+ * **Event Hub access policy name** - Choose the *time series hub auth rule* that you created earlier in this article.
+ * **Event Hub consumer group** - Select *New* and specify a name for your event hub consumer group. Then, select *Add*.
+ * **Property name** - Leave this field blank.
- :::image type="content" source="media/how-to-integrate-time-series-insights/event-source-twins.png" alt-text="The creation portal UX for a Time Series Insights environment event source. You are creating an event source with the event hub information from above. You are also creating a new consumer group." lightbox="media/how-to-integrate-time-series-insights/event-source-twins.png":::
 Choose the **Review + Create** button to review all the details. Then, select **Create** to create the time series environment.
+
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-tsi-environment-event-source.png" alt-text="Screenshot of the Azure portal to create Time Series Insights environment. You are creating an event source with the event hub information from above. You are also creating a new consumer group." lightbox="media/how-to-integrate-time-series-insights/create-tsi-environment-event-source.png":::
+
+## Send IoT data to Azure Digital Twins
-## Begin sending IoT data to Azure Digital Twins
+To begin sending data to Time Series Insights, you'll need to start updating the digital twin properties in Azure Digital Twins with changing data values.
-To begin sending data to Time Series Insights, you will need to start updating the digital twin properties in Azure Digital Twins with changing data values. Use the [az dt twin update](/cli/azure/dt/twin#az_dt_twin_update) command.
+Use the following CLI command to update the *Temperature* property on the *thermostat67* twin that you added to your instance in the [Prerequisites](#prerequisites) section.
-If you are using the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)) to assist with environment setup, you can begin sending simulated IoT data by running the *DeviceSimulator* project from the sample. The instructions are in the [*Configure and run the simulation*](tutorial-end-to-end.md#configure-and-run-the-simulation) section of the tutorial.
+```azurecli-interactive
+az dt twin update -n <your-azure-digital-twins-instance-name> --twin-id thermostat67 --json-patch '{"op":"replace", "path":"/Temperature", "value": 20.5}'
+```
+
+**Repeat the command at least 4 more times with different temperature values**, to create several data points that can be observed later in the Time Series Insights environment.
+
+> [!TIP]
+> If you'd like to complete this article with live simulated data instead of manually updating the digital twin values, first make sure you've completed the TIP from the [*Prerequisites*](#prerequisites) section to set up an Azure function that updates twins from a simulated device.
+After that, you can run the device simulator to start sending simulated data and updating your digital twin through that data flow.
## Visualize your data in Time Series Insights

Now, data should be flowing into your Time Series Insights instance, ready to be analyzed. Follow the steps below to explore the data coming in.
-1. Open your Time Series Insights environment in the [Azure portal](https://portal.azure.com) (you can search for the name of your environment in the portal search bar). Visit the *Time Series Insights Explorer URL* shown in the instance overview.
-
- :::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Select the Time Series Insights explorer URL in the overview tab of your Time Series Insights environment":::
+1. In the [Azure portal](https://portal.azure.com), search for your time series environment name that you created earlier. In the menu options on the left, select *Overview* to see the *Time Series Insights Explorer URL*. Select the URL to view the temperature changes reflected in the Time Series Insights environment.
-2. In the explorer, you will see your three twins from Azure Digital Twins shown on the left. Select _**thermostat67**_, select **temperature**, and hit **add**.
+ :::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Screenshot of the Azure portal to select the Time Series Insights explorer URL in the overview tab of your Time Series Insights environment." lightbox="media/how-to-integrate-time-series-insights/view-environment.png":::
- :::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Select **thermostat67**, select **temperature**, and hit **add**":::
+2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the *thermostat67* twin, choose the property *Temperature*, and hit **Add**.
-3. You should now be seeing the initial temperature readings from your thermostat, as shown below. That same temperature reading is updated for *room21* and *floor1*, and you can visualize those data streams in tandem.
-
- :::image type="content" source="media/how-to-integrate-time-series-insights/initial-data.png" alt-text="Initial temperature data is graphed in the TSI explorer. It is a line of random values between 68 and 85":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer to select thermostat67, select the property temperature, and hit add." lightbox="media/how-to-integrate-time-series-insights/add-data.png":::
-4. If you allow the simulation to run for much longer, your visualization will look something like this:
-
- :::image type="content" source="media/how-to-integrate-time-series-insights/day-data.png" alt-text="Temperature data for each twin is graphed in three parallel lines of different colors.":::
+3. You should now see the initial temperature readings from your thermostat, as shown below.
+
+ :::image type="content" source="media/how-to-integrate-time-series-insights/initial-data.png" alt-text="Screenshot of the TSI explorer to view the initial temperature data. It is a line of random values between 68 and 85" lightbox="media/how-to-integrate-time-series-insights/initial-data.png":::
+
+If you allow a simulation to run for much longer, your visualization will look something like this:
+ ## Next steps
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
[!INCLUDE [digital-twins-route-selector.md](../../includes/digital-twins-route-selector.md)]
-In Azure Digital Twins, you can route [event notifications](how-to-interpret-event-data.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [**event routes**](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
+In Azure Digital Twins, you can route [event notifications](concepts-event-notifications.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [**event routes**](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
This article walks you through the process of creating endpoints and routes with the [REST APIs](/rest/api/azure-digitaltwins/), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client), and the [Azure Digital Twins CLI](how-to-use-cli.md).
Once the endpoint with dead-lettering is set up, dead-lettered messages will be
Dead-lettered messages will match the schema of the original event that was intended to be delivered to your original endpoint.
-Here is an example of a dead-letter message for a [twin create notification](how-to-interpret-event-data.md#digital-twin-lifecycle-notifications):
+Here is an example of a dead-letter message for a [twin create notification](concepts-event-notifications.md#digital-twin-lifecycle-notifications):
```json {
Here are the supported route filters. Use the detail in the *Filter text schema*
## Next steps Read about the different types of event messages you can receive:
-* [*How-to: Interpret event data*](how-to-interpret-event-data.md)
+* [*Concepts: Event notifications*](concepts-event-notifications.md)
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
[!INCLUDE [digital-twins-route-selector.md](../../includes/digital-twins-route-selector.md)]
-In Azure Digital Twins, you can route [event notifications](how-to-interpret-event-data.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [**event routes**](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
+In Azure Digital Twins, you can route [event notifications](concepts-event-notifications.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [**event routes**](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
This article walks you through the process of creating endpoints and routes using the [Azure portal](https://portal.azure.com).
Here are the supported route filters. The detail in the *Filter text schema* col
## Next steps Read about the different types of event messages you can receive:
-* [*How-to: Interpret event data*](how-to-interpret-event-data.md)
+* [*Concepts: Event notifications*](concepts-event-notifications.md)
digital-twins Troubleshoot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-diagnostics.md
description: See how to enable logging with diagnostics settings and query the logs for immediate viewing. Previously updated : 2/24/2021 Last updated : 11/9/2020
Here are more details about the categories of logs that Azure Digital Twins coll
| ADTModelsOperation | Log all API calls pertaining to Models |
| ADTQueryOperation | Log all API calls pertaining to Queries |
| ADTEventRoutesOperation | Log all API calls pertaining to Event Routes as well as egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs and Service Bus |
-| ADTDigitalTwinsOperation | Log all API calls pertaining individual twins |
+| ADTDigitalTwinsOperation | Log all API calls pertaining to Azure Digital Twins |
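As a hedged sketch of how these categories are switched on, the Azure CLI call below creates a diagnostic setting that routes two of them to a Log Analytics workspace; the setting name, resource IDs, and category selection are placeholders and only illustrate the shape of the command.

```azurecli
# Placeholder resource IDs; any subset of the categories above can be listed.
az monitor diagnostic-settings create \
  --name adt-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "ADTDigitalTwinsOperation", "enabled": true},
           {"category": "ADTEventRoutesOperation", "enabled": true}]'
```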
Each log category consists of operations of write, read, delete, and action. These map to REST API calls as follows:
Here is a comprehensive list of the operations and corresponding [Azure Digital
Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
-`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema; `ADTEventRoutesOperation` has its own separate schema.
### API log schemas
-This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, with the **exception** of the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [*Egress log schemas*](#egress-log-schemas)).
-
-The schema contains information pertinent to API calls to an Azure Digital Twins instance.
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. It contains information pertinent to API calls to an Azure Digital Twins instance.
Here are the field and property descriptions for API logs.
| `DurationMs` | String | How long it took to perform the event in milliseconds |
| `CallerIpAddress` | String | A masked source IP address for the event |
| `CorrelationId` | Guid | Customer provided unique identifier for the event |
-| `ApplicationId` | Guid | Application ID used in bearer authorization |
-| `Level` | Int | The logging severity of the event |
+| `Level` | String | The logging severity of the event |
| `Location` | String | The region where the event took place |
| `RequestUri` | Uri | The endpoint utilized during the event |
-| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
-| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
-| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
-| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
-| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "200", "resultDescription": "",
- "durationMs": 8,
+ "durationMs": "314",
"callerIpAddress": "13.68.244.*", "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
- "identity": {
- "claims": {
- "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
- }
- },
"level": "4", "location": "southcentralus",
- "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
- "properties": {},
- "traceContext": {
- "traceId": "95ff77cfb300b04f80d83e64d13831e7",
- "spanId": "b630da57026dd046",
- "parentId": "9f0de6dadae85945",
- "traceFlags": "01",
- "tracestate": "k1=v1,k2=v2"
- }
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31"
} ```
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "201", "resultDescription": "",
- "durationMs": "80",
+ "durationMs": "935",
"callerIpAddress": "13.68.244.*", "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
- "identity": {
- "claims": {
- "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
- }
- },
"level": "4", "location": "southcentralus", "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
- "properties": {},
- "traceContext": {
- "traceId": "95ff77cfb300b04f80d83e64d13831e7",
- "spanId": "b630da57026dd046",
- "parentId": "9f0de6dadae85945",
- "traceFlags": "01",
- "tracestate": "k1=v1,k2=v2"
- }
} ```
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "200", "resultDescription": "",
- "durationMs": "314",
+ "durationMs": "255",
"callerIpAddress": "13.68.244.*", "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
- "identity": {
- "claims": {
- "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
- }
- },
"level": "4", "location": "southcentralus", "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
- "properties": {},
- "traceContext": {
- "traceId": "95ff77cfb300b04f80d83e64d13831e7",
- "spanId": "b630da57026dd046",
- "parentId": "9f0de6dadae85945",
- "traceFlags": "01",
- "tracestate": "k1=v1,k2=v2"
- }
} ```
-#### ADTEventRoutesOperation
-
-Here is an example JSON body for an `ADTEventRoutesOperation` that is **not** of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [*Egress log schemas*](#egress-log-schemas)).
-
-```json
- {
- "time": "2020-10-30T22:18:38.0708705Z",
- "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
- "operationName": "Microsoft.DigitalTwins/eventroutes/write",
- "operationVersion": "2020-10-31",
- "category": "EventRoutesOperation",
- "resultType": "Success",
- "resultSignature": "204",
- "resultDescription": "",
- "durationMs": 42,
- "callerIpAddress": "212.100.32.*",
- "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
- "identity": {
- "claims": {
- "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
- }
- },
- "level": "4",
- "location": "southcentralus",
- "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
- "properties": {},
- "traceContext": {
- "traceId": "95ff77cfb300b04f80d83e64d13831e7",
- "spanId": "b630da57026dd046",
- "parentId": "9f0de6dadae85945",
- "traceFlags": "01",
- "tracestate": "k1=v1,k2=v2"
- }
- },
-```
- ### Egress log schemas
-This is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These contain details pertaining to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+This is the schema for `ADTEventRoutesOperation` logs. These contain details pertaining to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
|Field name | Data type | Description |
|--|--|--|
This is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft
| `OperationName` | String | The type of action being performed during the event |
| `Category` | String | The type of resource being emitted |
| `ResultDescription` | String | Additional details about the event |
-| `CorrelationId` | Guid | Customer provided unique identifier for the event |
-| `ApplicationId` | Guid | Application ID used in bearer authorization |
-| `Level` | Int | The logging severity of the event |
+| `Level` | String | The logging severity of the event |
| `Location` | String | The region where the event took place |
-| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
-| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
-| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
-| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
-| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
Below are example JSON bodies for these types of logs.
-#### ADTEventRoutesOperation for Microsoft.DigitalTwins/eventroutes/action
-
-Here is an example JSON body for an `ADTEventRoutesOperation` that of `Microsoft.DigitalTwins/eventroutes/action` type.
+#### ADTEventRoutesOperation
```json { "time": "2020-11-05T22:18:38.0708705Z", "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME", "operationName": "Microsoft.DigitalTwins/eventroutes/action",
- "operationVersion": "",
"category": "EventRoutesOperation",
- "resultType": "",
- "resultSignature": "",
- "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
- "durationMs": -1,
- "callerIpAddress": "",
+ "resultDescription": "Unable to send EventGrid message to [my-event-grid.westus-1.eventgrid.azure.net] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
"correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
- "identity": {
- "claims": {
- "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
- }
- },
- "level": "4",
+ "level": "3",
"location": "southcentralus",
- "uri": "",
"properties": {
- "endpointName": "myEventHub"
- },
- "traceContext": {
- "traceId": "95ff77cfb300b04f80d83e64d13831e7",
- "spanId": "b630da57026dd046",
- "parentId": "9f0de6dadae85945",
- "traceFlags": "01",
- "tracestate": "k1=v1,k2=v2"
+ "endpointName": "endpointEventGridInvalidKey"
}
-},
+}
``` ## View and query logs
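As one hedged example of querying these logs, assuming they are routed to a Log Analytics workspace with resource-specific tables named after the categories above (and that the `log-analytics` CLI extension is installed), a query along these lines surfaces recent event-route entries; the workspace GUID is a placeholder and column names should be verified against your workspace schema.

```azurecli
# Placeholder workspace GUID; adjust the KQL to the columns in your workspace.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "ADTEventRoutesOperation
    | where TimeGenerated > ago(1d)
    | project TimeGenerated, OperationName, ResultDescription
    | order by TimeGenerated desc"
```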
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-overview.md
Previously updated : 6/12/2019 Last updated : 04/09/2021 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service. # What is Azure Private DNS?
-The Domain Name System, or DNS, is responsible for translating (or resolving) a service name to its IP address. Azure DNS is a hosting service for DNS domains, providing name resolution using the Microsoft Azure infrastructure. In addition to supporting internet-facing DNS domains, Azure DNS also supports private DNS zones.
+The Domain Name System, or DNS, is responsible for translating (or resolving) a service name to an IP address. Azure DNS is a hosting service for domains and provides name resolution using the Microsoft Azure infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS zones.
-Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Using custom domain names helps you to tailor your virtual network architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zones names with a split-horizon view, which allows a private and a public DNS zone to share the name.
+Azure Private DNS provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual network and across connected virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the same name.
-To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. Additionally, you can also enable autoregistration on a virtual network link. If you enable autoregistration on a virtual network link, the DNS records for the virtual machines on that virtual network are registered in the private zone. When autoregistration is enabled, Azure DNS also updates the zone records whenever a virtual machine is created, changes its' IP address, or is deleted.
+To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. You can also enable autoregistration on a virtual network link. When you enable autoregistration on a virtual network link, the DNS records for the virtual machines in that virtual network are registered in the private zone. When autoregistration is enabled, Azure DNS updates the zone records whenever a virtual machine is created, changes its IP address, or is deleted.
![DNS overview](./media/private-dns-overview/scenario.png)
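As a minimal sketch of the linking and autoregistration described above (zone, resource group, link, and virtual network names are placeholders), the Azure CLI can create a private zone and link a virtual network to it with autoregistration turned on:

```azurecli
# Placeholder names; the zone name can be any custom domain you choose.
az network private-dns zone create \
  --resource-group <resource-group> \
  --name private.contoso.com

# Link a virtual network and enable autoregistration of its VM records.
az network private-dns link vnet create \
  --resource-group <resource-group> \
  --zone-name private.contoso.com \
  --name myVNetLink \
  --virtual-network <vnet-name-or-id> \
  --registration-enabled true
```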
Azure Private DNS provides the following benefits:
Azure DNS provides the following capabilities:
-* **Automatic registration of virtual machines from a virtual network that's linked to a private zone with autoregistration enabled**. The virtual machines are registered (added) to the private zone as A records pointing to their private IP addresses. When a virtual machine in a virtual network link with autoregistration enabled is deleted, Azure DNS also automatically removes the corresponding DNS record from the linked private zone.
+* **Automatic registration of virtual machines from a virtual network that's linked to a private zone with autoregistration enabled**. Virtual machines are registered (added) to the private zone as A records pointing to their private IP addresses. When a virtual machine in a virtual network link with autoregistration enabled is deleted, Azure DNS automatically removes the corresponding DNS record from the linked private zone.
* **Forward DNS resolution is supported across virtual networks that are linked to the private zone**. For cross-virtual network DNS resolution, the virtual networks don't need to be peered with each other. However, you might want to peer virtual networks for other scenarios (for example, HTTP traffic).
-* **Reverse DNS lookup is supported within the virtual-network scope**. Reverse DNS lookup for a private IP within the virtual network assigned to a private zone returns the FQDN that includes the host/record name and the zone name as the suffix.
+* **Reverse DNS lookup is supported within the virtual-network scope**. Reverse DNS lookup for a private IP associated with a private zone will return an FQDN that includes the host/record name and the zone name as the suffix.
## Other considerations
Azure DNS has the following limitations:
* A specific virtual network can be linked to only one private zone if automatic registration of VM DNS records is enabled. You can however link multiple virtual networks to a single DNS zone. * Reverse DNS works only for private IP space in the linked virtual network
-* Reverse DNS for a private IP address for a linked virtual network returns *internal.cloudapp.net* as the default suffix for the virtual machine. For virtual networks that are linked to a private zone with autoregistration enabled, reverse DNS for a private IP address returns two FQDNs: one with default the suffix *internal.cloudapp.net* and another with the private zone suffix.
-* Conditional forwarding is not currently natively supported. To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
+* Reverse DNS for a private IP address in a linked virtual network will return `internal.cloudapp.net` as the default suffix for the virtual machine. For virtual networks that are linked to a private zone with autoregistration enabled, reverse DNS for a private IP address returns two FQDNs: one with the default suffix `internal.cloudapp.net` and another with the private zone suffix.
+* Conditional forwarding isn't currently natively supported. To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
## Pricing
For pricing information, see [Azure DNS Pricing](https://azure.microsoft.com/pri
* Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS.
-* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.md).
+* For common questions and answers about private zones in Azure DNS, see [Private DNS FAQ](./dns-faq-private.md).
* Learn about DNS zones and records by visiting [DNS zones and records overview](dns-zones-records.md).
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-filtering.md
FOR_EACH filter IN (a, b, c)
``` ## StringIn
-The **StringIn** operator checks whether the **key** value **exactly matches** one of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section is `exact` or `string` or `matches`.
+The **StringIn** operator checks whether the **key** value **exactly matches** one of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section is `contoso` or `fabrikam` or `factory`.
```json "advancedFilters": [{
event-grid Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/troubleshoot-errors.md
Title: Azure Event Grid - Troubleshooting guide description: This article provides a list of error codes, error messages, descriptions, and recommended actions. Previously updated : 07/07/2020 Last updated : 04/09/2021 # Troubleshoot Azure Event Grid errors
This troubleshooting guide provides you the following information:
| - | - | -- | -- |
| HttpStatusCode.Conflict <br/>409 | Topic with the specified name already exists. Choose a different topic name. | The custom topic name should be unique in a single Azure region to ensure a correct publishing operation. The same name can be used in different Azure regions. | Choose a different name for the topic. |
| HttpStatusCode.Conflict <br/> 409 | Domain with the specified name already exists. Choose a different domain name. | The domain name should be unique in a single Azure region to ensure a correct publishing operation. The same name can be used in different Azure regions. | Choose a different name for the domain. |
-| HttpStatusCode.Conflict<br/>409 | Quota limit reached. For more information on these limits, see [Azure Event Grid limits](../azure-resource-manager/management/azure-subscription-service-limits.md#event-grid-limits). | Each Azure subscription has a limit on the number of Azure Event Grid resources that it can use. Some or all of this quota had been exceeded and no more resources could be created. | Check your current resources usage and delete any that aren't needed. If you still need to increase your quota, send an email to [aeg@microsoft.com](mailto:aeg@microsoft.com) with the exact number of resources needed. |
+| HttpStatusCode.Conflict<br/>409 | Quota limit reached. For more information on these limits, see [Azure Event Grid limits](../azure-resource-manager/management/azure-subscription-service-limits.md#event-grid-limits). | Each Azure subscription has a limit on the number of Azure Event Grid resources that it can use. Some or all of this quota had been exceeded and no more resources could be created. | Check your current resources usage and delete any that aren't needed. If you can't delete any resources, create another Azure subscription and create Event Grid resources in that subscription. |
## Error code: 403
event-hubs Apache Kafka Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-developer-guide.md
Review samples in the GitHub repo [azure-event-hubs-for-kafka](https://github.co
Also, see the following articles: - [Apache Kafka troubleshooting guide for Event Hubs](apache-kafka-troubleshooting-guide.md)-- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.md)
+- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.yml)
- [Apache Kafka migration guide for Event Hubs](apache-kafka-migration-guide.md)
event-hubs Apache Kafka Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-frequently-asked-questions.md
- Title: Frequently asked questions - Azure Event Hubs for Apache Kafka
-description: This article answers frequent questions asked about Azure Event Hubs' support for Apache Kafka clients not covered elsewhere.
- Previously updated : 06/23/2020--
-# Frequently asked questions - Event Hubs for Apache Kafka
-This article provides answers to some of the frequently asked questions on migrating to Event Hubs for Apache Kafka.
-
-## Does Azure Event Hubs run on Apache Kafka?
-
-No. Azure Event Hubs is a cloud-native multi-tier broker with support for multiple protocols that is developed and maintained by Microsoft and does not use any Apache Kafka code. One of the supported protocols is the Kafka RPC protocol for the Kafka client's consumer and producer APIs. Event Hubs works with many of your existing Kafka applications. For more information, see [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md). Because the concepts of Apache Kafka and Azure Event Hubs are very similar (but not identical), we are able to offer the unmatched reliability of Azure Event Hubs to customers with existing Apache Kafka investments.
-
-## Event Hubs consumer group vs. Kafka consumer group
-What's the difference between an Event Hub consumer group and a Kafka consumer group on Event Hubs? Kafka consumer groups on Event Hubs are fully distinct from standard Event Hubs consumer groups.
-
-**Event Hubs consumer groups**
--- They are Managed with create, retrieve, update, and delete (CRUD) operations via portal, SDK, or Azure Resource Manager templates. Event Hubs consumer groups can't be autocreated.-- They are children entities of an event hub. It means that the same consumer group name can be reused between event hubs in the same namespace because they're separate entities.-- They aren't used for storing offsets. Orchestrated AMQP consumption is done using external offset storage, for example, Azure Storage.-
-**Kafka consumer groups**
--- They are autocreated. Kafka groups can be managed via the Kafka consumer group APIs.-- They can store offsets in the Event Hubs service.-- They are used as keys in what is effectively an offset key-value store. For a unique pair of `group.id` and `topic-partition`, we store an offset in Azure Storage (3x replication). Event Hubs users don't incur extra storage costs from storing Kafka offsets. Offsets are manipulable via the Kafka consumer group APIs, but the offset storage *accounts* aren't directly visible or manipulable for Event Hub users. -- They span a namespace. Using the same Kafka group name for multiple applications on multiple topics means that all applications and their Kafka clients will be rebalanced whenever only a single application needs rebalancing. Choose your group names wisely.-- They fully distinct from Event Hubs consumer groups. You **don't** need to use '$Default', nor do you need to worry about Kafka clients interfering with AMQP workloads.-- They aren't viewable in the Azure portal. Consumer group info is accessible via Kafka APIs.-
-## Next steps
-To learn more about Event Hubs and Event Hubs for Kafka, see the following articles:
--- [Apache Kafka developer guide for Event Hubs](apache-kafka-developer-guide.md)-- [Apache Kafka migration guide for Event Hubs](apache-kafka-migration-guide.md)-- [Apache Kafka troubleshooting guide for Event Hubs](apache-kafka-troubleshooting-guide.md)-- [Recommended configurations](apache-kafka-configurations.md)-
event-hubs Apache Kafka Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-migration-guide.md
Run your Kafka application that sends events to the event hub. Then, verify that
To learn more about Event Hubs and Event Hubs for Kafka, see the following articles: - [Apache Kafka troubleshooting guide for Event Hubs](apache-kafka-troubleshooting-guide.md)-- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.md)
+- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.yml)
- [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md) - [Recommended configurations](apache-kafka-configurations.md)
event-hubs Apache Kafka Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/apache-kafka-troubleshooting-guide.md
To learn more about Event Hubs and Event Hubs for Kafka, see the following artic
- [Apache Kafka developer guide for Event Hubs](apache-kafka-developer-guide.md) - [Apache Kafka migration guide for Event Hubs](apache-kafka-migration-guide.md)-- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.md)
+- [Frequently asked questions - Event Hubs for Apache Kafka](apache-kafka-frequently-asked-questions.yml)
- [Recommended configurations](apache-kafka-configurations.md)
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-about.md
To get started using Event Hubs, see the **Send and receive events** tutorials:
To learn more about Event Hubs, see the following articles: - [Event Hubs features overview](event-hubs-features.md)-- [Frequently asked questions](event-hubs-faq.md).
+- [Frequently asked questions](event-hubs-faq.yml).
event-hubs Event Hubs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-create.md
An Event Hubs namespace provides a unique scoping container, in which you create
1. Select the **resource group** you created in the previous step. 1. Enter a **name** for the namespace. The system immediately checks to see if the name is available. 1. Select a **location** for the namespace.
- 1. Choose the **pricing tier** (Basic or Standard). To learn about some of the differences between basic and standard tiers, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/), [Differences between tiers](event-hubs-faq.md#what-is-the-difference-between-event-hubs-basic-and-standard-tiers), and [Quotas and limits](event-hubs-quotas.md).
+ 1. Choose the **pricing tier** (Basic or Standard). To learn about some of the differences between basic and standard tiers, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/), [Differences between tiers](event-hubs-faq.yml#what-is-the-difference-between-event-hubs-basic-and-standard-tiers-), and [Quotas and limits](event-hubs-quotas.md).
1. Leave the **throughput units** setting as it is. Throughput units are pre-purchased units of capacity. To learn about throughput units, see [Event Hubs scalability](event-hubs-scalability.md#throughput-units). 1. Select **Review + Create** at the bottom of the page. A scripted equivalent of these steps is sketched below.
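The sketch below is that scripted equivalent, shown only as a rough illustration; the resource group, namespace, and event hub names are placeholders, and the pricing tier is set to Standard to match the portal walkthrough.

```azurecli
# Placeholder names; pick a globally unique namespace name.
az eventhubs namespace create \
  --resource-group <resource-group> \
  --name <namespace-name> \
  --location <region> \
  --sku Standard

# Create an event hub inside the namespace.
az eventhubs eventhub create \
  --resource-group <resource-group> \
  --namespace-name <namespace-name> \
  --name <event-hub-name>
```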
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dedicated-overview.md
Contact your Microsoft sales representative or Microsoft Support to get addition
- [Create an Event Hubs cluster through the Azure portal](https://aka.ms/eventhubsclusterquickstart) - [Event Hubs Dedicated pricing](https://azure.microsoft.com/pricing/details/event-hubs/). You can also contact your Microsoft sales representative or Microsoft Support to get additional details about Event Hubs Dedicated capacity.-- The [Event Hubs FAQ](event-hubs-faq.md) contains pricing information and answers some frequently asked questions about Event Hubs.
+- The [Event Hubs FAQ](event-hubs-faq.yml) contains pricing information and answers some frequently asked questions about Event Hubs.
event-hubs Event Hubs Dotnet Framework Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-framework-getstarted-send.md
Read the following articles:
- [EventProcessorHost](event-hubs-event-processor-host.md) - [Features and terminology in Azure Event Hubs](event-hubs-features.md).-- [Event Hubs FAQ](event-hubs-faq.md)
+- [Event Hubs FAQ](event-hubs-faq.yml)
<!-- Links -->
event-hubs Event Hubs Dotnet Standard Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-get-started-send-legacy.md
Read the following articles:
These samples use the old **Microsoft.Azure.EventHubs** library, but you can easily update it to using the latest **Azure.Messaging.EventHubs** library. To move the sample from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md). - [EventProcessorHost](event-hubs-event-processor-host.md) - [Features and terminology in Azure Event Hubs](event-hubs-features.md)-- [Event Hubs FAQ](event-hubs-faq.md)
+- [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Event Processor Host https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-event-processor-host.md
Now that you're familiar with the Event Processor Host, see the following articl
- [JavaScript](event-hubs-node-get-started-send.md) * [Event Hubs programming guide](event-hubs-programming-guide.md) * [Availability and consistency in Event Hubs](event-hubs-availability-and-consistency.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
* [Event Hubs samples on GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples)
event-hubs Event Hubs Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-faq.md
- Title: Frequently asked questions - Azure Event Hubs | Microsoft Docs
-description: This article provides a list of frequently asked questions (FAQ) for Azure Event Hubs and their answers.
- Previously updated : 01/20/2021--
-# Event Hubs frequently asked questions
-
-## General
-
-### What is an Event Hubs namespace?
-A namespace is a scoping container for Event Hub/Kafka Topics. It gives you a unique [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name). A namespace serves as an application container that can house multiple Event Hub/Kafka Topics.
-
-### When do I create a new namespace vs. use an existing namespace?
-Capacity allocations ([throughput units (TUs)](#throughput-units)) are billed at the namespace level. A namespace is also associated with a region.
-
-You may want to create a new namespace instead of using an existing one in one of the following scenarios:
--- You need an Event Hub associated with a new region.-- You need an Event Hub associated with a different subscription.-- You need an Event Hub with a distinct capacity allocation (that is, the capacity need for the namespace with the added event hub would exceed the 40 TU threshold and you don't want to go for the dedicated cluster) -
-### What is the difference between Event Hubs Basic and Standard tiers?
-
-The Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following features are included with Standard:
-
-* Longer event retention
-* Additional brokered connections, with an overage charge for more than the number included
-* More than a single [consumer group](event-hubs-features.md#consumer-groups)
-* [Capture](event-hubs-capture-overview.md)
-* [Kafka integration](event-hubs-for-kafka-ecosystem-overview.md)
-
-For more information about pricing tiers, including Event Hubs Dedicated, see the [Event Hubs pricing details](https://azure.microsoft.com/pricing/details/event-hubs/).
-
-### Where is Azure Event Hubs available?
-
-Azure Event Hubs is available in all supported Azure regions. For a list, visit the [Azure regions](https://azure.microsoft.com/regions/) page.
-
-### Can I use a single Advanced Message Queuing Protocol (AMQP) connection to send and receive from multiple event hubs?
-
-Yes, as long as all the event hubs are in the same namespace.
-
-### What is the maximum retention period for events?
-
-Event Hubs Standard tier currently supports a maximum retention period of seven days. Event hubs aren't intended as a permanent data store. Retention periods greater than 24 hours are intended for scenarios in which it's convenient to replay an event stream into the same systems. For example, to train or verify a new machine learning model on existing data. If you need message retention beyond seven days, enabling [Event Hubs Capture](event-hubs-capture-overview.md) on your event hub pulls the data from your event hub into the Storage account or Azure Data Lake Service account of your choosing. Enabling Capture incurs a charge based on your purchased throughput units.
-
-You can configure the retention period for the captured data on your storage account. The **lifecycle management** feature of Azure Storage offers a rich, rule-based policy for general purpose v2 and blob storage accounts. Use the policy to transition your data to the appropriate access tiers or expire at the end of the data's lifecycle. For more information, see [Manage the Azure Blob storage lifecycle](../storage/blobs/storage-lifecycle-management-concepts.md).
-
-### How do I monitor my Event Hubs?
-Event Hubs emits exhaustive metrics that provide the state of your resources to [Azure Monitor](../azure-monitor/overview.md). They also let you assess the overall health of the Event Hubs service not only at the namespace level but also at the entity level. Learn about what monitoring is offered for [Azure Event Hubs](event-hubs-metrics-azure-monitor.md).
-
-### <a name="in-region-data-residency"></a>Where does Azure Event Hubs store data?
-Azure Event Hubs standard and dedicated tiers store metadata and data in regions that you select. When geo-disaster recovery is set up for an Azure Event Hubs namespace, metadata is copied over to the secondary region that you select. Therefore, this service automatically satisfies the region data residency requirements including the ones specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
--
-## Apache Kafka integration
-
-### How do I integrate my existing Kafka application with Event Hubs?
-Event Hubs provides a Kafka endpoint that can be used by your existing Apache Kafka based applications. A configuration change is all that is required to have the PaaS Kafka experience. It provides an alternative to running your own Kafka cluster. Event Hubs supports Apache Kafka 1.0 and newer client versions and works with your existing Kafka applications, tools, and frameworks. For more information, see [Event Hubs for Kafka repo](https://github.com/Azure/azure-event-hubs-for-kafka).
-
-### What configuration changes need to be done for my existing application to talk to Event Hubs?
-To connect to an event hub, you'll need to update the Kafka client configs. It's done by creating an Event Hubs namespace and obtaining the [connection string](event-hubs-get-connection-string.md). Change the bootstrap.servers to point the Event Hubs FQDN and the port to 9093. Update the sasl.jaas.config to direct the Kafka client to your Event Hubs endpoint (which is the connection string you've obtained), with correct authentication as shown below:
-
-```properties
-bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093
-request.timeout.ms=60000
-security.protocol=SASL_SSL
-sasl.mechanism=PLAIN
-sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
-```
-
-Example:
-
-```properties
-bootstrap.servers=dummynamespace.servicebus.windows.net:9093
-request.timeout.ms=60000
-security.protocol=SASL_SSL
-sasl.mechanism=PLAIN
-sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://dummynamespace.servicebus.windows.net/;SharedAccessKeyName=DummyAccessKeyName;SharedAccessKey=XXXXXXXXXXXXXXXXXXXXX";
-```
-Note: If sasl.jaas.config isn't a supported configuration in your framework, find the configurations that are used to set the SASL username and password and use them instead. Set the username to $ConnectionString and the password to your Event Hubs connection string.
-
-### What is the message/event size for Event Hubs?
-The maximum message size allowed for Event Hubs is 1 MB.
-
-## Throughput units
-
-### What are Event Hubs throughput units?
-Throughput in Event Hubs defines the amount of data in mega bytes or the number (in thousands) of 1-KB events that ingress and egress through Event Hubs. This throughput is measured in throughput units (TUs). Purchase TUs before you can start using the Event Hubs service. You can explicitly select Event Hubs TUs either by using portal or Event Hubs Resource Manager templates.
--
-### Do throughput units apply to all event hubs in a namespace?
-Yes, throughput units (TUs) apply to all event hubs in an Event Hubs namespace. It means that you purchase TUs at the namespace level and are shared among the event hubs under that namespace. Each TU entitles the namespace to the following capabilities:
--- Up to 1 MB per second of ingress events (events sent into an event hub), but no more than 1000 ingress events, management operations, or control API calls per second.-- Up to 2 MB per second of egress events (events consumed from an event hub), but no more than 4096 egress events.-- Up to 84 GB of event storage (enough for the default 24-hour retention period).-
-### How are throughput units billed?
-Throughput units (TUs) are billed on an hourly basis. The billing is based on the maximum number of units that was selected during the given hour.
-
-### How can I optimize the usage on my throughput units?
-You can start as low as one throughput unit (TU), and turn on [auto-inflate](event-hubs-auto-inflate.md). The auto-inflate feature lets you grow your TUs as your traffic/payload increases. You can also set an upper limit on the number of TUs.
-
-### How does Auto-inflate feature of Event Hubs work?
-The auto-inflate feature lets you scale up your throughput units (TUs). It means that you can start by purchasing low TUs and auto-inflate scales up your TUs as your ingress increases. It gives you a cost-effective option and complete control of the number of TUs to manage. This feature is a **scale-up only** feature, and you can completely control the scaling down of the number of TUs by updating it.
-
-You may want to start with low throughput units (TUs), for example, 2 TUs. If you predict that your traffic may grow to 15 TUs, turn-on the auto-inflate feature on your namespace, and set the max limit to 15 TUs. You can now grow your TUs automatically as your traffic grows.
-
-### Is there a cost associated when I turn on the auto-inflate feature?
-There's **no cost** associated with this feature.
-
-### How are throughput limits enforced?
-If the total **ingress** throughput or the total ingress event rate across all event hubs in a namespace exceeds the aggregate throughput unit allowances, senders are throttled and receive errors indicating that the ingress quota has been exceeded.
-
-If the total **egress** throughput or the total event egress rate across all event hubs in a namespace exceeds the aggregate throughput unit allowances, receivers are throttled but no throttling errors are generated.
-
-Ingress and egress quotas are enforced separately, so that no sender can cause event consumption to slow down, nor can a receiver prevent events from being sent into an event hub.
-
-### Is there a limit on the number of throughput units that can be reserved/selected?
-
-When creating a basic or a standard tier namespace in the Azure portal, you can select up to 20 TUs for the namespace. To raise it to **exactly** 40 TUs, submit a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-1. On the **Event Bus Namespace** page, select **New support request** on the left menu.
-1. On the **New support request** page, follow these steps:
- 1. For **Summary**, describe the issue in a few words.
- 1. For **Problem type**, select **Quota**.
- 1. For **Problem subtype**, select **Request for throughput unit increase or decrease**.
-
- :::image type="content" source="./media/event-hubs-faq/support-request-throughput-units.png" alt-text="Support request page":::
-
-Beyond 40 TUs, Event Hubs offers the resource/capacity-based model called the Event Hubs Dedicated clusters. Dedicated clusters are sold in Capacity Units (CUs). For more information, see [Event Hubs Dedicated - overview](event-hubs-dedicated-overview.md).
-
-## Dedicated clusters
-
-### What are Event Hubs Dedicated clusters?
-Event Hubs Dedicated clusters offer single-tenant deployments for customers with most demanding requirements. This offering builds a capacity-based cluster that is not bound by throughput units. It means that you could use the cluster to ingest and stream your data as dictated by the CPU and memory usage of the cluster. For more information, see [Event Hubs Dedicated clusters](event-hubs-dedicated-overview.md).
-
-### How do I create an Event Hubs Dedicated cluster?
-For step-by-step instructions and more information on setting up an Event Hubs dedicated cluster, see the [Quickstart: Create a dedicated Event Hubs cluster using Azure portal](event-hubs-dedicated-cluster-create-portal.md).
----
-## Partitions
-
-### How many partitions do I need?
-The number of partitions is specified at creation and must be between 1 and 32. The partition count isn't changeable in all tiers except the [dedicated tier](event-hubs-dedicated-overview.md), so you should consider long-term scale when setting partition count. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information on partitions, see [Partitions](event-hubs-features.md#partitions).
-
-You may want to set it to be the highest possible value, which is 32, at the time of creation. Remember that having more than one partition will result in events sent to multiple partitions without retaining the order, unless you configure senders to only send to a single partition out of the 32 leaving the remaining 31 partitions redundant. In the former case, you'll have to read events across all 32 partitions. In the latter case, there's no obvious additional cost apart from the extra configuration you have to make on Event Processor Host.
-
-Event Hubs is designed to allow a single partition reader per consumer group. In most use cases, the default setting of four partitions is sufficient. If you're looking to scale your event processing, you may want to consider adding additional partitions. There's no specific throughput limit on a partition, however the aggregate throughput in your namespace is limited by the number of throughput units. As you increase the number of throughput units in your namespace, you may want additional partitions to allow concurrent readers to achieve their own maximum throughput.
-
-However, if you have a model in which your application has an affinity to a particular partition, increasing the number of partitions may not be of any benefit to you. For more information, see [availability and consistency](event-hubs-availability-and-consistency.md).
-
-### Increase partitions
-You can request for the partition count to be increased to 40 (exact) by submitting a support request.
-
-1. On the **Event Bus Namespace** page, select **New support request** on the left menu.
-1. On the **New support request** page, follow these steps:
- 1. For **Summary**, describe the issue in a few words.
- 1. For **Problem type**, select **Quota**.
- 1. For **Problem subtype**, select **Request for partition change**.
-
- :::image type="content" source="./media/event-hubs-faq/support-request-increase-partitions.png" alt-text="Increase partition count":::
-
-The partition count can be increased to exactly 40. In this case, number of TUs also have to be increased to 40. If you decide later to lower the TU limit back to <= 20, the maximum partition limit is also decreased to 32.
-
-The decrease in partitions doesn't affect existing event hubs because partitions are applied at the event hub level and they're immutable after creation of the hub.
-
-## Pricing
-
-### Where can I find more pricing information?
-
-For complete information about Event Hubs pricing, see the [Event Hubs pricing details](https://azure.microsoft.com/pricing/details/event-hubs/).
-
-### Is there a charge for retaining Event Hubs events for more than 24 hours?
-
-The Event Hubs Standard tier does allow message retention periods longer than 24 hours, for a maximum of seven days. If the size of the total number of stored events exceeds the storage allowance for the number of selected throughput units (84 GB per throughput unit), the size that exceeds the allowance is charged at the published Azure Blob storage rate. The storage allowance in each throughput unit covers all storage costs for retention periods of 24 hours (the default) even if the throughput unit is used up to the maximum ingress allowance.
-
-### How is the Event Hubs storage size calculated and charged?
-
-The total size of all stored events, including any internal overhead for event headers or on disk storage structures in all event hubs, is measured throughout the day. At the end of the day, the peak storage size is calculated. The daily storage allowance is calculated based on the minimum number of throughput units that were selected during the day (each throughput unit provides an allowance of 84 GB). If the total size exceeds the calculated daily storage allowance, the excess storage is billed using Azure Blob storage rates (at the **Locally Redundant Storage** rate).
-
-### How are Event Hubs ingress events calculated?
-
-Each event sent to an event hub counts as a billable message. An *ingress event* is defined as a unit of data that is less than or equal to 64 KB. Any event that is less than or equal to 64 KB in size is considered to be one billable event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in multiples of 64 KB. For example, an 8-KB event sent to the event hub is billed as one event, but a 96-KB message sent to the event hub is billed as two events.
-
-Events consumed from an event hub, and management operations and control calls such as checkpoints, aren't counted as billable ingress events, but accrue up to the throughput unit allowance.
-
-### Do brokered connection charges apply to Event Hubs?
-
-Connection charges apply only when the AMQP protocol is used. There are no connection charges for sending events using HTTP, regardless of the number of sending systems or devices. If you plan to use AMQP (for example, to achieve more efficient event streaming or to enable bi-directional communication in IoT command and control scenarios), see the [Event Hubs pricing information](https://azure.microsoft.com/pricing/details/event-hubs/) page for details about how many connections are included in each service tier.
-
-### How is Event Hubs Capture billed?
-
-Capture is enabled when any event hub in the namespace has the Capture option enabled. Event Hubs Capture is billed hourly per purchased throughput unit. As the throughput unit count is increased or decreased, Event Hubs Capture billing reflects these changes in whole hour increments. For more information about Event Hubs Capture billing, see [Event Hubs pricing information](https://azure.microsoft.com/pricing/details/event-hubs/).
-
-### Do I get billed for the storage account I select for Event Hubs Capture?
-
-Capture uses a storage account you provide when enabled on an event hub. As it is your storage account, any changes for this configuration are billed to your Azure subscription.
-
-## Quotas
-
-### Are there any quotas associated with Event Hubs?
-
-For a list of all Event Hubs quotas, see [quotas](event-hubs-quotas.md).
-
-## Troubleshooting
-
-### Why am I not able to create a namespace after deleting it from another subscription?
-When you delete a namespace from a subscription, wait for 4 hours before recreating it with the same name in another subscription. Otherwise, you may receive the following error message: `Namespace already exists`.
-
-### What are some of the exceptions generated by Event Hubs and their suggested actions?
-
-For a list of possible Event Hubs exceptions, see [Exceptions overview](event-hubs-messaging-exceptions.md).
-
-### Diagnostic logs
-
-Event Hubs supports two types of [diagnostics logs](event-hubs-diagnostic-logs.md) - Capture error logs and operational logs - both of which are represented in json and can be turned on through the Azure portal.
-
-### Support and SLA
-
-Technical support for Event Hubs is available through the [Microsoft Q&A question page for Azure Service Bus](/answers/topics/azure-service-bus.html). Billing and subscription management support is provided at no cost.
-
-To learn more about our SLA, see the [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/) page.
-
-## Azure Stack Hub
-
-### How can I target a specific version of Azure Storage SDK when using Azure Blob Storage as a checkpoint store?
-If you run this code on Azure Stack Hub, you'll experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than that are typically available on Azure. If you're using Azure Blog Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
-
-For example, If you're running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you'll also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see the following samples for C#, Java, Python, and JavaScript/TypeScript.
-
-For an example on how to target a specific Storage API version from your code, see the following samples on GitHub:
--- [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/)-- [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/EventProcessorWithCustomStorageVersion.java)-- Python - [Synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py), [Asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py)-- [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/eventhubs-checkpointstore-blob/samples/javascript/receiveEventsWithApiSpecificStorage.js) and [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/eventhubs-checkpointstore-blob/samples/typescript/src/receiveEventsWithApiSpecificStorage.ts)-
-## Next steps
-
-You can learn more about Event Hubs by visiting the following links:
-
-* [Event Hubs overview](./event-hubs-about.md)
-* [Create an Event Hub](event-hubs-create.md)
-* [Event Hubs Auto-inflate](event-hubs-auto-inflate.md)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
For more information about Event Hubs, visit the following links:
- [JavaScript](event-hubs-node-get-started-send.md) * [Event Hubs programming guide](event-hubs-programming-guide.md) * [Availability and consistency in Event Hubs](event-hubs-availability-and-consistency.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
* [Event Hubs samples](event-hubs-samples.md)
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
For more information about Event Hubs, visit the following links:
- [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md) - [JavaScript](event-hubs-node-get-started-send.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
* [Sample applications that use Event Hubs](https://github.com/Azure/azure-event-hubs/tree/master/samples) [1]: ./media/event-hubs-geo-dr/geo1.png
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-go-get-started-send.md
Read the following articles:
- [EventProcessorHost](event-hubs-event-processor-host.md) - [Features and terminology in Azure Event Hubs](event-hubs-features.md)-- [Event Hubs FAQ](event-hubs-faq.md)
+- [Event Hubs FAQ](event-hubs-faq.yml)
<!-- Links -->
event-hubs Event Hubs Java Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-java-get-started-send-legacy.md
Read the following articles:
- [EventProcessorHost](event-hubs-event-processor-host.md) - [Features and terminology in Azure Event Hubs](event-hubs-features.md)-- [Event Hubs FAQ](event-hubs-faq.md)
+- [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-messaging-exceptions.md
You can learn more about Event Hubs by visiting the following links:
* [Event Hubs overview](./event-hubs-about.md) * [Create an Event Hub](event-hubs-create.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Metrics Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-metrics-azure-monitor.md
For more information about Event Hubs, visit the following links:
- [Java](event-hubs-java-get-started-send.md) - [Python](event-hubs-python-get-started-send.md) - [JavaScript](event-hubs-node-get-started-send.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
* [Sample applications that use Event Hubs](https://github.com/Azure/azure-event-hubs/tree/master/samples) [1]: ./media/event-hubs-metrics-azure-monitor/event-hubs-monitor1.png
event-hubs Event Hubs Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-quotas.md
You can learn more about Event Hubs by visiting the following links:
* [Event Hubs overview](./event-hubs-about.md) * [Event Hubs Auto-inflate](event-hubs-auto-inflate.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Resource Manager Namespace Event Hub Enable Capture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
You can learn more about Event Hubs by visiting the following links:
* [Event Hubs overview](./event-hubs-about.md) * [Create an event hub](event-hubs-create.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
[Authoring Azure Resource Manager templates]: ../azure-resource-manager/templates/template-syntax.md [Azure Quickstart Templates]: https://azure.microsoft.com/documentation/templates/?term=event+hubs
event-hubs Event Hubs Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-samples.md
You can learn more about Event Hubs in the following articles:
- [Event Hubs overview](./event-hubs-about.md) - [Event Hubs features](event-hubs-features.md)-- [Event Hubs FAQ](event-hubs-faq.md)
+- [Event Hubs FAQ](event-hubs-faq.yml)
event-hubs Event Hubs Storm Getstarted Receive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-storm-getstarted-receive.md
You can learn more about Event Hubs by visiting the following links:
* [Event Hubs overview][Event Hubs overview] * [Create an event hub](event-hubs-create.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
<!-- Links --> [Event Hubs overview]: ./event-hubs-about.md
event-hubs Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/sdks.md
You can learn more about Event Hubs by visiting the following links:
* [Event Hubs overview](./event-hubs-about.md) * [Create an Event Hub](event-hubs-create.md)
-* [Event Hubs FAQ](event-hubs-faq.md)
+* [Event Hubs FAQ](event-hubs-faq.yml)
firewall-manager Private Link Inspection Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/private-link-inspection-secure-virtual-hub.md
# Secure traffic destined to private endpoints in Azure Virtual WAN
+> [!NOTE]
+> This article applies to secured virtual hub only. If you want to inspect traffic destined to private endpoints using Azure Firewall in a hub virtual network, see [Use Azure Firewall to inspect traffic destined to a private endpoint](../private-link/inspect-traffic-with-azure-firewall.md).
+ [Azure Private Endpoint](../private-link/private-endpoint-overview.md) is the fundamental building block for [Azure Private Link](../private-link/private-link-overview.md). Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources. Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extends the connectivity.
frontdoor Front Door Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-whats-new.md
- Title: Azure Front Door recent changes
-description: This article provides an ongoing list of recent changes that are made to Azure Front Door.
----- Previously updated : 4/30/2020-
-# Customer intent: As an IT admin, I want to learn about Front Door and what new features are available.
--
-# What's new in Azure Front Door?
-
-Azure Front Door is updated on an ongoing basis. To stay updated with the most recent developments, this article provides you with information about:
--- The latest releases-- Known issues-- Bug fixes-- Deprecated functionality-
-## New features
-
-|Feature |Description |Date added |
-||||
-| Rules Engine GA | Customize how http requests are handled at the edge. For more information, see the [Rules Engine overview](front-door-rules-engine.md). |June 2020 |
-| Rules Engine (Preview) | Customize how http requests are handled at the edge. For more information, see the [Rules Engine overview](front-door-rules-engine.md). |April 2020 |
--
-## Next steps
-
-For more information about Azure Front Door, see [What is Azure Front Door?](front-door-overview.md)
governance Create Management Group Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-rest-api.md
directory. You receive a notification when the process is complete. For more inf
### Create in REST API For REST API, use the
-[Management Groups - Create or Update](/rest/api/resources/managementgroups/createorupdate) endpoint
+[Management Groups - Create or Update](/rest/api/managementgroups/managementgroups/createorupdate) endpoint
to create a new management group. In this example, the management group **groupId** is _Contoso_. - REST API URI
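As a rough sketch of that call, create-or-update is a `PUT` against the management groups ARM endpoint. The API version and minimal body below are illustrative assumptions, so verify them against the linked reference; `az rest` is used here only because it handles authentication for you:

```bash
# Sketch: create (or update) the management group named Contoso.
# The api-version and body shown are assumptions; use the values from the REST reference.
az rest --method put \
  --uri "https://management.azure.com/providers/Microsoft.Management/managementGroups/Contoso?api-version=2020-05-01" \
  --body '{"properties": {"displayName": "Contoso"}}'
```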
specify a different management group as the parent, use the **properties.parent.
## Clean up resources To remove the management group created above, use the
-[Management Groups - Delete](/rest/api/resources/managementgroups/delete) endpoint:
+[Management Groups - Delete](/rest/api/managementgroups/managementgroups/delete) endpoint:
- REST API URI
management group can hold subscriptions or other management groups.
To learn more about management groups and how to manage your resource hierarchy, continue to: > [!div class="nextstepaction"]
-> [Manage your resources with management groups](./manage.md)
+> [Manage your resources with management groups](./manage.md)
governance Protect Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/how-to/protect-resource-hierarchy.md
To configure this setting in Azure portal, follow these steps:
### Set default management group with REST API To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/resources/hierarchysettings) endpoint is called. To do so, use the
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use the
following REST API URI and body format. Replace `{rootMgID}` with the ID of your root management group and `{defaultGroupID}` with the ID of the management group to become the default management group:
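As a hedged sketch, the call is a `PUT` to the hierarchy settings endpoint of the root management group. The API version and body shape below are assumptions intended to show the pattern; confirm them against the linked reference:

```bash
# Sketch: set the default management group for new subscriptions.
# Replace {rootMgID} and {defaultGroupID}; the api-version shown is an assumption.
az rest --method put \
  --uri "https://management.azure.com/providers/Microsoft.Management/managementGroups/{rootMgID}/settings/default?api-version=2020-05-01" \
  --body '{"properties": {"defaultManagementGroup": "/providers/Microsoft.Management/managementGroups/{defaultGroupID}"}}'
```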
To configure this setting in Azure portal, follow these steps:
### Set require authorization with REST API To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/resources/hierarchysettings) endpoint is called. To do so, use the
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use the
following REST API URI and body format. This value is a _boolean_, so provide either **true** or **false** for the value. A value of **true** enables this method of protecting your management group hierarchy:
To turn the setting back off, use the same endpoint and set
To learn more about management groups, see: - [Create management groups to organize Azure resources](../create-management-group-portal.md)-- [How to change, delete, or manage your management groups](../manage.md)
+- [How to change, delete, or manage your management groups](../manage.md)
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/manage.md
To learn more about management groups, see:
- [Create management groups to organize Azure resources](./create-management-group-portal.md) - [How to change, delete, or manage your management groups](./manage.md) - [Review management groups in Azure PowerShell Resources Module](/powershell/module/az.resources#resources)-- [Review management groups in REST API](/rest/api/resources/managementgroups)-- [Review management groups in Azure CLI](/cli/azure/account/management-group)
+- [Review management groups in REST API](/rest/api/managementgroups/managementgroups)
+- [Review management groups in Azure CLI](/cli/azure/account/management-group)
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
Title: Guidance for throttled requests description: Learn to group, stagger, paginate, and query in parallel to avoid requests being throttled by Azure Resource Graph. Previously updated : 01/27/2021 Last updated : 04/09/2021
looking for. However, some Azure Resource Graph clients handle pagination differ
## Still get throttled?
-If you're getting throttled after exercising the above recommendations, contact the team at
-[resourcegraphsupport@microsoft.com](mailto:resourcegraphsupport@microsoft.com).
+If you're getting throttled after exercising the above recommendations, contact the [Azure Resource
+Graph team](mailto:resourcegraphsupport@microsoft.com).
Provide these details:
hdinsight Hdinsight Apps Publish Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-publish-applications.md
Create a .zip file that contains all the files that are required to install your
## Publish the application To publish an HDInsight application:
-1. Sign in to [Azure Publishing](https://publish.windowsazure.com/).
+1. Sign in to Azure Publishing.
+ 2. In the left menu, select **Solution templates**. 3. Enter a title, and then select **Create a new solution template**. 4. If you haven't already registered your organization, select **Create Dev Center account and join the Azure program**. For more information, see [Create a Microsoft Developer account](../marketplace/overview.md).
To publish an HDInsight application:
* Learn how to [install custom HDInsight applications](hdinsight-apps-install-custom-applications.md) and deploy an unpublished HDInsight application to HDInsight. * Learn how to [use Script Action to customize Linux-based HDInsight clusters](hdinsight-hadoop-customize-cluster-linux.md) and add more applications. * Learn how to [create Linux-based Apache Hadoop clusters in HDInsight by using Azure Resource Manager templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md).
-* Learn how to [use an empty edge node in HDInsight](hdinsight-apps-use-edge-node.md) to access HDInsight clusters, test HDInsight applications, and host HDInsight applications.
+* Learn how to [use an empty edge node in HDInsight](hdinsight-apps-use-edge-node.md) to access HDInsight clusters, test HDInsight applications, and host HDInsight applications.
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-config-for-vscode.md
+
+ Title: Azure HDInsight configuration settings reference
+description: Introduces the configuration of the Azure HDInsight extension.
++ Last updated : 04/07/2021+++
+# Azure HDInsight configuration settings reference
+
+The Spark & Hive Tools extension for Visual Studio Code is highly configurable. This page describes the key settings you can work with.
+
+For general information about working with settings in VS Code, refer to [User and workspace settings](https://code.visualstudio.com/docs/getstarted/settings), and the [Variables reference](https://code.visualstudio.com/docs/editor/variables-reference) for information about predefined variable support.
+
+## Open the Azure HDInsight configuration
+
+1. Open a folder first to create workspace settings.
+2. Press **Ctrl + Shift + P**, or navigate to **View** -> **Command Palette...** to show all commands.
+3. Search **Set Configuration**.
+4. Expand **Extensions** in the left directory, and select **HDInsight configuration**.
+
+ ![hdi config image](./media/HDInsight-config-for-vscode/HDInsight-config-for-vscode.png)
+
+## General settings
+
+| Property | Default | Description |
+| -- | -- |-- |
+| HDInsight: Azure Environment | Azure | Azure environment |
+| HDInsight: Disable Open Survey Link | Checked | Enable/Disable opening HDInsight survey |
+| HDInsight: Enable Skip Pyspark Installation | Unchecked | Enable/Disable skipping pyspark installation |
+| HDInsight: Login Tips Enable | Unchecked | When this option is checked, there will be a prompt when logging in to Azure |
+| HDInsight: Previous Extension Version | Display the version number of the current extension | Show the previous extension version|
+| HDInsight: Results Font Family | -apple-system,BlinkMacSystemFont,Segoe WPC,Segoe UI,HelveticaNeue-Light,Ubuntu,Droid Sans,sans-serif | Set the font family for the results grid; set to blank to use the editor font |
+| HDInsight: Results Font Size | 13 | Set the font size for the results grid; set to blank to use the editor size |
+| HDInsight Cluster: Linked Cluster | -- | Linked cluster URLs. You can also edit the JSON file to set them |
+| HDInsight Hive: Apply Localization | Unchecked | [Optional] Configuration options for localizing into VSCode's configured locale (must restart VSCode for settings to take effect)|
+| HDInsight Hive: Copy Include Headers | Unchecked | [Optional] Configuration option for copying results from the Results View |
+| HDInsight Hive: Copy Remove New Line | Checked | [Optional] Configuration options for copying multi-line results from the Results View |
+| HDInsight Hive › Format: Align Column Definitions In Columns | Unchecked | Should column definition be aligned |
+| HDInsight Hive › Format: Datatype Casing | none | Should data types be formatted as UPPERCASE, lowercase, or none (not formatted) |
+| HDInsight Hive › Format: Keyword Casing | none | Should keywords be formatted as UPPERCASE, lowercase, or none (not formatted) |
+| HDInsight Hive › Format: Place Commas Before Next Statement | Unchecked | Should commas be placed at the beginning of each statement in a list for example ', mycolumn2' instead of at the end 'mycolumn1,'|
+| HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Should references to objects in a select statement be split into separate lines? For example, for 'SELECT C1, C2 FROM T1' both C1 and C2 will be on separate lines
+| HDInsight Hive: Log Debug Info | Unchecked | [Optional] Log debug output to the VS Code console (Help -> Toggle Developer Tools)
+| HDInsight Hive: Messages Default Open | Checked | True for the messages pane to be open by default; false for closed|
+| HDInsight Hive: Results Font Family | -apple-system,BlinkMacSystemFont,Segoe WPC,Segoe UI,HelveticaNeue-Light,Ubuntu,Droid Sans,sans-serif | Set the font family for the results grid; set to blank to use the editor font |
+| HDInsight Hive: Results Font Size | 13 | Set the font size for the results grid; set to blank to use the editor size |
+| HDInsight Hive › Save As Csv: Include Headers | Checked | [Optional] When true, column headers are included when saving results as CSV |
+| HDInsight Hive: Shortcuts | -- | Shortcuts related to the results window |
+| HDInsight Hive: Show Batch Time | Unchecked | [Optional] Whether execution time is shown for individual batches |
+| HDInsight Hive: Split Pane Selection | next | [Optional] Configuration options for which column new result panes should open in |
+| HDInsight Job Submission: Cluster Conf | -- | Cluster Configuration |
+| HDInsight Job Submission: Livy Conf | -- | Livy Configuration. POST/batches |
+| HDInsight Jupyter: Append Results| Checked | Whether to append the results to results window, else clear and display. |
+| HDInsight Jupyter: Languages | -- | Default settings per language. |
+| HDInsight Jupyter › Log: Verbose | Unchecked | Whether to enable verbose logging |
+| HDInsight Jupyter › Notebook: Startup Args | Can add item | 'jupyter notebook' command-line arguments. Each argument is a separate item in the array. For a full list, type 'jupyter notebook --help' in a terminal window. |
+| HDInsight Jupyter › Notebook: Startup Folder | ${workspaceRoot} |-- |
+| HDInsight Jupyter: Python Extension Enabled | Checked | Use Python-Interactive-Window of ms-python extension when submitting pySpark Interactive jobs. Otherwise, use our own jupyter window |
+| HDInsight Spark.NET: 7z | C:\Program Files\7-Zip | <Path to 7z.exe> |
+| HDInsight Spark.NET: HADOOP_HOME | D:\winutils | <Path to bin\winutils.exe> windows OS only |
+| HDInsight Spark.NET: JAVA_HOME | C:\Program Files\Java\jdk1.8.0_201\ | Path to Java Home|
+| HDInsight Spark.NET: SCALA_HOME | C:\Program Files (x86)\scala\ | Path to Scala Home |
+| HDInsight Spark.NET: SPARK_HOME | D:\spark-2.3.3-bin-hadoop2.7\ | Path to Spark Home |
+| Hive: Persist Query Result Tabs | Unchecked | Hive PersistQueryResultTabs |
+| Hive: Split Pane Selection | next | [Optional] Configuration options for which column new result panes should open in |
+| Hive Interactive: Copy Executable Folder | Unchecked | Whether to copy the Hive interactive service runtime folder to the user's tmp folder |
+| Hql Interactive Server: Wrapper Port | 13424 | Hive interactive service port |
+| Hql Language Server: Language Wrapper Port | 12342 | Port that the Hive language service servers listen on. |
+| Hql Language Server: Max Number Of Problems | 100 | Controls the maximum number of problems produced by the server. |
+| Synapse Spark Compute: Synapse Spark Compute Azure Environment | blank | Synapse Spark Compute Azure environment |
+| Synapse Spark pool Job Submission: Livy Conf | -- | Livy Configuration. POST/batches
+| Synapse Spark pool Job Submission: Synapse Spark Pool Cluster Conf | -- | Synapse Spark Pool Configuration |
++
+## Next steps
+
+- For information about Azure HDInsight for VSCode, see [Spark & Hive for Visual Studio Code Tools](https://docs.microsoft.com/sql/big-data-cluster/spark-hive-tools-vscode).
+- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/register-confidential-azure-ad-client-app.md
Previously updated : 03/16/2021 Last updated : 04/08/2021
To register a new confidential client application, refer to the steps below.
1. Select **App registrations**.
- ![Azure portal. New App Registration.](media/how-to-aad/portal-aad-new-app-registration.png)
+ :::image type="content" source="media/how-to-aad/portal-aad-new-app-registration.png" alt-text="Azure portal. New App Registration.":::
1. Select **New registration**.
To register a new confidential client application, refer to the steps below.
1. (Optional) Provide a **Redirect URI**. These details can be changed later, but if you know the reply URL of your application, enter it now.
- ![New Confidential Client App Registration.](media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT.png)
+ :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT.png" alt-text="New Confidential Client App Registration.":::
1. Select **Register**.
Now that you've registered your application, you must select which API permissio
1. Select **API permissions**.
- ![Confidential client. API Permissions](media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-API-Permissions.png)
+ :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-API-Permissions.png" alt-text="Confidential client. API Permissions.":::
1. Select **Add a permission**.
Now that you've registered your application, you must select which API permissio
1. Select **Certificates & secrets**, and then select **New client secret**.
- ![Confidential client. Application Secret](media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-SECRET.png)
+ :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-SECRET.png" alt-text="Confidential client. Application Secret.":::
-1. Enter a **Description** for the client secret. Select the Expires (In 1 year, In 2 years, or Never), and then click **Add**.
+1. Enter a **Description** for the client secret. Select the **Expires** drop-down menu to choose an expiration time frame, and then click **Add**.
- ![Add a client secret](media/how-to-aad/add-a-client-secret.png)
+ :::image type="content" source="media/how-to-aad/add-a-client-secret.png" alt-text="Add a client secret.":::
1. After the client secret string is created, copy its **Value** and **ID**, and store them in a secure location of your choice.
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/cache-usage-models.md
description: Describes the different cache usage models and how to choose among
Previously updated : 03/15/2021 Last updated : 04/08/2021
Cache usage models let you customize how your Azure HPC Cache stores files to sp
File caching is how Azure HPC Cache expedites client requests. It uses these basic practices:
-* **Read caching** - Azure HPC Cache keeps a copy of files that clients request from the storage system. The next time a client requests the same file, HPC cache can provide the version in its cache instead of having to fetch it from the back-end storage system again.
+* **Read caching** - Azure HPC Cache keeps a copy of files that clients request from the storage system. The next time a client requests the same file, HPC Cache can provide the version in its cache instead of having to fetch it from the back-end storage system again.
* **Write caching** - Optionally, Azure HPC Cache can store a copy of any changed files sent from the client machines. If multiple clients make changes to the same file over a short period, the cache can collect all the changes in the cache instead of having to write each change individually to the back-end storage system.
The usage models built into Azure HPC Cache have different values for these sett
## Choose the right usage model for your workflow
-You must choose a usage model for each NFS-mounted storage target that you use. Azure Blob storage targets have a built-in usage model that can't be customized.
+You must choose a usage model for each NFS-protocol storage target that you use. Azure Blob storage targets have a built-in usage model that can't be customized.
-HPC cache usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
+HPC Cache usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
These are the usage model options:
This table summarizes the usage model differences:
If you have questions about the best usage model for your Azure HPC Cache workflow, talk to your Azure representative or open a support request for help.
+## Know when to remount clients for NLM
+
+In some situations, you might need to remount clients if you change a storage target's usage model. This is needed because of the way different usage models handle Network Lock Manager (NLM) requests.
+
+The HPC Cache sits between clients and the back-end storage system. Usually the cache passes NLM requests through to the back-end storage system, but in some situations, the cache itself acknowledges the NLM request and returns a value to the client. In Azure HPC Cache, this only happens when you use the usage model **Read heavy, infrequent writes** (or in a standard blob storage target, which doesn't have configurable usage models).
+
+There is a small risk of file conflict if you change between the **Read heavy, infrequent writes** usage model and a different usage model. There's no way to transfer the current NLM state from the cache to the storage system or vice versa, so the client's lock status becomes inaccurate.
+
+Remount the clients to make sure that they have an accurate NLM state with the new lock manager.
+
+If your clients send an NLM request when the usage model or back-end storage does not support it, they will receive an error.
+
+### Disable NLM at client mount time
+
+It is not always easy to know whether or not your client systems will send NLM requests.
+
+You can disable NLM when clients mount the cluster by using the option ``-o nolock`` in the ``mount`` command.
+
+The exact behavior of the ``nolock`` option depends on the client operating system, so check the mount documentation (man 5 nfs) for your client OS. In most cases, it moves the lock locally to the client. Use caution if your application locks files across multiple clients.
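As a rough example, a client mount with NLM disabled might look like the sketch below. The cache address, namespace path, and local mount point are placeholders, and the other options shown are commonly recommended mount settings rather than requirements:

```bash
# Sketch: mount an HPC Cache namespace path with NLM disabled (placeholder address and paths).
sudo mkdir -p /mnt/hpccache
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30,nolock 10.0.0.10:/my-namespace-path /mnt/hpccache
```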
+
+> [!NOTE]
+> ADLS-NFS does not support NLM. You should disable NLM with the mount option above when using an ADLS-NFS storage target.
+ ## Next steps * [Add storage targets](hpc-cache-add-storage.md) to your Azure HPC Cache
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
description: Explains how to configure additional settings for the cache like MT
Previously updated : 03/17/2021 Last updated : 04/08/2021
To see the settings, open the cache's **Networking** page in the Azure portal.
![screenshot of networking page in Azure portal](media/networking-page.png) > [!NOTE]
-> A previous version of this page included a cache-level root squash setting, but this setting has been moved to [client access policies](access-policies.md).
+> A previous version of this page included a cache-level root squash setting, but this setting has moved to [client access policies](access-policies.md).
<!-- >> [!TIP] > The [Managing Azure HPC Cache video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) shows the networking page and its settings. -->
Learn more about MTU settings in Azure virtual networks by reading [TCP/IP perfo
## Customize NTP
-Your cache uses the Azure-based time server time.microsoft.com by default. If you want your cache to use a different NTP server, specify it in the **NTP configuration** section. Use a fully qualified domain name or an IP address.
+Your cache uses the Azure-based time server time.windows.com by default. If you want your cache to use a different NTP server, specify it in the **NTP configuration** section. Use a fully qualified domain name or an IP address.
## Set a custom DNS configuration > [!CAUTION]
-> Do not change your cache DNS configuration if you don't need to. Configuration mistakes can have dire consequences. If your configuration can't resolve Azure service names, the HPC cache instance will become permanently unreachable.
+> Do not change your cache DNS configuration if you don't need to. Configuration mistakes can have dire consequences. If your configuration can't resolve Azure service names, the HPC Cache instance will become permanently unreachable.
+>
+> Check with your Azure representatives before attempting to set up a custom DNS configuration.
Azure HPC Cache is automatically configured to use the secure and convenient Azure DNS system. However, a few unusual configurations require the cache to use a separate, on-premises DNS system instead of the Azure system. The **DNS configuration** section of the **Networking** page is used to specify this kind of system. Check with your Azure representatives or consult Microsoft Service and Support to determine whether or not you need to use a custom cache DNS configuration.
-If you configure your own on-premises DNS system for the Azure HPC Cache to use, you must make sure that the configuration can resolve Azure endpoint names for Azure services. You must configure your custom DNS environment to forward certain name resolution requests to Azure DNS or to another server as needed.
+If you configure your own on-premises DNS system for the Azure HPC Cache to use, you must make sure that your local DNS server is able to directly resolve Azure service endpoint names. HPC Cache will not function if your DNS server is restricted from public name resolution.
Check that your DNS configuration can successfully resolve these items before using it for an Azure HPC Cache: * ``*.core.windows.net`` * Certificate revocation list (CRL) download and online certificate status protocol (OCSP) verification services. A partial list is provided in the [firewall rules item](../security/fundamentals/tls-certificate-changes.md#will-this-change-affect-me) at the end of this [Azure TLS article](../security/fundamentals/tls-certificate-changes.md), but you should consult a Microsoft technical representative to understand all of the requirements.
-* The fully qualified domain name of your NTP server (time.microsoft.com or a custom server)
+* The fully qualified domain name of your NTP server (time.windows.com or a custom server)
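As a quick, hedged check, you can query the candidate DNS server directly before pointing the cache at it. The server IP and storage account name below are placeholders:

```bash
# Sketch: verify that the custom DNS server resolves the required names.
# 203.0.113.53 and mystorageacct are placeholders for your DNS server and storage account.
dig @203.0.113.53 +short mystorageacct.blob.core.windows.net
dig @203.0.113.53 +short time.windows.com
```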
If you need to set a custom DNS server for your cache, use the provided fields:
hpc-cache Directory Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/directory-services.md
Under **Active directory details**, supply these values:
* **AD DNS domain name** - Provide the fully qualified domain name of the AD server that the cache will join to get the credentials.
-* **Cache server name (computer account)** - Set the name that will be assigned to this HPC cache when it joins the AD domain. Specify a name that is easy to recognize as this cache. The name can be up to 15 characters long and can include capital or lowercase letters, numbers, and hyphens (-).
+* **Cache server name (computer account)** - Set the name that will be assigned to this HPC Cache when it joins the AD domain. Specify a name that is easy to recognize as this cache. The name can be up to 15 characters long and can include capital or lowercase letters, numbers, and hyphens (-).
In the **Credentials** section, provide an AD administrator username and password that the Azure HPC Cache can use to access the AD server. This information is encrypted when stored, and can't be queried.
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
ADLS-NFS storage targets have some similarities with Blob storage targets and so
* Like a Blob storage target, you need to give Azure HPC Cache permission to [access your storage account](#add-the-access-control-roles-to-your-account). * Like an NFS storage target, you need to set a cache [usage model](#choose-a-usage-model).
-* Because NFS-enabled blob containers have an NFS-compatible hierarchical structure, you do not need to use the cache to ingest data, and the containers are readable by other NFS systems. You can pre-load data in an ADLS-NFS container, then add it to an HPC cache as a storage target, and then access the data later from outside of an HPC cache. When you use a standard blob container as an HPC cache storage target, the data is written in a proprietary format and can only be accessed from other Azure HPC Cache-compatible products.
+* Because NFS-enabled blob containers have an NFS-compatible hierarchical structure, you do not need to use the cache to ingest data, and the containers are readable by other NFS systems. You can pre-load data in an ADLS-NFS container, then add it to an HPC Cache as a storage target, and then access the data later from outside of an HPC Cache. When you use a standard blob container as an HPC Cache storage target, the data is written in a proprietary format and can only be accessed from other Azure HPC Cache-compatible products.
Before you can create an ADLS-NFS storage target, you must create an NFS-enabled storage account. Follow the tips in [Prerequisites for Azure HPC Cache](hpc-cache-prerequisites.md#nfs-mounted-blob-adls-nfs-storage-requirements-preview) and the instructions in [Mount Blob storage by using NFS](../storage/blobs/network-file-system-protocol-support-how-to.md). After your storage account is set up you can create a new container when you create the storage target.
+Read [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md) to learn more about this configuration.
+ To create an ADLS-NFS storage target, open the **Add storage target** page in the Azure portal. (Additional methods are in development.) ![Screenshot of add storage target page with ADLS-NFS target defined](media/add-adls-target.png)
Enter this information.
When finished, click **OK** to add the storage target.
-<!-- **** -->
- ## View storage targets You can use the Azure portal or the Azure CLI to show the storage targets already defined for your cache.
hpc-cache Hpc Cache Blob Firewall Fix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-blob-firewall-fix.md
A particular setting used in storage account firewalls can cause your Blob storage target creation to fail. The Azure HPC Cache team is working on a software fix for this problem, but you can work around it by following the instructions in this article.
-The firewall setting that allows access only from "selected networks" can prevent the cache from creating or modifying a Blob storage target. This configuration is in the storage account's **Firewalls and virtual networks** settings page.
+The firewall setting that allows access only from "selected networks" can prevent the cache from creating or modifying a Blob storage target. This configuration is in the storage account's **Firewalls and virtual networks** settings page. (This issue does not apply to ADLS-NFS storage targets.)
The issue is that the cache service uses a hidden service virtual network that is separate from customer environments. It isn't possible to explicitly authorize this network to access your storage account.
hpc-cache Hpc Cache Edit Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-edit-storage.md
description: How to edit Azure HPC Cache storage targets
Previously updated : 03/10/2021 Last updated : 03/29/2021
az hpc-cache nfs-storage-target update --cache-name mycache \
### Change the usage model
-The usage model influences how the cache retains data. Read [Choose a usage model](hpc-cache-add-storage.md#choose-a-usage-model) to learn more.
+The usage model influences how the cache retains data. Read [Understand cache usage models](cache-usage-models.md) to learn more.
+
+> [!NOTE]
+> If you change usage models, you might need to remount clients to avoid NLM errors. Read [Know when to remount clients](cache-usage-models.md#know-when-to-remount-clients-for-nlm) for details.
To change the usage model for an NFS storage target, use one of these methods.
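As a sketch only, an Azure CLI update might look like the following. The flag names and usage-model value are assumptions used to illustrate the shape of the command; confirm them with `az hpc-cache nfs-storage-target update --help`:

```bash
# Sketch: switch an NFS storage target to a different usage model (names are placeholders).
az hpc-cache nfs-storage-target update \
  --resource-group myrg \
  --cache-name mycache \
  --name st1 \
  --nfs3-usage-model WRITE_AROUND
```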
hpc-cache Hpc Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-overview.md
Last updated 03/11/2021 - # What is Azure HPC Cache?
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-prerequisites.md
This is a general overview of the steps. These steps might change, so always ref
If you are not the storage account owner, have the owner do this step.
+Learn more about using ADLS-NFS storage targets with Azure HPC Cache in [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md).
+ ## Set up Azure CLI access (optional) If you want to create or manage Azure HPC Cache from the Azure command-line interface (Azure CLI), you need to install the CLI software and the hpc-cache extension. Follow the instructions in [Set up Azure CLI for Azure HPC Cache](az-cli-prerequisites.md).
hpc-cache Nfs Blob Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/nfs-blob-considerations.md
+
+ Title: Use NFS Blob storage with Azure HPC Cache
+description: Describes procedures and limitations when using ADLS-NFS blob storage with Azure HPC Cache
+++ Last updated : 03/29/2021+++
+# Use NFS-mounted blob storage (PREVIEW) with Azure HPC Cache
+
+Starting in March 2021 you can use NFS-mounted blob containers with Azure HPC Cache. Read more about the [NFS 3.0 protocol support in Azure Blob storage](../storage/blobs/network-file-system-protocol-support.md) on the Blob storage documentation site.
+
+> [!NOTE]
+> NFS 3.0 protocol support in Azure Blob storage is in preview and should not be used in production environments. Check for updates and more details in the [NFS protocol support documentation](../storage/blobs/network-file-system-protocol-support.md).
+
+Azure HPC Cache uses NFS-enabled blob storage in its ADLS-NFS storage target type. These storage targets are similar to regular NFS storage targets, but also have some overlap with regular Azure Blob targets.
+
+This article explains strategies and limitations that you should understand when you use ADLS-NFS storage targets.
+
+You should also read the NFS blob documentation, especially these sections that describe compatible and incompatible scenarios:
+
+* [Feature overview](../storage/blobs/network-file-system-protocol-support.md#applications-and-workloads-suited-for-this-feature)
+* [Features not yet supported](../storage/blobs/network-file-system-protocol-support.md#azure-storage-features-not-yet-supported)
+* [Performance considerations](../storage/blobs/network-file-system-protocol-support-performance.md)
+
+## Understand consistency requirements
+
+HPC Cache requires strong consistency for ADLS-NFS storage targets. By default, NFS-enabled blob storage does not strictly update file metadata, which prevents HPC Cache from accurately comparing file versions.
+
+To work around this difference, Azure HPC Cache automatically disables NFS attribute caching on any NFS-enabled blob container used as a storage target.
+
+This setting persists for the lifetime of the container, even if you remove it from the cache.
+
+## Preload data with NFS protocol
+
+On an NFS-enabled blob container, *a file can only be edited by the same protocol used when it was created*. That is, if you use the Azure REST API to populate a container, you cannot use NFS to update those files. Because Azure HPC Cache only uses NFS, it can't edit any files that were created with the Azure REST API.
+
+It's not a problem for the cache if your container is empty, or if the files were created by using NFS.
+
+If the files in your container were created with Azure Blob's REST API instead of NFS, Azure HPC Cache is restricted to these actions on the original files:
+
+* List the file in a directory.
+* Read the file (and hold it in the cache for subsequent reads).
+* Delete the file.
+* Empty the file (truncate it to 0).
+* Save a copy of the file. The copy is marked as an NFS-created file, and it can be edited using NFS.
+
+Azure HPC Cache **can't** edit the contents of a file that was created using REST. This means that it can't save a changed file from a client back to the storage target.
+
+It's important to understand this limitation, because it can cause data integrity problems if you use read/write caching usage models on files that were not created with NFS.
+
+> [!TIP]
+> Learn more about read and write caching in [Understand cache usage models](cache-usage-models.md).
+
+### Write caching scenarios
+
+These cache usage models include write caching:
+
+* **Greater than 15% writes**
+* **Greater than 15% writes, checking the backing server for changes every 30 seconds**
+* **Greater than 15% writes, checking the backing server for changes every 60 seconds**
+* **Greater than 15% writes, write back to the server every 30 seconds**
+
+Only use these usage models to edit files that were created with NFS.
+
+If you try to use write caching on REST-created files, your file changes could be lost. This is because the cache does not try to save file edits to the storage container immediately.
+
+Here is how trying to cache writes to REST-created files puts data at risk:
+
+1. The cache accepts edits from clients, and returns a success message on each change.
+1. The cache keeps the changed file in its storage and waits for additional changes.
+1. After some time has passed, the cache tries to save the changed file to the back-end container. At this point, it will get an error message because it is trying to write to a REST-created file with NFS.
+
+ It is too late to tell the client machine that its changes were not accepted, and the cache has no way to update the original file. So the changes from clients will be lost.
+
+### Read caching scenarios
+
+Read caching scenarios are appropriate for files created with either NFS or Azure Blob REST API.
+
+These usage models use only read caching:
+
+* **Read heavy, infrequent writes**
+* **Clients write to the NFS target, bypassing the cache**
+* **Read heavy, checking the backing server every 3 hours**
+
+You can use these usage models with files created by REST API or by NFS. Any NFS writes sent from a client to the back-end container will still fail, but they will fail immediately and return an error message to the client.
+
+A read caching workflow can still involve file changes, as long as these are not cached. For example, clients might access files from the container but write their changes back as a new file, or they could save modified files in a different location.
+
+## Recognize Network Lock Manager (NLM) limitations
+
+NFS-enabled blob containers do not support Network Lock Manager (NLM), which is a commonly used NFS protocol to protect files from conflicts.
+
+If your NFS workflow was originally written for hardware storage systems, your client applications might include NLM requests. To work around this limitation when moving your process to NFS-enabled blob storage, make sure that your clients disable NLM when they mount the cache.
+
+To disable NLM, use the option ``-o nolock`` in your clients' ``mount`` command. This option prevents the clients from requesting NLM locks and receiving errors in response.
+
+In a few situations, Azure HPC Cache itself acknowledges NLM requests from the client. The cache usage model named **Read heavy, infrequent writes** acknowledges NLM requests on behalf of its back-end storage. This system prevents errors on the client, but it does not actually create a lock on the ADLS-NFS container.
+
+If you switch an ADLS-NFS storage target's usage model to or from **Read heavy, infrequent writes**, you must remount all clients using the ``nolock`` option.
+
+More information about NLM, HPC Cache, and usage models is included in [Understand cache usage models](cache-usage-models.md#know-when-to-remount-clients-for-nlm).
+
+## Streamline writes to NFS-enabled containers with HPC Cache
+
+Azure HPC Cache can help improve performance in a workload that includes writing changes to an ADLS-NFS storage target.
+
+> [!NOTE]
+> You must use NFS to populate your ADLS-NFS storage container if you want to modify its files through Azure HPC Cache.
+
+One of the limitations outlined in the NFS-enabled blob [Performance considerations article](../storage/blobs/network-file-system-protocol-support-performance.md) is that ADLS-NFS storage is not very efficient at overwriting existing files. If you use Azure HPC Cache with NFS-mounted blob storage, the cache handles intermittent rewrites as clients modify an active file. The latency of writing a file to the back-end container is hidden from the clients.
+
+Keep in mind the limitations explained above in [Preload data with NFS protocol](#preload-data-with-nfs-protocol).
+
+## Next steps
+
+* Learn [ADLS-NFS storage target prerequisites](hpc-cache-prerequisites.md#nfs-mounted-blob-adls-nfs-storage-requirements-preview)
+* Create an [NFS-enabled blob storage account](../storage/blobs/network-file-system-protocol-support-how-to.md)
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/troubleshoot-nas.md
For example, a system might show three exports like these:
The export ``/ifs/accounting/payroll`` is a child of ``/ifs/accounting``, and ``/ifs/accounting`` is itself a child of ``/ifs``.
-If you add the ``payroll`` export as an HPC cache storage target, the cache actually mounts ``/ifs/`` and accesses the payroll directory from there. So Azure HPC Cache needs root access to ``/ifs`` in order to access the ``/ifs/accounting/payroll`` export.
+If you add the ``payroll`` export as an HPC Cache storage target, the cache actually mounts ``/ifs/`` and accesses the payroll directory from there. So Azure HPC Cache needs root access to ``/ifs`` in order to access the ``/ifs/accounting/payroll`` export.
This requirement is related to the way the cache indexes files and avoids file collisions, using file handles that the storage system provides.
hpc-cache Usage Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/usage-scenarios.md
+
+ Title: Azure HPC Cache scenarios
+description: Describes how to know whether your computing job works well with Azure HPC Cache
+++ Last updated : 03/29/2021+++
+# Is your job a good fit for Azure HPC Cache?
+
+Azure HPC Cache can speed access to data for high-performance computing jobs in a variety of disciplines. But it is not perfect for all types of workflows. This article gives guidelines for how to decide if HPC Cache is a good option for your needs.
+
+The [Overview](hpc-cache-overview.md) article also gives a short outline of when to use Azure HPC Cache and some examples of use cases.
+
+Also read [this article](nfs-blob-considerations.md) about how to make effective use of [NFS-mounted blob storage](../storage/blobs/network-file-system-protocol-support.md), which is in preview.
+
+## NFS version 3.0 applications
+
+Azure HPC Cache supports NFS 3.0 clients only.
+
+## High read-to-write ratio
+
+Workloads where the compute clients do more reading than they do writing are usually good candidates for a cache. For example, if your read-to-write ratio is 80/20 or 70/30, Azure HPC Cache can help by serving frequently requested files from the cache instead of having to fetch them from remote storage over and over.
+
+Fetching a file and storing it in the cache for the first time has a small additional latency over a normal client request directly to storage, so the efficiency boost comes the next time a client requests the same file. This is especially true for large files. If each client request is unique, HPC Cache's impact is limited. But the larger the file, the better the performance is over time after that first access.
+
+## File-based analytic workload
+
+Azure HPC Cache is ideal for a pipeline that uses file-based data and runs across a large number of compute clients, especially if the compute clients are Azure virtual machines. It can help fix slow or inconsistent performance that's caused by long file access times.
+
+## Remote data access
+
+Azure HPC Cache can help reduce latency if your workload needs to access remote data that can't be moved closer to the computing resources. For example, your records might be at the far end of a WAN environment, in a different Azure region, or in a customer data center. (This is sometimes called "file-bursting".)
+
+## Heavy request load
+
+If a large number of clients request data from the source at the same time, Azure HPC Cache can speed up file access. For example, when used with a high-performance computing cluster, Azure HPC Cache provides scalability for high numbers of concurrent requests through the cache.
+
+## Compute resources are located in Azure
+
+Azure virtual machines are a scalable and cost-effective answer to high-performance computing workloads. Azure HPC Cache can help by bringing the information they need closer to them, especially if the original data is stored on a remote system.
+
+If a customer wants to run their current pipeline "as is" in Azure virtual machines, Azure HPC Cache can provide a POSIX-based shared storage (or caching) solution for scalability.
+
+By using Azure HPC Cache, you don't have to re-architect the work pipeline to make native calls to Azure Blob storage. You can access your data on its original system, or use HPC Cache to move it to a new blob container.
+
+## Next steps
+
+* Learn more about how to plan and configure a cache in the [Overview](hpc-cache-overview.md) and [Prerequisites](hpc-cache-prerequisites.md) articles
+* Read considerations for using [NFS-enabled Blob storage](nfs-blob-considerations.md) (PREVIEW) with Azure HPC Cache
iot-edge How To Auto Provision Simulated Device Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
description: Use a simulated TPM on a Linux VM to test Azure Device Provisioning
Previously updated : 6/30/2020 Last updated : 04/09/2021 # Create and provision an IoT Edge device with a TPM on Linux This article shows how to test auto-provisioning on a Linux IoT Edge device using a Trusted Platform Module (TPM). You can automatically provision Azure IoT Edge devices with the [Device Provisioning Service](../iot-dps/index.yml). If you're unfamiliar with the process of auto-provisioning, review the [provisioning](../iot-dps/about-iot-dps.md#provisioning-process) overview before continuing.
-> [!NOTE]
-> Currently, automatic provisioning using TPM authentication is not supported in IoT Edge version 1.2.
- The tasks are as follows: 1. Create a Linux virtual machine (VM) in Hyper-V with a simulated Trusted Platform Module (TPM) for hardware security.
The tasks are as follows:
1. Install the IoT Edge runtime and connect the device to IoT Hub. > [!TIP]
-> This article describes how to test DPS provisioning using a TPM simulator, but much of it applies to physical TPM hardware such as the [Infineon OPTIGA&trade; TPM](https://devicecatalog.azure.com/devices/3f52cdee-bbc4-d74e-6c79-a2546f73df4e), an Azure Certified for IoT device.
+> This article describes how to test DPS provisioning using a TPM simulator, but much of it applies to physical TPM hardware such as the [Infineon OPTIGA&trade; TPM](https://catalog.azureiotsolutions.com/details?title=OPTIGA-TPM-SLB-9670-Iridium-Board), an Azure Certified for IoT device.
> > If you're using a physical device, you can skip ahead to the [Retrieve provisioning information from a physical device](#retrieve-provisioning-information-from-a-physical-device) section in this article.
Follow the steps in [Install the Azure IoT Edge runtime](how-to-install-iot-edge
Once the runtime is installed on your device, configure the device with the information it uses to connect to the Device Provisioning Service and IoT Hub.
+<!-- 1.1 -->
+ 1. Know your DPS **ID Scope** and device **Registration ID** that were gathered in the previous sections. 1. Open the configuration file on the IoT Edge device.
Once the runtime is installed on your device, configure the device with the info
# dynamic_reprovisioning: false ```
- Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
- 1. Update the values of `scope_id` and `registration_id` with your DPS and device information.
+1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+
+1. Save and close the file.
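For reference, a complete TPM provisioning section in the 1.1 `config.yaml` typically looks like the following sketch; the scope ID and registration ID values are placeholders for the values gathered earlier:

```yaml
# Sketch of the DPS TPM provisioning section in /etc/iotedge/config.yaml (IoT Edge 1.1).
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "<SCOPE_ID>"
  attestation:
    method: "tpm"
    registration_id: "<REGISTRATION_ID>"
```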
+
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+1. Know your DPS **ID Scope** and device **Registration ID** that were gathered in the previous sections.
+
+1. Open the configuration file on the IoT Edge device.
+
+ ```bash
+ sudo nano /etc/aziot/config.toml
+ ```
+
+1. Find the provisioning configurations section of the file. Uncomment the lines for TPM provisioning, and make sure any other provisioning lines are commented out.
+
+ ```toml
+ # DPS provisioning with TPM
+ [provisioning]
+ source = "dps"
+ global_endpoint = "https://global.azure-devices-provisioning.net"
+ id_scope = "<SCOPE_ID>"
+
+ [provisioning.attestation]
+ method = "tpm"
+ registration_id = "<REGISTRATION_ID>"
+ ```
+
+1. Update the values of `id_scope` and `registration_id` with your DPS and device information.
+
+1. Optionally, find the auto-reprovisioning mode section of the file. Use the `auto_reprovisioning_mode` parameter to configure your device's reprovisioning behavior to either `Dynamic`, `AlwaysOnStartup`, or `OnErrorOnly`. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+
+1. Save and close the file.
+<!-- end 1.2 -->
+ ## Give IoT Edge access to the TPM
+<!-- 1.1 -->
+ The IoT Edge runtime needs to access the TPM to automatically provision your device. You can give TPM access to the IoT Edge runtime by overriding the systemd settings so that the `iotedge` service has root privileges. If you don't want to elevate the service privileges, you can also use the following steps to manually provide TPM access.
You can give TPM access to the IoT Edge runtime by overriding the systemd settin
``` If you don't see that the correct permissions have been applied, try rebooting your machine to refresh udev.
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+The IoT Edge runtime relies on a TPM service that brokers access to a device's TPM. This service needs to access the TPM to automatically provision your device.
+
+You can give access to the TPM by overriding the systemd settings so that the `aziottpm` service has root privileges. If you don't want to elevate the service privileges, you can also use the following steps to manually provide TPM access.
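
If you choose the override approach instead of the manual steps below, a minimal sketch looks like the following. The systemd unit name `aziot-tpmd` is an assumption about how the TPM service is registered on your install; adjust it to match your system.

```bash
# Create a drop-in override that runs the TPM service as root (assumed unit name).
sudo systemctl edit aziot-tpmd

# In the editor that opens, add:
#   [Service]
#   User=root

# Reload unit files and restart the service so the override takes effect.
sudo systemctl daemon-reload
sudo systemctl restart aziot-tpmd
```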
+
+1. Find the file path to the TPM hardware module on your device and save it as a local variable.
-## Restart the IoT Edge runtime
+ ```bash
+ tpm=$(sudo find /sys -name dev -print | fgrep tpm | sed 's/.\{4\}$//')
+ ```
+
+2. Create a new rule that will give the `aziottpm` service access to `tpm0`.
+
+ ```bash
+ sudo touch /etc/udev/rules.d/tpmaccess.rules
+ ```
+
+3. Open the rules file.
+
+ ```bash
+ sudo nano /etc/udev/rules.d/tpmaccess.rules
+ ```
+
+4. Copy the following access information into the rules file.
+
+ ```input
+ # allow aziottpm access to tpm0
+ KERNEL=="tpm0", SUBSYSTEM=="tpm", OWNER="aziottpm", MODE="0600"
+ ```
+5. Save and exit the file.
+
+6. Trigger the udev system to evaluate the new rule.
+
+ ```bash
+ /bin/udevadm trigger $tpm
+ ```
+
+7. Verify that the rule was successfully applied.
+
+ ```bash
+ ls -l /dev/tpm0
+ ```
+
+ Successful output appears as follows:
+
+ ```output
+    crw-rw---- 1 root aziottpm 10, 224 Jul 20 16:27 /dev/tpm0
+ ```
+
+ If you don't see that the correct permissions have been applied, try rebooting your machine to refresh udev.
+<!-- end 1.2 -->
+
+## Restart IoT Edge and verify successful installation
+
+<!-- 1.1 -->
Restart the IoT Edge runtime so that it picks up all the configuration changes that you made on the device. ```bash
Check to see that the IoT Edge runtime is running.
sudo systemctl status iotedge ```
+Examine daemon logs.
+
+```cmd/sh
+journalctl -u iotedge --no-pager --no-full
+```
+ If you see provisioning errors, it may be that the configuration changes haven't taken effect yet. Try restarting the IoT Edge daemon again. ```bash
If you see provisioning errors, it may be that the configuration changes haven't
``` Or, try restarting your virtual machine to see if the changes take effect on a fresh start.
+<!-- end 1.1 -->
-## Verify successful installation
+<!-- 1.2 -->
+Apply the configuration changes that you made on the device.
-If the runtime started successfully, you can go into your IoT Hub and see that your new device was automatically provisioned. Now your device is ready to run IoT Edge modules.
+ ```bash
+ sudo iotedge config apply
+ ```
-Check the status of the IoT Edge Daemon.
+Check to see that the IoT Edge runtime is running.
-```cmd/sh
-systemctl status iotedge
-```
+ ```bash
+ sudo iotedge system status
+ ```
Examine daemon logs.
-```cmd/sh
-journalctl -u iotedge --no-pager --no-full
-```
+ ```cmd/sh
+ sudo iotedge system logs
+ ```
+
+If you see provisioning errors, it may be that the configuration changes haven't taken effect yet. Try restarting the IoT Edge daemon.
+
+ ```bash
+    sudo iotedge system restart
+ ```
+
+Or, try restarting your virtual machine to see if the changes take effect on a fresh start.
+<!-- end 1.2 -->
+
+If the runtime started successfully, you can go into your IoT Hub and see that your new device was automatically provisioned. Now your device is ready to run IoT Edge modules.
List running modules.
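
For example, to list the modules running on the device (the `iotedge list` command works with both the 1.1 and 1.2 command-line tools):

```bash
sudo iotedge list
```
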
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-configure-api-proxy-module.md
The API proxy module can enable many scenarios for gateway hierarchies, includin
## Deploy the proxy module
-The API proxy module is available from the Microsoft Container Registry (MCR): `mcr.microsoft.com/azureiotedge-api-proxy:latest`.
+The API proxy module is available from the Microsoft Container Registry (MCR): `mcr.microsoft.com/azureiotedge-api-proxy:1.0`.
You can also deploy the API proxy module directly from the Azure Marketplace: [IoT Edge API Proxy](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.azureiotedge-api-proxy?tab=Overview).
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
monikerRange: ">=iotedge-2020-11"
-# Connect a downstream IoT Edge device to an Azure IoT Edge gateway (Preview)
+# Connect a downstream IoT Edge device to an Azure IoT Edge gateway
[!INCLUDE [iot-edge-version-202011](../../includes/iot-edge-version-202011.md)] This article provides instructions for establishing a trusted connection between an IoT Edge gateway and a downstream IoT Edge device.
->[!NOTE]
->This feature requires IoT Edge version 1.2, which is in public preview, running Linux containers.
->
->This article reflects the latest preview release of IoT Edge version 1.2. Make sure that your device is running version [1.2.0-rc4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0-rc4) or newer. For steps to get the latest preview version on your device, see [Install Azure IoT Edge for Linux (version 1.2)](how-to-install-iot-edge.md) or [Update IoT Edge to version 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).
- In a gateway scenario, an IoT Edge device can be both a gateway and a downstream device. Multiple IoT Edge gateways can be layered to create a hierarchy of devices. The downstream (or child) devices can authenticate and send or receive messages through their gateway (or parent) device. There are two different configurations for IoT Edge devices in a gateway hierarchy, and this article addresses both. The first is the **top layer** IoT Edge device. When multiple IoT Edge devices are connecting through each other, any device that does not have a parent device but connects directly to IoT Hub is considered to be in the top layer. This device is responsible for handling requests from all the devices below it. The other configuration applies to any IoT Edge device in a **lower layer** of the hierarchy. These devices may be a gateway for other downstream IoT and IoT Edge devices, but also need to route any communications through their own parent devices.
Make sure that the user **iotedge** has read permissions for the directory holdi
1. Find the **Trust bundle cert** section. Uncomment and update the `trust_bundle_cert` parameter with the file URI to the root CA certificate on your device.
-1. While this feature is in public preview, you need to configure your IoT Edge device to use the public preview version of the IoT Edge agent when it starts up.
+1. Verify your IoT Edge device will use the correct version of the IoT Edge agent when it starts up.
- Find the **Default Edge Agent** section and update the image value to the public preview image:
+ Find the **Default Edge Agent** section and verify the image value is IoT Edge version 1.2. If not, update it:
```toml [agent.config]
- image: "mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4"
+ image = "mcr.microsoft.com/azureiotedge-agent:1.2"
``` 1. Find the **Edge CA certificate** section in the config file. Uncomment the lines in this section and provide the file URI paths for the certificate and key files on the IoT Edge device.
Make sure that the user **iotedge** has read permissions for the directory holdi
>[!TIP] >The IoT Edge check tool uses a container to perform some of the diagnostics check. If you want to use this tool on downstream IoT Edge devices, make sure they can access `mcr.microsoft.com/azureiotedge-diagnostics:latest`, or have the container image in your private container registry.
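
If you mirror the diagnostics image into a private registry as the tip suggests, the copy might look like the following sketch; the registry name is a placeholder, not taken from this article. You can then point `iotedge check` at the mirrored image with the `--diagnostics-image-name` option.

```bash
# Pull the diagnostics image, retag it for your registry, and push it.
docker pull mcr.microsoft.com/azureiotedge-diagnostics:latest
docker tag mcr.microsoft.com/azureiotedge-diagnostics:latest myregistry.azurecr.io/azureiotedge-diagnostics:latest
docker push myregistry.azurecr.io/azureiotedge-diagnostics:latest
```
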
-## Configure runtime modules for public preview
-
-While this feature is in public preview, you need to configure your IoT Edge device to use the public preview versions of the IoT Edge runtime modules. The previous section provides steps for configuring edgeAgent at startup. You also need to configure the runtime modules in deployments for your device.
-
-1. Configure the edgeHub module to use the public preview image: `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4`.
-
-1. Configure the following environment variables for the edgeHub module:
-
- | Name | Value |
- | - | - |
- | `experimentalFeatures__enabled` | `true` |
- | `experimentalFeatures__nestedEdgeEnabled` | `true` |
-
-1. Configure the edgeAgent module to use the public preview image: `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4`.
- ## Network isolate downstream devices The steps so far in this article set up IoT Edge devices as either a gateway or a downstream device, and create a trusted connection between them. The gateway device handles interactions between the downstream device and IoT Hub, including authentication and message routing. Modules deployed to downstream IoT Edge devices can still create their own connections to cloud services.
The IoT Edge device at the top layer of a gateway hierarchy has a set of require
The API proxy module was designed to be customized to handle most common gateway scenarios. This article provides an example to set up the modules in a basic configuration. Refer to [Configure the API proxy module for your gateway hierarchy scenario](how-to-configure-api-proxy-module.md) for more detailed information and examples.
+# [Portal](#tab/azure-portal)
+ 1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub. 1. Select **IoT Edge** from the navigation menu. 1. Select the top layer device that you're configuring from the list of **IoT Edge devices**.
The API proxy module was designed to be customized to handle most common gateway
1. Select **Review + create** to go to the final step. 1. Select **Create** to deploy to your device.
+# [Azure CLI](#tab/azure-cli)
+
+1. In the [Azure Cloud Shell](https://shell.azure.com/), create a deployment JSON file. For example:
+
+ ```json
+ {
+ "modulesContent": {
+ "$edgeAgent": {
+ "properties.desired": {
+ "modules": {
+ "dockerContainerRegistry": {
+ "settings": {
+ "image": "registry:latest",
+ "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5000/tcp\":[{\"HostPort\":\"5000\"}]}}}"
+ },
+ "type": "docker",
+ "version": "1.0",
+ "env": {
+ "REGISTRY_PROXY_REMOTEURL": {
+                          "value": "The URL for the container registry you want this registry module to map to. For example, https://myregistry.azurecr.io"
+ },
+ "REGISTRY_PROXY_USERNAME": {
+ "value": "Username to authenticate to the container registry."
+ },
+ "REGISTRY_PROXY_PASSWORD": {
+ "value": "Password to authenticate to the container registry."
+ }
+ },
+ "status": "running",
+ "restartPolicy": "always"
+ },
+ "IoTEdgeAPIProxy": {
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-api-proxy:1.0",
+ "createOptions": "{\"HostConfig\": {\"PortBindings\": {\"443/tcp\": [{\"HostPort\":\"443\"}]}}}"
+ },
+ "type": "docker",
+ "env": {
+ "NGINX_DEFAULT_PORT": {
+ "value": "443"
+ },
+ "DOCKER_REQUEST_ROUTE_ADDRESS": {
+ "value": "registry:5000"
+ }
+ },
+ "status": "running",
+ "restartPolicy": "always",
+ "version": "1.0"
+ }
+ },
+ "runtime": {
+ "settings": {
+ "minDockerVersion": "v1.25"
+ },
+ "type": "docker"
+ },
+ "schemaVersion": "1.1",
+ "systemModules": {
+ "edgeAgent": {
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.2",
+ "createOptions": ""
+ },
+ "type": "docker"
+ },
+ "edgeHub": {
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.2",
+ "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
+ },
+ "type": "docker",
+ "env": {},
+ "status": "running",
+ "restartPolicy": "always"
+ }
+ }
+ }
+ },
+ "$edgeHub": {
+ "properties.desired": {
+ "routes": {
+ "route": "FROM /messages/* INTO $upstream"
+ },
+ "schemaVersion": "1.1",
+ "storeAndForwardConfiguration": {
+ "timeToLiveSecs": 7200
+ }
+ }
+ }
+ }
+ }
+ ```
+
+ This deployment file configures the API proxy module to listen on port 443. To prevent port binding collisions, the file configures the edgeHub module to not listen on port 443. Instead, the API proxy module will route any edgeHub traffic on port 443.
+
+1. Enter the following command to create a deployment to an IoT Edge device:
+
+ ```bash
+ az iot edge set-modules --device-id <device_id> --hub-name <iot_hub_name> --content ./<deployment_file_name>.json
+ ```
+++ ### Deploy modules to lower layer devices IoT Edge devices in lower layers of a gateway hierarchy have one required module that must be deployed to them, in addition to any workload modules you may run on the device.
Before discussing the required proxy module for IoT Edge devices in gateway hier
If your lower layer devices can't connect to the cloud, but you want them to pull module images as usual, then the top layer device of the gateway hierarchy must be configured to handle these requests. The top layer device needs to run a Docker **registry** module that is mapped to your container registry. Then, configure the API proxy module to route container requests to it. Those details are discussed in the earlier sections of this article. In this configuration, the lower layer devices should not point to cloud container registries, but to the registry running in the top layer.
-For example, instead of calling `mcr.microsoft.com/azureiotedge-api-proxy:latest`, lower layer devices should call `$upstream:443/azureiotedge-api-proxy:latest`.
+For example, instead of calling `mcr.microsoft.com/azureiotedge-api-proxy:1.0`, lower layer devices should call `$upstream:443/azureiotedge-api-proxy:1.0`.
The **$upstream** parameter points to the parent of a lower layer device, so the request will route through all the layers until it reaches the top layer, where the API proxy module routes container requests to the registry module. The `:443` port in this example should be replaced with whichever port the API proxy module on the parent device is listening on.
name = "edgeAgent"
type = "docker" [agent.config]
-image: "{Parent FQDN or IP}:443/azureiotedge-agent:1.2.0-rc4"
+image = "{Parent FQDN or IP}:443/azureiotedge-agent:1.2"
``` If you are using a local container registry, or providing the container images manually on the device, update the config file accordingly.
iot-edge How To Install Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge.md
The IoT identity service was introduced along with version 1.2 of IoT Edge. This
The steps in this section represent the typical process to install the latest version on a device that has internet connection. If you need to install a specific version, like a pre-release version, or need to install while offline, follow the [Offline or specific version installation](#offline-or-specific-version-installation-optional) steps later in this article. >[!NOTE]
->The steps in this section show you how to install IoT Edge version 1.2, which is currently in public preview. If you are looking for the steps to install the latest generally available version of IoT Edge, view the [1.1 (LTS)](?view=iotedge-2018-06&preserve-view=true) version of this article.
+>The steps in this section show you how to install IoT Edge version 1.2.
> >If you already have an IoT Edge device running an older version and want to upgrade to 1.2, use the steps in [Update the IoT Edge security daemon and runtime](how-to-update-iot-edge.md). Version 1.2 is sufficiently different from previous versions of IoT Edge that specific steps are necessary to upgrade.
If you want to install the most recent version of IoT Edge, use the following co
sudo apt-get install aziot-edge ```
-<!-- commenting out for public preview. reintroduce at GA
-
-Or, if you want to install a specific version of IoT Edge and the identity service, specify the versions from the apt list output. Specify the same versions for both services.. For example, the following command installs the most recent version of the 1.2 release:
+Or, if you want to install a specific version of IoT Edge and the identity service, specify the versions from the apt list output. Specify the same versions for both services. For example, the following command installs the most recent version of the 1.2 release:
```bash sudo apt-get install aziot-edge=1.2* aziot-identity-service=1.2* ``` >- <!-- end 1.2 --> ::: moniker-end
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-publish-subscribe.md
monikerRange: ">=iotedge-2020-11"
-# Publish and subscribe with Azure IoT Edge
+# Publish and subscribe with Azure IoT Edge (preview)
[!INCLUDE [iot-edge-version-202011](../../includes/iot-edge-version-202011.md)]
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
keywords:
Previously updated : 03/01/2021 Last updated : 04/07/2021
Some of the key differences between 1.2 and earlier versions include:
* The package name changed from **iotedge** to **aziot-edge**. * The **libiothsm-std** package is no longer used. If you used the standard package provided as part of the IoT Edge release, then your configurations can be transferred to the new version. If you used a different implementation of libiothsm-std, then any user-provided certificates like the device identity certificate, device CA, and trust bundle will need to be reconfigured.
-* A new identity service, **aziot-identity-service** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like Azure IoT Hub Device Update. <!--TODO: add link to ADU when available -->
-* The default config file has a new name and location. Formerly `/etc/iotedge/config.yaml`, your device configuration information is now expected to be in `/etc/aziot/config.toml` by default. The `iotedge config import` command can be used to help migrate configuration information form the old location and syntax to the new one.
-* Any modules that use the IoT Edge workload API to encrypt or decrypt persistent data can't be decrypted after the update. IoT Edge dynamically generates a master identity key and encryption key for internal use. This key won't be transferred to the new service. IoT Edge v1.2 will generate a new one.
+* A new identity service, **aziot-identity-service** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like [Device Update for IoT Hub](../iot-hub-device-update/understand-device-update.md).
+* The default config file has a new name and location. Formerly `/etc/iotedge/config.yaml`, your device configuration information is now expected to be in `/etc/aziot/config.toml` by default. The `iotedge config import` command can be used to help migrate configuration information from the old location and syntax to the new one.
+ * The import command cannot detect or modify access rules to a device's trusted platform module (TPM). If your device uses TPM attestation, you need to manually update the /etc/udev/rules.d/tpmaccess.rules file to give access to the aziottpm service. For more information, see [Give IoT Edge access to the TPM](how-to-auto-provision-simulated-device-linux.md?view=iotedge-2020-11&preserve-view=true#give-iot-edge-access-to-the-tpm).
+* The workload API in version 1.2 saves encrypted secrets in a new format. If you upgrade from an older version to version 1.2, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in version 1.2 are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary.
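
For reference, the migration command mentioned above can be run as in this minimal sketch, assuming the old configuration is in the default `/etc/iotedge/config.yaml` location:

```bash
# Import the 1.1 configuration into the new 1.2 format, then apply it.
sudo iotedge config import
sudo iotedge config apply
```
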
Before automating any update processes, validate that it works on test machines.
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
description: In this quickstart, learn how to create an IoT Edge device on Linux
Previously updated : 03/12/2021 Last updated : 04/07/2021
Manage your Azure IoT Edge device from the cloud to deploy a module that will se
![Diagram - deploy module from cloud to device](./media/quickstart-linux/deploy-module.png)
+<!-- [!INCLUDE [iot-edge-deploy-module](../../includes/iot-edge-deploy-module.md)]
+
+Include content included below to support versioned steps in Linux quickstart. Can update include file once Windows quickstart supports v1.2 -->
+
+One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules) directly from Azure IoT Hub.
+
+The module that you deploy in this section simulates a sensor and sends generated data. This module is a useful piece of code when you're getting started with IoT Edge because you can use the simulated data for development and testing. If you want to see exactly what this module does, you can view the [simulated temperature sensor source code](https://github.com/Azure/iotedge/blob/027a509549a248647ed41ca7fe1dc508771c8123/edge-modules/SimulatedTemperatureSensor/src/Program.cs).
+
+Follow these steps to start the **Set Modules** wizard to deploy your first module from Azure Marketplace.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
+
+1. From the menu on the left, under **Automatic Device Management**, select **IoT Edge**.
+
+1. Select the device ID of the target device from the list of devices.
+
+1. On the upper bar, select **Set Modules**.
+
+ ![Screenshot that shows selecting Set Modules.](./media/quickstart/select-set-modules.png)
+
+### Modules
+
+The first step of the wizard is to choose which modules you want to run on your device.
+
+Under **IoT Edge Modules**, open the **Add** drop-down menu, and then select **Marketplace Module**.
+
+ ![Screenshot that shows the Add drop-down menu.](./media/quickstart/add-marketplace-module.png)
+
+In **IoT Edge Module Marketplace**, search for and select the `Simulated Temperature Sensor` module. The module is added to the IoT Edge Modules section with the desired **running** status.
<!-- 1.2 --> :::moniker range=">=iotedge-2020-11"
-Since IoT Edge version 1.2 is in public preview, there is an extra step to take to update the runtime modules to their public preview versions as well.
+Select **Runtime Settings** to open the settings for the edgeHub and edgeAgent modules. This settings section is where you can manage the runtime modules by adding environment variables or changing the create options.
+
+Update the **Image** field for both the edgeHub and edgeAgent modules to use the version tag 1.2. For example:
-1. From the device details page, select **Set Modules** again.
+* `mcr.microsoft.com/azureiotedge-hub:1.2`
+* `mcr.microsoft.com/azureiotedge-agent:1.2`
-1. Select **Runtime Settings**.
+Select **Save** to apply your changes to the runtime modules.
-1. Update the **Image** field for both the IoT Edge hub and IoT Edge agent modules to use the version tag 1.2.0-rc4. For example:
+<!--end 1.2-->
- * `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4`
- * `mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4`
+Select **Next: Routes** to continue to the next step of the wizard.
-1. The simulated temperature sensor module should still be listed in the modules section. You don't need to make any changes to that module for the public preview.
+ ![Screenshot that shows continuing to the next step after the module is added.](./media/quickstart/view-temperature-sensor-next-routes.png)
-1. Select **Review + create**.
+### Routes
-1. Select **Create**.
+On the **Routes** tab, remove the default route, **route**, and then select **Next: Review + create** to continue to the next step of the wizard.
-1. On the device details page, you can select either **$edgeAgent** or **$edgeHub** to see the module details reflect the public preview version of the image.
+ >[!Note]
+ >Routes are constructed by using name and value pairs. You should see two routes on this page. The default route, **route**, sends all messages to IoT Hub (which is called `$upstream`). A second route, **SimulatedTemperatureSensorToIoTHub**, was created automatically when you added the module from Azure Marketplace. This route sends all messages from the simulated temperature module to IoT Hub. You can delete the default route because it's redundant in this case.
-<!-- end 1.2 -->
+ ![Screenshot that shows removing the default route then moving to the next step.](./media/quickstart/delete-route-next-review-create.png)
+
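For reference, after the default route is removed, the routes section of the deployment JSON might look like the following sketch; the exact FROM clause generated for the Marketplace module is an assumption:

```json
"routes": {
  "SimulatedTemperatureSensorToIoTHub": "FROM /messages/modules/SimulatedTemperatureSensor/* INTO $upstream"
}
```
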
+### Review and create
+
+Review the JSON file, and then select **Create**. The JSON file defines all of the modules that you deploy to your IoT Edge device. You'll see the **SimulatedTemperatureSensor** module and the two runtime modules, **edgeAgent** and **edgeHub**.
+
+ >[!Note]
+ >When you submit a new deployment to an IoT Edge device, nothing is pushed to your device. Instead, the device queries IoT Hub regularly for any new instructions. If the device finds an updated deployment manifest, it uses the information about the new deployment to pull the module images from the cloud then starts running the modules locally. This process can take a few minutes.
+
+After you create the module deployment details, the wizard returns you to the device details page. View the deployment status on the **Modules** tab.
+
+You should see three modules: **$edgeAgent**, **$edgeHub**, and **SimulatedTemperatureSensor**. If one or more of the modules has **YES** under **SPECIFIED IN DEPLOYMENT** but not under **REPORTED BY DEVICE**, your IoT Edge device is still starting them. Wait a few minutes, and then refresh the page.
+
+ ![Screenshot that shows Simulated Temperature Sensor in the list of deployed modules.](./media/quickstart/view-deployed-modules.png)
## View generated data
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart.md
Make sure your IoT Edge device meets the following requirements:
* Professional, Enterprise, IoT Enterprise * Windows Server 2019 build 17763 or later
-
* Hardware requirements * Minimum Free Memory: 2 GB * Minimum Free Disk Space: 10 GB - >[!NOTE] >This quickstart uses Windows Admin Center to create a deployment of IoT Edge for Linux on Windows. You can also use PowerShell. If you wish to use PowerShell to create your deployment, follow the steps in the how-to guide on [installing and provisioning Azure IoT Edge for Linux on a Windows device](how-to-install-iot-edge-on-windows.md).
Manage your Azure IoT Edge device from the cloud to deploy a module that sends t
![Diagram that shows the step to deploy a module.](./media/quickstart/deploy-module.png)
+<!--
[!INCLUDE [iot-edge-deploy-module](../../includes/iot-edge-deploy-module.md)]
+Include content included below to support versioned steps in Linux quickstart. Can update include file once Windows quickstart supports v1.2
+-->
+
+One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules) directly from Azure IoT Hub.
+
+The module that you deploy in this section simulates a sensor and sends generated data. This module is a useful piece of code when you're getting started with IoT Edge because you can use the simulated data for development and testing. If you want to see exactly what this module does, you can view the [simulated temperature sensor source code](https://github.com/Azure/iotedge/blob/027a509549a248647ed41ca7fe1dc508771c8123/edge-modules/SimulatedTemperatureSensor/src/Program.cs).
+
+Follow these steps to deploy your first module from Azure Marketplace.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
+
+1. From the menu on the left, under **Automatic Device Management**, select **IoT Edge**.
+
+1. Select the device ID of the target device from the list of devices.
+
+1. On the upper bar, select **Set Modules**.
+
+ ![Screenshot that shows selecting Set Modules.](./media/quickstart/select-set-modules.png)
+
+1. Under **IoT Edge Modules**, open the **Add** drop-down menu, and then select **Marketplace Module**.
+
+ ![Screenshot that shows the Add drop-down menu.](./media/quickstart/add-marketplace-module.png)
+
+1. In **IoT Edge Module Marketplace**, search for and select the `Simulated Temperature Sensor` module.
+
+ The module is added to the IoT Edge Modules section with the desired **running** status.
+
+1. Select **Next: Routes** to continue to the next step of the wizard.
+
+ ![Screenshot that shows continuing to the next step after the module is added.](./media/quickstart/view-temperature-sensor-next-routes.png)
+
+1. On the **Routes** tab, remove the default route, **route**, and then select **Next: Review + create** to continue to the next step of the wizard.
+
+ >[!Note]
+ >Routes are constructed by using name and value pairs. You should see two routes on this page. The default route, **route**, sends all messages to IoT Hub (which is called `$upstream`). A second route, **SimulatedTemperatureSensorToIoTHub**, was created automatically when you added the module from Azure Marketplace. This route sends all messages from the simulated temperature module to IoT Hub. You can delete the default route because it's redundant in this case.
+
+ ![Screenshot that shows removing the default route then moving to the next step.](./media/quickstart/delete-route-next-review-create.png)
+
+1. Review the JSON file, and then select **Create**. The JSON file defines all of the modules that you deploy to your IoT Edge device. You'll see the **SimulatedTemperatureSensor** module and the two runtime modules, **edgeAgent** and **edgeHub**.
+
+ >[!Note]
+ >When you submit a new deployment to an IoT Edge device, nothing is pushed to your device. Instead, the device queries IoT Hub regularly for any new instructions. If the device finds an updated deployment manifest, it uses the information about the new deployment to pull the module images from the cloud then starts running the modules locally. This process can take a few minutes.
+
+1. After you create the module deployment details, the wizard returns you to the device details page. View the deployment status on the **Modules** tab.
+
+ You should see three modules: **$edgeAgent**, **$edgeHub**, and **SimulatedTemperatureSensor**. If one or more of the modules has **YES** under **SPECIFIED IN DEPLOYMENT** but not under **REPORTED BY DEVICE**, your IoT Edge device is still starting them. Wait a few minutes, and then refresh the page.
+
+ ![Screenshot that shows Simulated Temperature Sensor in the list of deployed modules.](./media/quickstart/view-deployed-modules.png)
+ ## View the generated data In this quickstart, you created a new IoT Edge device and installed the IoT Edge runtime on it. Then you used the Azure portal to deploy an IoT Edge module to run on the device without having to make changes to the device itself.
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
description: Learn which operating systems can run the Azure IoT Edge daemon and
Previously updated : 02/11/2021 Last updated : 04/09/2021
The systems listed in the following table are considered compatible with Azure I
IoT Edge release assets and release notes are available on the [azure-iotedge releases](https://github.com/Azure/azure-iotedge/releases) page. This section reflects information from those release notes to help you visualize the components of each version more easily.
-IoT Edge components can be installed or updated individually, and are backwards compatible with components from older versions. The following table lists the components included in each release:
+The following table lists the components included in each release starting with 1.2.0. The components listed in this table can be installed or updated individually, and are backwards compatible with older versions.
-| Release | Security daemon | Edge hub<br>Edge agent | Libiothsm | Moby |
+| Release | aziot-edge | edgeHub<br>edgeAgent | aziot-identity-service |
+| - | - | -- | - |
+| **1.2** | 1.2.0 | 1.2.0 | 1.2.0 |
+
+The following table lists the components included in each release up to the 1.1 LTS release. The components listed in this table can be installed or updated individually, and are backwards compatible with older versions.
+
+| Release | iotedge | edgeHub<br>edgeAgent | libiothsm | moby |
|--|--|--|--|--|
| **1.1 LTS**<sup>1</sup> | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | |
| **1.0.10** | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br>1.0.10.3<br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | |
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see
| IoT Edge version | Microsoft.Azure.Devices.Client SDK version |
|--|--|
-| 1.1 (LTS) | 1.28.0 |
+| 1.2.0 | 1.33.4-NestedEdge |
+| 1.1 (LTS) | 1.28.0 |
| 1.0.10 | 1.28.0 |
| 1.0.9 | 1.21.1 |
| 1.0.8 | 1.20.3 |
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
The troubleshooting tool runs many checks that are sorted into these three categ
The IoT Edge check tool uses a container to run its diagnostics. The container image, `mcr.microsoft.com/azureiotedge-diagnostics:latest`, is available through the [Microsoft Container Registry](https://github.com/microsoft/containerregistry). If you need to run a check on a device without direct access to the internet, your devices will need access to the container image.
+<!-- <1.2> -->
+
+In a scenario using nested IoT Edge devices, you can get access to the diagnostics image on child devices by routing the image pull through the parent devices.
+
+```bash
+sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:<port_for_api_proxy_module>/azureiotedge-diagnostics:1.2
+```
+
+<!-- </1.2> -->
+ For information about each of the diagnostic checks this tool runs, including what to do if you get an error or warning, see [IoT Edge troubleshoot checks](https://github.com/Azure/iotedge/blob/master/doc/troubleshoot-checks.md). ## Gather debug information with 'support-bundle' command
iot-edge Tutorial Machine Learning Edge 07 Send Data To Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-07-send-data-to-hub.md
In this article, we used our development VM to simulate a leaf device sending se
To continue learning about IoT Edge capabilities, try this tutorial next: > [!div class="nextstepaction"]
-> [Create a hierarchy of IoT Edge devices (Preview)](tutorial-nested-iot-edge.md?view=iotedge-2020-11&preserve-view=true)
+> [Create a hierarchy of IoT Edge devices](tutorial-nested-iot-edge.md?view=iotedge-2020-11&preserve-view=true)
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
monikerRange: ">=iotedge-2020-11"
-# Tutorial: Create a hierarchy of IoT Edge devices (Preview)
+# Tutorial: Create a hierarchy of IoT Edge devices
[!INCLUDE [iot-edge-version-202011](../../includes/iot-edge-version-202011.md)] Deploy Azure IoT Edge nodes across networks organized in hierarchical layers. Each layer in a hierarchy is a gateway device that handles messages and requests from devices in the layer beneath it.
->[!NOTE]
->This feature requires IoT Edge version 1.2, which is in public preview, running Linux containers.
- You can structure a hierarchy of devices so that only the top layer has connectivity to the cloud, and the lower layers can only communicate with adjacent north and south layers. This network layering is the foundation of most industrial networks, which follow the [ISA-95 standard](https://en.wikipedia.org/wiki/ANSI/ISA-95).
-The goal of this tutorial is to create a hierarchy of IoT Edge devices that simulates a production environment. At the end, you will deploy the [Simulated Temperature Sensor module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.simulated-temperature-sensor) to a lower layer device without internet access by downloading container images through the hierarchy.
+The goal of this tutorial is to create a hierarchy of IoT Edge devices that simulates a simplified production environment. At the end, you will deploy the [Simulated Temperature Sensor module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.simulated-temperature-sensor) to a lower layer device without internet access by downloading container images through the hierarchy.
-To accomplish this goal, this tutorial walks you through creating a hierarchy of IoT Edge devices, deploying IoT Edge runtime containers to your devices, and configuring your devices locally. In this tutorial, you learn how to:
+To accomplish this goal, this tutorial walks you through creating a hierarchy of IoT Edge devices, deploying IoT Edge runtime containers to your devices, and configuring your devices locally. In this tutorial, you use an automated configuration tool to:
> [!div class="checklist"] >
To accomplish this goal, this tutorial walks you through creating a hierarchy of
> * Add workloads to the devices in your hierarchy. > * Use the [IoT Edge API Proxy module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.azureiotedge-api-proxy?tab=Overview) to securely route HTTP traffic over a single port from your lower layer devices.
+>[!TIP]
+>This tutorial includes a mixture of manual and automated steps to provide a showcase of nested IoT Edge features.
+>
+>If you would like an entirely automated look at setting up a hierarchy of IoT Edge devices, you can follow the scripted [Azure IoT Edge for Industrial IoT sample](https://aka.ms/iotedge-nested-sample). This scripted scenario deploys Azure virtual machines as preconfigured devices to simulate a factory environment.
+>
+>If you would like an in-depth look at the manual steps to create and manage a hierarchy of IoT Edge devices, see [the how-to guide on IoT Edge device gateway hierarchies](how-to-connect-downstream-iot-edge-device.md).
+ In this tutorial, the following network layers are defined: * **Top layer**: IoT Edge devices at this layer can connect directly to the cloud. * **Lower layers**: IoT Edge devices at layers below the top layer cannot connect directly to the cloud. They need to go through one or more intermediary IoT Edge devices to send and receive data.
-This tutorial uses a two device hierarchy for simplicity, pictured below. One device, the **top layer device**, represents a device at the top layer of the hierarchy, which can connect directly to the cloud. This device will also be referred to as the **parent device**. The other device, the **lower layer device**, represents a device at the lower layer of the hierarchy, which cannot connect directly to the cloud. You can add additional lower layer devices to represent your production environment, as needed. Devices at lower layers will also be referred to as **child devices**. The configuration of any additional lower layer devices will follow the **lower layer device**'s configuration.
+This tutorial uses a two device hierarchy for simplicity, pictured below. One device, the **top layer device**, represents a device at the top layer of the hierarchy, which can connect directly to the cloud. This device will also be referred to as the **parent device**. The other device, the **lower layer device**, represents a device at the lower layer of the hierarchy, which cannot connect directly to the cloud. You can add more lower layer devices to represent your production environment, as needed. Devices at lower layers will also be referred to as **child devices**.
![Structure of the tutorial hierarchy, containing two devices: the top layer device and the lower layer device](./media/tutorial-nested-iot-edge/tutorial-hierarchy-diagram.png)
To create a hierarchy of IoT Edge devices, you will need:
* A computer (Windows or Linux) with internet connectivity. * An Azure account with a valid subscription. If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/) before you begin. * A free or standard tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* Azure CLI v2.3.1 with the Azure IoT extension v0.10.6 or higher installed. This tutorial uses the [Azure Cloud Shell](../cloud-shell/overview.md). If you're unfamiliar with the Azure Cloud Shell, [check out a quickstart for details](./quickstart-linux.md#prerequisites).
+* A Bash shell in Azure Cloud Shell using Azure CLI v2.3.1 with the Azure IoT extension v0.10.6 or higher installed. This tutorial uses the [Azure Cloud Shell](../cloud-shell/overview.md). If you're unfamiliar with the Azure Cloud Shell, [check out a quickstart for details](./quickstart-linux.md#prerequisites).
* To see your current versions of the Azure CLI modules and extensions, run [az version](/cli/azure/reference-index?#az_version).
-* A Linux device to configure as an IoT Edge device for each device in your hierarchy. This tutorial uses two devices. If you don't have devices available, you can create Azure virtual machines for each device in your hierarchy by replacing the placeholder text in the following command and running it:
-
- ```azurecli-interactive
- az vm create \
- --resource-group <REPLACE_WITH_RESOURCE_GROUP> \
- --name <REPLACE_WITH_UNIQUE_NAMES_FOR_EACH_VM> \
- --image UbuntuLTS \
- --admin-username azureuser \
- --admin-password <REPLACE_WITH_PASSWORD>
- ```
-
-* Make sure that the following ports are open inbound: 8000, 443, 5671, 8883:
- * 8000: Used to pull Docker container images through the API proxy.
- * 443: Used between parent and child edge hubs for REST API calls.
- * 5671, 8883: Used for AMQP and MQTT.
-
- For more information, see [How to open ports to a virtual machine with the Azure portal](../virtual-machines/windows/nsg-quickstart-portal.md).
-
->[!TIP]
->If you would like an automated look at setting up a hierarchy of IoT Edge devices, you can follow the scripted [Azure IoT Edge for Industrial IoT sample](https://aka.ms/iotedge-nested-sample). This scripted scenario deploys Azure virtual machines as preconfigured devices to simulate a factory environment.
->
->If you would like to proceed though the creation of the sample hierarchy step-by-step, continue with the tutorial steps below.
-
-## Configure your IoT Edge device hierarchy
-
-### Create a hierarchy of IoT Edge devices
-
-IoT Edge devices make up the layers of your hierarchy. This tutorial will create a hierarchy of two IoT Edge devices: the **top layer device** and its child, the **lower layer device**. You can create additional child devices, as needed.
-
-To create your IoT Edge devices, you can use the Azure portal or Azure CLI.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your IoT Hub.
-
-1. From the menu on the left pane, under **Automatic Device Management**, select **IoT Edge**.
-
-1. Select **+ Add an IoT Edge device**. This device will be the top layer device, so enter an appropriate unique device ID. Select **Save**.
-
-1. Select **+ Add an IoT Edge device** again. This device will be a lower layer device, so enter an appropriate unique device ID.
-
-1. Choose **Set a parent device**, choose your top layer device from the list of devices, and select **Ok**. Select **Save**.
-
- ![Setting parent for lower layer device](./media/tutorial-nested-iot-edge/set-parent-device.png)
-
-# [Azure CLI](#tab/azure-cli)
-
-1. In the [Azure Cloud Shell](https://shell.azure.com/), enter the following command to create an IoT Edge device in your hub. This device will be the top layer device, so enter an appropriate unique device ID:
-
- ```azurecli-interactive
- az iot hub device-identity create --device-id {top_layer_device_id} --edge-enabled --hub-name {hub_name}
- ```
-
- A successful device creation will output the JSON configuration of the device.
-
- ![JSON output of a successful device creation](./media/tutorial-nested-iot-edge/json-success-output.png)
-
-1. Enter the following command to create a lower layer IoT Edge device. You can create more than one lower layer device if you want more layer in your hierarchy. Make sure to provide unique device identities to each.
-
- ```azurecli-interactive
- az iot hub device-identity create --device-id {lower_layer_device_id} --edge-enabled --hub-name {hub_name}
- ```
-
-1. Enter the following command to define each parent-child relationship between the **top layer device** and each **lower layer device**. Make sure to run this command for each lower layer device in your hierarchy.
-
- ```azurecli-interactive
- az iot hub device-identity parent set --device-id {lower_layer_device_id} --parent-device-id {top_layer_device_id} --hub-name {hub_name}
- ```
-
- This command does not have explicit output.
---
-Next, make note of each IoT Edge device's connection string. They will be used later.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to the **IoT Edge** section of your IoT Hub.
-
-1. Click on the device ID of one of the devices in the list of devices.
+* A Linux device to configure as an IoT Edge device for each device in your hierarchy. This tutorial uses two devices. If you don't have devices available, you can create Azure virtual machines for each device in your hierarchy using the command below.
-1. Select **Copy** on the **Primary Connection String** field and record it in a place of your choice.
+ Replace the placeholder text in the following command and run it twice, once for each virtual machine. Each virtual machine needs a unique DNS prefix, which will also serve as its name. The DNS prefix must conform to the following regular expression: `[a-z][a-z0-9-]{1,61}[a-z0-9]`.
-1. Repeat for all other devices.
-
-# [Azure CLI](#tab/azure-cli)
-
-1. In the [Azure Cloud Shell](https://shell.azure.com/), for each device, enter the following command to retrieve the connection string of your device and record it in a place of your choice:
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --device-id {device_id} --hub-name {hub_name}
- ```
---
-If you completed the above steps correctly, you can use the following steps to check that your parent-child relationships are correct in the Azure portal or Azure CLI.
-
-# [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to the **IoT Edge** section of your IoT Hub.
-
-1. Click on the device ID of a **lower layer device** in the list of devices.
-
-1. On the device's details page, you should see your **top layer device**'s identity listed beside the **Parent device** field.
-
- [Parent device acknowledged by child device](./media/tutorial-nested-iot-edge/lower-layer-device-parent.png)
-
-You can also child your hierarchy relationship's through your **top layer device**.
-
-1. Click on the device ID of a **top layer device** in the list of devices.
-
-1. Select the **Manage child devices** tab at the top.
-
-1. Under the list of devices, you should see your **lower layer device**.
-
- [Child device acknowledged by parent device](./media/tutorial-nested-iot-edge/top-layer-device-child.png)
-
-# [Azure CLI](#tab/azure-cli)
-
-1. In the [Azure Cloud Shell](https://shell.azure.com/) you can verify that any of your child devices successfully established their relationship with the parent device by getting the identity of the child device's acknowledged parent device. This will output the JSON configuration of the parent device, pictured above:
-
- ```azurecli-interactive
- az iot hub device-identity parent show --device-id {lower_layer_device_id} --hub-name {hub_name}
+ ```bash
+ az deployment group create \
+ --resource-group <REPLACE_WITH_YOUR_RESOURCE_GROUP> \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json" \
+ --parameters dnsLabelPrefix='<REPLACE_WITH_UNIQUE_DNS_FOR_VIRTUAL_MACHINE>' \
+ --parameters adminUsername='azureuser' \
+ --parameters authenticationType='sshPublicKey' \
+ --parameters adminPasswordOrKey="$(< ~/.ssh/id_rsa.pub)" \
+ --query "properties.outputs.[publicFQDN.value, publicSSH.value]" -o tsv
```
-You can also child your hierarchy relationship's through querying your **top layer device**.
+ The virtual machine uses SSH keys for authenticating users. If you are unfamiliar with creating and using SSH keys, you can follow [the instructions for SSH public-private key pairs for Linux VMs in Azure](https://docs.microsoft.com/azure/virtual-machines/linux/mac-create-ssh-keys).
-1. List the child devices the parent device knows:
+ IoT Edge version 1.2 is preinstalled with this ARM template, saving the need to manually install the assets on the virtual machines. If you are installing IoT Edge on your own devices, see [Install Azure IoT Edge for Linux (version 1.2)](how-to-install-iot-edge.md) or [Update IoT Edge to version 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).
- ```azurecli-interactive
- az iot hub device-identity children list --device-id {top_layer_device_id} --hub-name {hub_name}
- ```
+ A successful creation of a virtual machine using this ARM template will output your virtual machine's `SSH` handle and fully-qualified domain name (`FQDN`). You will use the SSH handle and either the FQDN or IP address of each virtual machine for configuration in later steps, so keep track of this information. A sample output is pictured below.
-
+ >[!TIP]
+   >You can also find the IP address and FQDN on the Azure portal. For the IP address, navigate to your list of virtual machines and note the **Public IP address** field. For the FQDN, go to each virtual machine's overview page and look for the **DNS name** field.
-Once you are satisfied your hierarchy is correctly structured, you are ready to proceed.
+ ![The virtual machine will output a JSON upon creation, which contains its SSH handle](./media/tutorial-nested-iot-edge/virtual-machine-outputs.png)
-### Create certificates
+* Make sure that the following ports are open inbound for all devices except the lowest layer device: 8000, 443, 5671, 8883:
+ * 8000: Used to pull Docker container images through the API proxy.
+ * 443: Used between parent and child edge hubs for REST API calls.
+ * 5671, 8883: Used for AMQP and MQTT.
-All devices in a [gateway scenario](iot-edge-as-gateway.md) need a shared certificate to set up secure connections between them. Use the following steps to create demo certificates for both devices in this scenario.
+ For more information, see [how to open ports to a virtual machine with the Azure portal](../virtual-machines/windows/nsg-quickstart-portal.md).
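
If you created the virtual machines with the ARM template above, one way to open these ports is with the Azure CLI. The resource group and VM names below are placeholders, and the rule priorities are arbitrary unused values; adjust them for your deployment.

```bash
# Open the required inbound ports on a top or middle layer VM (placeholder names).
priority=900
for port in 8000 443 5671 8883; do
  az vm open-port --resource-group myResourceGroup --name my-top-layer-vm --port $port --priority $priority
  priority=$((priority + 10))
done
```
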
-To create demo certificates on a Linux device, you need to clone the generation scripts and set them up to run locally in bash.
+## Configure your IoT Edge device hierarchy
-1. Clone the IoT Edge git repo, which contains scripts to generate demo certificates.
+### Create a hierarchy of IoT Edge devices
- ```bash
- git clone https://github.com/Azure/iotedge.git
- ```
+IoT Edge devices make up the layers of your hierarchy. This tutorial will create a hierarchy of two IoT Edge devices: the **top layer device** and its child, the **lower layer device**. You can create additional child devices as needed.
-1. Navigate to the directory in which you want to work. All certificate and key files will be created in this directory.
+To create and configure your hierarchy of IoT Edge devices, you'll use the `iotedge-config` tool. This tool simplifies the configuration of the hierarchy by automating and condensing several steps into two:
-1. Copy the config and script files from the cloned IoT Edge repo into your working directory.
+1. Setting up the cloud configuration and preparing each device configuration, which includes:
- ```bash
- cp <path>/iotedge/tools/CACertificates/*.cnf .
- cp <path>/iotedge/tools/CACertificates/certGen.sh .
- ```
+ * Creating devices in your IoT Hub
+ * Setting the parent-child relationships to authorize communication between devices
+ * Generating a chain of certificates for each device to establish secure communication between them
+ * Generating configuration files for each device
-1. Create the root CA certificate and one intermediate certificate.
+1. Installing each device configuration, which includes:
- ```bash
- ./certGen.sh create_root_and_intermediate
- ```
+ * Installing certificates on each device
+ * Applying the configuration files for each device
- This script command creates several certificate and key files, but we are using the following file as the **root CA certificate** for the gateway hierarchy:
+The `iotedge-config` tool will also make the module deployments to your IoT Edge devices automatically.
- * `<WRKDIR>/certs/azure-iot-test-only.root.ca.cert.pem`
+To use the `iotedge-config` tool to create and configure your hierarchy, follow the steps below in the Azure CLI:
-1. Create two sets of IoT Edge device CA certificates and private keys with the following command: one set for the top layer device and one set for the lower layer device. Provide memorable names for the CA certificates to distinguish them from each other.
+1. In the [Azure Cloud Shell](https://shell.azure.com/), make a directory for your tutorial's resources:
```bash
- ./certGen.sh create_edge_device_ca_certificate "{top_layer_certificate_name}"
- ./certGen.sh create_edge_device_ca_certificate "{lower_layer_certificate_name}"
+ mkdir nestedIotEdgeTutorial
```
- This script command creates several certificate and key files, but we are using the following certificate and key pair on each IoT Edge device and referenced in the config file:
-
- * `<WRKDIR>/certs/iot-edge-device-<CA cert name>-full-chain.cert.pem`
- * `<WRKDIR>/private/iot-edge-device-<CA cert name>.key.pem`
-
-Each device needs a copy of the root CA certificate and a copy of its own device CA certificate and private key. You can use a USB drive or [secure file copy](https://www.ssh.com/ssh/scp/) to move the certificates to each device.
-
-1. After the certificates are transferred, install the root CA for each device.
+1. Download the release of the configuration tool and configuration templates:
```bash
- sudo cp <path>/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
- sudo update-ca-certificates
+ cd ~/nestedIotEdgeTutorial
+ wget -O iotedge_config.tar "https://github.com/Azure-Samples/iotedge_config_cli/releases/download/latest/iotedge_config_cli.tar.gz"
+ tar -xvf iotedge_config.tar
```
- This command should output that one certificate has been added in `/etc/ssl/certs`.
-
- [Successful certificate installation message](./media/tutorial-nested-iot-edge/certificates-success-output.png)
-
-If you completed the above steps correctly, you can check that your certificates are installed in `/etc/ssl/certs` by navigating to that directory and searching for your installed certs.
+ This will create the `iotedge_config_cli_release` folder in your tutorial directory.
-1. Navigate to `/etc/ssl/certs`:
+ The template file used to create your device hierarchy is the `iotedge_config.yaml` file found in `~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial`. In the same directory, `deploymentLowerLayer.json` is a JSON deployment file containing instructions for which modules to deploy to your **lower layer device**. The `deploymentTopLayer.json` file is the same, but for your **top layer device**, as the modules deployed to each device are not the same. The `device_config.toml` file is a template for IoT Edge device configurations and will be used to automatically generate the configuration bundles for the devices in your hierarchy.
- ```bash
- cd /etc/ssl/certs
- ```
+ If you'd like to take a look at the source code and scripts for the `iotedge-config` tool, check out [the Azure-Samples repository on GitHub](https://github.com/Azure-Samples/iotedge_config_cli).
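   As a quick check (the path assumes the tutorial directory created above), you can list the extracted template files to confirm the layout described here:

   ```bash
   # Show the deployment JSONs, device_config.toml, and iotedge_config.yaml templates
   ls ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial
   ```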
-1. List the installed certs and `grep` for `azure`:
+1. Open the tutorial configuration template and edit it with your information:
```bash
- ll | grep azure
+ code ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/iotedge_config.yaml
```
- You should see a certificate hash linked to your root CA Certificate and your root CA certificate linked to the copy in your `usr/local/share` directory.
-
- [Results of search for Azure certificates](./media/tutorial-nested-iot-edge/certificate-list-results.png)
+ In the **iothub** section, populate the `iothub_hostname` and `iothub_name` fields with your information. This information can be found on the overview page of your IoT Hub on the Azure portal.
-Once you are satisfied your certificates are installed on each device, you are ready to proceed.
+ In the optional **certificates** section, you can populate the fields with the absolute paths to your certificate and key. If you leave these fields blank, the script will automatically generate self-signed test certificates for your use. If you're unfamiliar with how certificates are used in a gateway scenario, check out [the how-to guide's certificate section](how-to-connect-downstream-iot-edge-device.md#prepare-certificates).
-### Install IoT Edge on the devices
+ In the **configuration** section, the `template_config_path` is the path to the `device_config.toml` template used to create your device configurations. The `default_edge_agent` field determines what Edge Agent image lower layer devices will pull and from where.
-Installing the IoT Edge version 1.2 runtime images gives your devices access to the features necessary to operate as a hierarchy of devices.
+ In the **edgedevices** section, for a production scenario, you can edit the hierarchy tree to reflect your desired structure. For the purposes of this tutorial, accept the default tree. For each device, there is a `device_id` field, where you can name your devices. There is also the `deployment` field, which specifies the path to the deployment JSON for that device.
-To install IoT Edge, you need to install the appropriate repository configuration, install prerequisites, and install the necessary release assets.
+ You can also manually register IoT Edge devices in your IoT Hub through the Azure portal or Azure Cloud Shell. To learn how, see [the guide on how to register an IoT Edge device](how-to-register-device.md).
-1. Install the repository configuration that matches your device operating system.
+ You can define the parent-child relationships manually as well. See the [create a gateway hierarchy](how-to-connect-downstream-iot-edge-device.md#create-a-gateway-hierarchy) section of the how-to guide to learn more.
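   If you prefer that manual route, a minimal sketch using the `azure-iot` CLI extension might look like the following. The device names are illustrative placeholders, not values from this tutorial; adapt them to your own hierarchy.

   ```bash
   # Register two IoT Edge devices in your IoT hub (names are illustrative)
   az iot hub device-identity create --device-id top-layer --edge-enabled --hub-name <iot_hub_name>
   az iot hub device-identity create --device-id lower-layer --edge-enabled --hub-name <iot_hub_name>

   # Make the top layer device the parent of the lower layer device
   az iot hub device-identity parent set --device-id lower-layer --parent-device-id top-layer --hub-name <iot_hub_name>
   ```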
- * **Ubuntu Server 18.04**:
+ ![The edgedevices section of the configuration file allows you to define your hierarchy](./media/tutorial-nested-iot-edge/hierarchy-config-sample.png)
- ```bash
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
- ```
+1. Save and close the file:
- * **Raspberry Pi OS Stretch**:
+ `CTRL + S`, `CTRL + Q`
- ```bash
- curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
- ```
-
-1. Copy the generated list to the sources.list.d directory.
+1. Create an outputs directory for the configuration bundles in your tutorial resources directory:
```bash
- sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+ mkdir ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs
```
-1. Install the Microsoft GPG public key.
+1. Navigate to your `iotedge_config_cli_release` directory and run the tool to create your hierarchy of IoT Edge devices:
```bash
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+ cd ~/nestedIotEdgeTutorial/iotedge_config_cli_release
+ ./iotedge_config --config ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/iotedge_config.yaml --output ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs -f
```
-1. Update package lists on your device.
-
- ```bash
- sudo apt-get update
- ```
+ With the `--output` flag, the tool creates the device certificates, certificate bundles, and a log file in a directory of your choice. With the `-f` flag set, the tool will automatically look for existing IoT Edge devices in your IoT Hub and remove them, to avoid errors and keep your hub clean.
-1. Install the Moby engine.
+   The configuration tool creates your IoT Edge devices and sets up the parent-child relationships between them. Optionally, it creates certificates for your devices to use. If paths to deployment JSONs are provided, the tool will automatically apply these deployments to your devices, but this is not required. Finally, the tool will generate the configuration bundles for your devices and place them in the output directory. For a thorough look at the steps taken by the configuration tool, see the log file in the output directory.
- ```bash
- sudo apt-get install moby-engine
- ```
+ ![The script will display a topology of your hierarchy upon execution](./media/tutorial-nested-iot-edge/successful-setup-tool-run.png)
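   To confirm what the tool produced, you can list the output directory you created earlier; expect to see a log file plus one configuration bundle per device (the path assumes the tutorial layout used above):

   ```bash
   ls ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs
   ```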
-1. Install IoT Edge and the IoT identity service.
-
- ```bash
- sudo apt-get install aziot-edge
- ```
-
-If you completed the above steps correctly, you saw the Azure IoT Edge banner message requesting that you update the Azure IoT Edge configuration file, `/etc/aziot/config.toml`, on each device in your hierarchy. If so, you are ready to proceed.
+Double-check that the topology output from the script looks correct. Once you are satisfied your hierarchy is correctly structured, you are ready to proceed.
### Configure the IoT Edge runtime

In addition to the provisioning of your devices, the configuration steps establish trusted communication between the devices in your hierarchy using the certificates you created earlier. The steps also begin to establish the network structure of your hierarchy. The top layer device will maintain internet connectivity, allowing it to pull images for its runtime from the cloud, while lower layer devices will route through the top layer device to access these images.
-To configure the IoT Edge runtime, you need to modify several components of the configuration file. The configurations slightly differ between the **top layer device** and a **lower layer device**, so be mindful of which device's configuration file you are editing for each step. Configuring the IoT Edge runtime for your devices consists of four steps, all accomplished by editing the IoT Edge configuration file:
-
-* Manually provision each device by adding that device's connection string to the configuration file.
-
-* Finish setting up your device's certificates by pointing the configuration file to the device CA certificate, device CA private key, and root CA certificate.
-
-* Bootstrap the system using the IoT Edge agent.
+To configure the IoT Edge runtime, you need to apply the configuration bundles created by the setup script to your devices. The configurations differ slightly between the **top layer device** and a **lower layer device**, so be mindful of which configuration bundle you are applying to each device.
-* Update the **hostname** parameter for your **top layer** device, and update both the **hostname** parameter and **parent_hostname** parameter for your **lower layer** devices.
-
-Complete these steps and restart the IoT Edge service to configure your devices.
-
->[!TIP]
->When navigating the configuration file in Nano, you can use **Ctrl + W** to search for keywords in the file.
-
-1. On each device, create a configuration file based on the included template.
-
- ```bash
- sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
+1. Each device needs its corresponding configuration bundle. You can use a USB drive or [secure file copy](https://www.ssh.com/ssh/scp/) to move the configuration bundles to each device.
-1. On each device, open the IoT Edge configuration file.
+ Be sure to send the correct configuration bundle to each device.
```bash
- sudo nano /etc/aziot/config.toml
+ scp <PATH_TO_CONFIGURATION_BUNDLE> <USER>@<VM_IP_OR_FQDN>:~
```
-1. For your **top layer** device, find the **Hostname** section. Uncomment the line with the `hostname` parameter and update the value to be the fully qualified domain name (FQDN) or IP address of the IoT Edge device. Use whichever value you choose consistently across the devices in your hierarchy.
+ The `:~` means that the configuration folder will be placed in the home directory on the virtual machine.
- ```toml
- hostname = "<device fqdn or IP>"
- ```
-
- >[!TIP]
- >To paste clipboard contents into Nano `Shift+Right Click` or press `Shift+Insert`.
+1. Log on to your virtual machine to apply the configuration bundle to the device:
-1. For any IoT Edge device in **lower layers**, find the **Parent hostname** section. Uncomment the line with the `parent_hostname` parameter and update the value to point to the FQDN or IP of the parent device. Use the exact value that you put in the parent device's **hostname** field. For the IoT Edge device in the **top layer**, leave this parameter commented out.
-
- ```toml
- parent_hostname = "<parent device fqdn or IP>"
+ ```bash
+ ssh <USER>@<VM_IP_OR_FQDN>
```
- > [!NOTE]
- > For hierarchies with more than one lower layer, update the *parent_hostname* field with the FQDN of the device in the layer immediately above.
-
-1. Find the **Trust bundle cert** section of the file. Uncomment the line with the `trust_bundle_cert` parameter and update the value with the file URI path to the root CA certificate shared by all devices in the gateway hierarchy.
+1. On each device, unzip the configuration bundle. You'll need to install the unzip utility first:
- ```toml
- trust_bundle_cert = "<root CA certificate>"
+ ```bash
+   sudo apt install unzip
+ unzip ~/<PATH_TO_CONFIGURATION_BUNDLE>/<CONFIGURATION_BUNDLE>.zip
```
-1. Find the **Provisioning** section of the file. Uncomment the lines for **Manual provisioning with connection string**. For each device in your hierarchy, update the value of **connection_string** with the connection string from your IoT Edge device.
+1. On each device, apply the configuration bundle to the device:
- ```toml
- # Manual provisioning with connection string
- [provisioning]
- source = "manual"
- connection_string: "<Device connection string>"
+ ```bash
+ sudo ./install.sh
```
-1. Find the **Default Edge Agent** section.
-
- * For your **top layer** device, update the edgeAgent image value to the public preview version of the module in Azure Container Registry.
-
- ```toml
- [agent.config]
- image = "mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4"
- ```
-
- * For each IoT Edge device in **lower layers**, update the edgeAgent image to point to the parent device followed by the port that the API proxy is listening on. You will deploy the API proxy module to the parent device in the next section.
-
- ```toml
- [agent.config]
- image = "<parent hostname value>:8000/azureiotedge-agent:1.2.0-rc4"
- ```
-
-1. Find the **Edge CA certificate** section. Uncomment the first three lines of this section. Then, update the two parameters to point to the device CA certificate and device CA private key files that you created in the previous section and moved to the IoT Edge device. Provide the file URI paths, which take the format `file:///<path>/<filename>`, such as `file:///certs/iot-edge-device-ca-top-layer-device.key.pem`.
-
- ```yml
- [edge_ca]
- cert = "<File URI path to the device full chain CA certificate unique to this device.>"
- pk = "<File URI path to the device CA private key unique to this device.>"
- ```
+ On the **top layer device**, you will receive a prompt to enter the hostname. On the **lower layer device**, it will ask for the hostname and parent's hostname. Supply the appropriate IP or FQDN for each prompt. You can use either, but be consistent in your choice across devices. The output of the install script is pictured below.
- >[!NOTE]
- >Make sure to use the **full chain** CA certificate pathway and filename to populate the `device_ca_cert` field.
+ If you want a closer look at what modifications are being made to your device's configuration file, see [the configure IoT Edge on devices section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#configure-iot-edge-on-devices).
-1. Save and close the file.
+ ![Installing the configuration bundles will update the config.toml files on your device and restart all IoT Edge services automatically](./media/tutorial-nested-iot-edge/configuration-install-output.png)
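   If you want to confirm the result directly on a device, you can print the updated IoT Edge configuration after the install script finishes (this assumes the default configuration path used by IoT Edge version 1.2):

   ```bash
   sudo cat /etc/aziot/config.toml
   ```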
- `CTRL + X`, `Y`, `Enter`
+If you completed the above steps, you can verify that your devices are configured correctly.
-1. After entering the provisioning information in the configuration file, apply the changes:
+Run the configuration and connectivity checks on your devices. For the **top layer device**:
```bash
- sudo iotedge config apply
+ sudo iotedge check
```
-Before you continue, make sure that you have updated the configuration file of each device in the hierarchy. Depending on your hierarchy structure, you configured one **top layer device** and one or more **lower layer device(s)**.
-
-If you completed the above steps correctly, you can check your devices are configured correctly.
-
-1. Run the configuration and connectivity checks on your devices:
+For the **lower layer device**, the diagnostics image needs to be manually passed in the command:
```bash
- sudo iotedge check
+ sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:8000/azureiotedge-diagnostics:1.2
   ```

On your **top layer device**, expect to see an output with several passing evaluations and at least one warning. The check for the `latest security daemon` will warn you that another IoT Edge version is the latest stable version, because IoT Edge version 1.2 is in public preview. You may see additional warnings about logs policies and, depending on your network, DNS policies.
-On a **lower layer device**, expect to see an output similar to the top layer device, but with an additional warning indicating the EdgeAgent module cannot be pulled from upstream. This is acceptable, as the IoT Edge API Proxy module and Docker Container Registry module, which lower layer devices will pull images through, are not yet deployed to the **top layer device**.
+<!-- Add pic after GA -->
+<!-- KEEP! A sample output of the `iotedge check` is shown below: -->
-A sample output of `iotedge check` is shown below:
-
-[Sample configuration and connectivity results](./media/tutorial-nested-iot-edge/configuration-and-connectivity-check-results.png)
+<!-- KEEP! ![Sample configuration and connectivity results](./media/tutorial-nested-iot-edge/configuration-and-connectivity-check-results.png) -->
Once you are satisfied your configurations are correct on each device, you are ready to proceed.
-## Deploy modules to the top layer device
-
-Modules serve to complete the deployment and the IoT Edge runtime to your devices and further define the structure of your hierarchy. The IoT Edge API Proxy module securely routes HTTP traffic over a single port from your lower layer devices. The Docker Registry module allows for a repository of Docker images that your lower layer devices can access by routing image pulls to the top layer device.
-
-To deploy modules to your top layer device, you can use the Azure portal or Azure CLI.
-
->[!NOTE]
->The remaining steps to complete the configuration of the IoT Edge runtime and deploy workloads are not done on your IoT Edge devices.
-
-# [Portal](#tab/azure-portal)
-
-In the [Azure portal](https://ms.portal.azure.com/):
-
-1. Navigate to your IoT Hub.
-
-1. From the menu on the left pane, under **Automatic Device Management**, select **IoT Edge**.
-
-1. Click on the device ID of your **top layer** edge device in the list of devices.
-
-1. On the upper bar, select **Set Modules**.
-
-1. Select **Runtime Settings**, next to the gear icon.
-
-1. Under **Edge Hub**, in the image field, enter `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4`.
-
- ![Edit the Edge Hub's image](./media/tutorial-nested-iot-edge/edge-hub-image.png)
-
-1. Add the following environment variables to your Edge Hub module:
-
- | Name | Value |
- | - | - |
- | `experimentalFeatures__enabled` | `true` |
- | `experimentalFeatures__nestedEdgeEnabled` | `true` |
-
- ![Edit the Edge Hub's environment variables](./media/tutorial-nested-iot-edge/edge-hub-environment-variables.png)
-
-1. Under **Edge Agent**, in the image field, enter `mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4`. Select **Save**.
-
-1. Add the Docker registry module to your top layer device. Select **+ Add** and choose **IoT Edge Module** from the dropdown. Provide the name `registry` for your Docker registry module and enter `registry:latest` for the image URI. Next, add environment variables and create options to point your local registry module at the Microsoft container registry to download container images from and to serve these images at registry:5000.
+## Deploy modules to your devices
-1. Under the environment variables tab, enter the following environment variable name-value pair:
+The module deployments to your devices were automatically generated when the devices were created. The `iotedge-config-cli` tool applied the deployment JSONs for the **top and lower layer devices** after they were created. The module deployments were pending while you configured the IoT Edge runtime on each device. Once you configured the runtime, the deployments to the **top layer device** began. After those deployments completed, the **lower layer device** could use the **IoT Edge API Proxy** module to pull its necessary images.
- | Name | Value |
- | - | - |
- | `REGISTRY_PROXY_REMOTEURL` | `https://mcr.microsoft.com` |
+In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **top layer device's** deployment JSON to understand what modules were deployed to your device:
-1. Under the container create options tab, enter the following JSON:
-
- ```json
- {
- "HostConfig": {
- "PortBindings": {
- "5000/tcp": [
- {
- "HostPort": "5000"
- }
- ]
- }
- }
- }
+ ```bash
+   cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentTopLayer.json
```
-1. Next, add the API proxy module to your top layer device. Select **+ Add** and choose **Marketplace Module** from the dropdown. Search for `IoT Edge API Proxy` and select the module. The IoT Edge API proxy uses port 8000 and is configured to use your registry module named `registry` on port 5000 by default.
+In addition to the runtime modules **IoT Edge Agent** and **IoT Edge Hub**, the **top layer device** receives the **Docker registry** module and the **IoT Edge API Proxy** module.
-1. Select **Review + create**, then **Create** to complete the deployment. Your top layer device's IoT Edge runtime, which has access to the internet, will pull and run the **public preview** configs for IoT Edge hub and IoT Edge agent.
+The **Docker registry** module acts as a local proxy for an existing container registry. In this case, `REGISTRY_PROXY_REMOTEURL` points to the Microsoft Container Registry. In the `createOptions`, you can see that it communicates on port 5000.
- ![Complete deployment containing Edge Hub, Edge Agent, Registry Module, and API Proxy Module](./media/tutorial-nested-iot-edge/complete-top-layer-deployment.png)
+The **IoT Edge API Proxy** module routes HTTP requests to other modules, allowing lower layer devices to pull container images or push blobs to storage. In this tutorial, it communicates on port 8000 and is configured to route Docker container image pull requests to your **Docker registry** module on port 5000. Any blob storage upload requests route to the AzureBlobStorageonIoTEdge module on port 11002. For more information about the **IoT Edge API Proxy** module and how to configure it, see the module's [how-to guide](how-to-configure-api-proxy-module.md).
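To see those settings for yourself, you can pull both module definitions out of the deployment JSON with `jq` in the Azure Cloud Shell. This is only a convenience sketch: the module names `registry` and `IoTEdgeAPIProxy` and the file path match the tutorial deployment described above, so adjust them if your deployment differs.

```bash
# Show the Docker registry and API proxy module settings from the top layer deployment
jq '.modulesContent."$edgeAgent"."properties.desired".modules | {registry, IoTEdgeAPIProxy}' \
  ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentTopLayer.json
```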
-# [Azure CLI](#tab/azure-cli)
+If you'd like to see how to create a deployment like this through the Azure portal or Azure Cloud Shell, see the [top layer device section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-top-layer-devices).
-1. In the [Azure Cloud Shell](https://shell.azure.com/), enter the following command to create a deployment.json file:
-
- ```azurecli-interactive
- code deploymentTopLayer.json
- ```
-
-1. Copy the contents of the JSON below into your deployment.json, save it, and close it.
-
- ```json
- {
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "modules": {
- "registry": {
- "settings": {
- "image": "registry:latest",
- "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5000/tcp\":[{\"HostPort\":\"5000\"}]}}}"
- },
- "type": "docker",
- "version": "1.0",
- "env": {
- "REGISTRY_PROXY_REMOTEURL": {
- "value": "https://mcr.microsoft.com"
- }
- },
- "status": "running",
- "restartPolicy": "always"
- },
- "IoTEdgeAPIProxy": {
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-api-proxy",
- "createOptions": "{\"HostConfig\": {\"PortBindings\": {\"8000/tcp\": [{\"HostPort\":\"8000\"}]}}}"
- },
- "type": "docker",
- "env": {
- "NGINX_DEFAULT_PORT": {
- "value": "8000"
- },
- "DOCKER_REQUEST_ROUTE_ADDRESS": {
- "value": "registry:5000"
- },
- "BLOB_UPLOAD_ROUTE_ADDRESS": {
- "value": "AzureBlobStorageonIoTEdge:11002"
- }
- },
- "status": "running",
- "restartPolicy": "always",
- "version": "1.0"
- }
- },
- "runtime": {
- "settings": {
- "minDockerVersion": "v1.25"
- },
- "type": "docker"
- },
- "schemaVersion": "1.1",
- "systemModules": {
- "edgeAgent": {
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4",
- "createOptions": ""
- },
- "type": "docker"
- },
- "edgeHub": {
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4",
- "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
- },
- "type": "docker",
- "env": {
- "experimentalFeatures__enabled": {
- "value": "true"
- },
- "experimentalFeatures__nestedEdgeEnabled": {
- "value": "true"
- }
- },
- "status": "running",
- "restartPolicy": "always"
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "routes": {
- "route": "FROM /messages/* INTO $upstream"
- },
- "schemaVersion": "1.1",
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- }
- }
- }
- ```
+In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **lower layer device's** deployment JSON to understand what modules were deployed to your device:
-1. Enter the following command to create a deployment to your top layer edge device:
-
- ```azurecli-interactive
- az iot edge set-modules --device-id <top_layer_device_id> --hub-name <iot_hub_name> --content ./deploymentTopLayer.json
+ ```bash
+   cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentLowerLayer.json
   ```
-If you completed the above steps correctly, your **top layer device** should report the four modules: the IoT Edge API Proxy Module, the Docker Container Registry module, and the system modules, as **Specified in Deployment**. It may take a few minutes for the device to receive its new deployment and start the modules. Refresh the page until you see the IoTEdgeAPIProxy and registry modules listed as **Reported by Device**. Once the modules are reported by the device, you are ready to continue.
-
-## Deploy modules to the lower layer device
+You can see under `systemModules` that the **lower layer device's** runtime modules are set to pull from `$upstream:8000`, instead of `mcr.microsoft.com`, as the **top layer device** did. The **lower layer device** sends Docker image requests to the **IoT Edge API Proxy** module on port 8000, as it cannot directly pull the images from the cloud. The other module deployed to the **lower layer device**, the **Simulated Temperature Sensor** module, also makes its image request to `$upstream:8000`.
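As a quick way to confirm this, you can list every image URI referenced in the lower layer deployment (a sketch; the path assumes the tutorial layout used earlier). Each result should start with `$upstream:8000`:

```bash
grep -o '"image": *"[^"]*"' ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentLowerLayer.json
```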
-Modules also serve as the workloads of your lower layer devices. The Simulated Temperature Sensor module creates sample telemetry data to provide a functional flow of data through your hierarchy of devices.
+If you'd like to see how to create a deployment like this through the Azure portal or Azure Cloud Shell, see the [lower layer device section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-lower-layer-devices).
-To deploy modules to your lower layer devices, you can use the Azure portal or Azure CLI.
+You can view the status of your modules using the command:
-# [Portal](#tab/azure-portal)
-
-In the [Azure portal](https://ms.portal.azure.com/):
-
-1. Navigate to your IoT Hub.
-
-1. From the menu on the left pane, under **Automatic Device Management**, select **IoT Edge**.
-
-1. Click on the device ID of your lower layer device in the list of IoT Edge devices.
-
-1. On the upper bar, select **Set Modules**.
-
-1. Select **Runtime Settings**, next to the gear icon.
-
-1. Under **Edge Hub**, in the image field, enter `$upstream:8000/azureiotedge-hub:1.2.0-rc4`.
-
-1. Add the following environment variables to your Edge Hub module:
-
- | Name | Value |
- | - | - |
- | `experimentalFeatures__enabled` | `true` |
- | `experimentalFeatures__nestedEdgeEnabled` | `true` |
-
-1. Under **Edge Agent**, in the image field, enter `$upstream:8000/azureiotedge-agent:1.2.0-rc4`. Select **Save**.
+ ```bash
+ az iot hub module-twin show --device-id <edge_device_id> --module-id '$edgeAgent' --hub-name <iot_hub_name> --query "properties.reported.[systemModules, modules]"
+ ```
-1. Add the temperature sensor module. Select **+ Add** and choose **Marketplace Module** from the dropdown. Search for `Simulated Temperature Sensor` and select the module.
+ This command will output all the edgeAgent reported properties. Here are some helpful ones for monitoring the status of the device: *runtime status*, *runtime start time*, *runtime last exit time*, *runtime restart count*.
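   For example, to check only the reported runtime status of the system modules (a narrower query that assumes the standard reported property names), you could run:

   ```bash
   az iot hub module-twin show --device-id <edge_device_id> --module-id '$edgeAgent' --hub-name <iot_hub_name> \
     --query "properties.reported.systemModules.[edgeAgent.runtimeStatus, edgeHub.runtimeStatus]"
   ```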
-1. Under **IoT Edge Modules**, select the `Simulated Temperature Sensor` module you just added and update its image URI to point to `$upstream:8000/azureiotedge-simulated-temperature-sensor:1.0`.
+You can also see the status of your modules on the [Azure portal](https://ms.portal.azure.com/). Navigate to the **IoT Edge** section of your IoT Hub to see your devices and modules.
-1. Select **Save**, **Review + create**, and **Create** to complete the deployment.
+Once you are satisfied with your module deployments, you are ready to proceed.
- ![Complete deployment containing Edge Hub, Edge Agent, and Simulated Temperature Sensor](./media/tutorial-nested-iot-edge/complete-lower-layer-deployment.png)
+## View generated data
-# [Azure CLI](#tab/azure-cli)
+The **Simulated Temperature Sensor** module that you pushed generates sample environment data. It sends messages that include ambient temperature and humidity, machine temperature and pressure, and a timestamp.
-1. In the [Azure Cloud Shell](https://shell.azure.com/), enter the following command to create a deployment.json file:
+You can view these messages through the [Azure Cloud Shell](https://shell.azure.com/):
```azurecli-interactive
- code deploymentLowerLayer.json
- ```
-
-1. Copy the contents of the JSON below into your deployment.json, save it, and close it.
-
- ```json
- {
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "modules": {
- "simulatedTemperatureSensor": {
- "settings": {
- "image": "$upstream:8000/azureiotedge-simulated-temperature-sensor:1.0",
- "createOptions": ""
- },
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "version": "1.0"
- }
- },
- "runtime": {
- "settings": {
- "minDockerVersion": "v1.25"
- },
- "type": "docker"
- },
- "schemaVersion": "1.1",
- "systemModules": {
- "edgeAgent": {
- "settings": {
- "image": "$upstream:8000/azureiotedge-agent:1.2.0-rc4",
- "createOptions": ""
- },
- "type": "docker"
- },
- "edgeHub": {
- "settings": {
- "image": "$upstream:8000/azureiotedge-hub:1.2.0-rc4",
- "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"443/tcp\":[{\"HostPort\":\"443\"}],\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}"
- },
- "type": "docker",
- "env": {
- "experimentalFeatures__enabled": {
- "value": "true"
- },
- "experimentalFeatures__nestedEdgeEnabled": {
- "value": "true"
- }
- },
- "status": "running",
- "restartPolicy": "always"
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "routes": {
- "route": "FROM /messages/* INTO $upstream"
- },
- "schemaVersion": "1.1",
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- }
- }
- }
+ az iot hub monitor-events -n <iothub_name> -d <lower-layer-device-name>
```
-1. Enter the following command to create a set modules deployment to your lower layer edge device:
-
- ```azurecli-interactive
- az iot edge set-modules --device-id <lower_layer_device_id> --hub-name <iot_hub_name> --content ./deploymentLowerLayer.json
---
-Notice that the image URI that we used for the simulated temperature sensor module pointed to `$upstream:8000` instead of to a container registry. We configured this device to not have direct connections to the cloud, because it's in a lower layer. To pull container images, this device requests the image from its parent device instead. At the top layer, the API proxy module routes this container request to the registry module, which handles the image pull.
-
-If you completed the above steps correctly, your **lower layer device** should report three modules: the temperature sensor module and the system modules, as **Specified in Deployment**. It may take a few minutes for the device to receive its new deployment, request the container image, and start the module. Refresh the page until you see the temperature sensor module listed as **Reported by Device**. Once the modules are reported by the device, you are ready to continue.
-
## Troubleshooting

Run the `iotedge check` command to verify the configuration and to troubleshoot errors.
When you run `iotedge check` from the lower layer, the program tries to pull the
In this tutorial, we use port 8000, so we need to specify it:

```bash
-sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:8000/azureiotedge-diagnostics:1.2.0-rc4
+sudo iotedge check --diagnostics-image-name $upstream:8000/azureiotedge-diagnostics:1.2.0-rc4
```

The `azureiotedge-diagnostics` value is pulled from the container registry that's linked with the registry module. This tutorial has it set by default to https://mcr.microsoft.com:
The `azureiotedge-diagnostics` value is pulled from the container registry that'
| - | - | | `REGISTRY_PROXY_REMOTEURL` | `https://mcr.microsoft.com` |
-If you're using a private container registry, make sure that all the images (for example, IoTEdgeAPIProxy, edgeAgent, edgeHub, and diagnostics) are present in the container registry.
-
-## View generated data
-
-The **Simulated Temperature Sensor** module that you pushed generates sample environment data. It sends messages that include ambient temperature and humidity, machine temperature and pressure, and a timestamp.
-
-You can watch the messages arrive at your IoT hub by using the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
-
-You can also view these messages through the [Azure Cloud Shell](https://shell.azure.com/):
-
- ```azurecli-interactive
- az iot hub monitor-events -n <iothub_name> -d <lower-layer-device-name>
- ```
+If you're using a private container registry, make sure that all the images (IoTEdgeAPIProxy, edgeAgent, edgeHub, Simulated Temperature Sensor, and diagnostics) are present in the container registry.
## Clean up resources
To delete the resources:
## Next steps
-In this tutorial, you configured two IoT Edge devices as gateways and set one as the parent device of the other. Then, you demonstrated pulling a container image onto the child device through a gateway.
+In this tutorial, you configured two IoT Edge devices as gateways and set one as the parent device of the other. Then, you demonstrated pulling a container image onto the child device through a gateway using the IoT Edge API Proxy module. See [the how-to guide on the proxy module's use](how-to-configure-api-proxy-module.md) if you want to learn more.
+
+To learn more about using gateways to create hierarchical layers of IoT Edge devices, see [the how-to guide on connecting downstream IoT Edge devices](how-to-connect-downstream-iot-edge-device.md).
To see how Azure IoT Edge can create more solutions for your business, continue on to the other tutorials.
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/version-history.md
description: Discover what's new in IoT Edge with information about new features
Previously updated : 03/01/2021 Last updated : 04/07/2021
Azure IoT Edge is a product built from the open-source IoT Edge project hosted o
The IoT Edge documentation on this site is available for two different versions of the product, so that you can choose the content that applies to your IoT Edge environment. Currently, the two supported versions are:
-* **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This documentation version will be stable through the supported lifetime of version 1.1, and will not reflect new features released in later versions. The 1.1 release is the latest generally available version of IoT Edge.
-* **IoT Edge 1.2 (preview)** contains additional content for features and capabilities that are in the latest preview release, [1.2-rc4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0-rc4).
+* **IoT Edge 1.2** contains additional content for new features and capabilities that are in the latest stable release.
+* **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This documentation version will be stable through the supported lifetime of version 1.1, and will not reflect new features released in later versions.
For more information about IoT Edge releases, see [Azure IoT Edge supported systems](support.md).
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights | | | - | - | - |
-| [1.2-rc4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0-rc1) | Preview | March 2021 | New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12)
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
-| [1.2-rc1](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0-rc1) | Preview | November 2020 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true) |
| [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-register-device.md) | | [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | [X.509 auto-provisioning with DPS](how-to-auto-provision-x509-certs.md)<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
iot-fundamentals Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-glossary.md
In [IoT Edge](#iot-edge), a module is a Docker container that you can deploy to
## O
+### Ontology
+
+A set of models for a particular domain, such as real estate, smart cities, IoT systems, energy grids, and more. Ontologies are often used as schemas for knowledge graphs like the ones in [Azure Digital Twins](#azure-digital-twins), because they provide a starting point based on industry standards and best practices.
+
+For more information about ontologies, see [What is an ontology?](../digital-twins/concepts-ontologies.md)
+ ### Operations monitoring IoT Hub [operations monitoring](../iot-hub/iot-hub-operations-monitoring.md) enables you to monitor the status of operations on your IoT hub in real time. [IoT Hub](#iot-hub) tracks events across several categories of operations. You can opt into sending events from one or more categories to an IoT Hub endpoint for processing. You can monitor the data for errors or set up more complex processing based on data patterns.
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/connected-cache-nested-level.md
The diagram below describes the scenario where one Azure IoT Edge gateway as dir
## Child gateway configuration

>[!Note]
->If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, refer to [Tutorial - Create a hierarchy of IoT Edge devices - Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11#deploy-modules-to-the-lower-layer-device) for more details.
+>If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, refer to [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11#deploy-modules-to-lower-layer-devices) for more details.
1. Modify the image path for the Edge agent as demonstrated in the example below:
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-scaling.md
IoT Hub also offers a free tier that is meant for testing and evaluation. It has
Azure IoT Hubs contain many core components of [Azure Event Hubs](../event-hubs/event-hubs-features.md), including [Partitions](../event-hubs/event-hubs-features.md#partitions). Event streams for IoT Hubs are generally populated with incoming telemetry data that is reported by various IoT devices. The partitioning of the event stream is used to reduce contentions that occur when concurrently reading and writing to event streams.
-The partition limit is chosen when IoT Hub is created, and cannot be changed. The maximum partition limit for basic tier IoT Hub and standard tier IoT Hub is 32. Most IoT hubs only need 4 partitions. For more information on determining the partitions, see the Event Hubs FAQ [How many partitions do I need?](../event-hubs/event-hubs-faq.md#how-many-partitions-do-i-need)
+The partition limit is chosen when IoT Hub is created, and cannot be changed. The maximum partition limit for basic tier IoT Hub and standard tier IoT Hub is 32. Most IoT hubs only need 4 partitions. For more information on determining the partitions, see the Event Hubs FAQ [How many partitions do I need?](../event-hubs/event-hubs-faq.yml#how-many-partitions-do-i-need-)
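For example, the partition count can only be supplied when the hub is created; a minimal Azure CLI sketch (resource names are placeholders) looks like this:

```bash
# The partition count is fixed once the hub is created and cannot be changed later
az iot hub create --name <iot_hub_name> --resource-group <resource_group> --sku S1 --partition-count 8
```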
## Tier upgrade
machine-learning Graph Search Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/graph-search-syntax.md
Title: "Graph search query syntax"
-description: Learn how to use the search query syntax in Azure Machine Learning designer to search for nodes in in pipeline graph.
+description: Learn how to use the search query syntax in Azure Machine Learning designer to search for nodes in pipeline graph.
-- Previously updated : 8/24/2020
++ Last updated : 03/24/2021

# Graph search query syntax
-In this article, you learn about the graph search query syntax in Azure Machine Learning. The graph search feature lets you search for a node by its name and properties.
+In this article, you learn about the graph search functionality in Azure Machine Learning.
- ![Animated screenshot showing an example graph search experience](media/search/graph-search.gif)
+Graph search lets you quickly navigate to a node when you are debugging or building a pipeline. You can either type a keyword or query in the input box in the toolbar, or under the search tab in the left panel, to trigger a search. All matched results will be highlighted in yellow on the canvas, and if you select a result in the left panel, the node on the canvas will be highlighted in red.
-Graph search supports full-text keyword search on node name and comments. You can also filters on node property like runStatus, duration, computeTarget. The keyword search is based on Lucene query. A complete search query looks like this:
+![Screenshot showing an example graph search experience](media/search/graph-search-0322.png)
-**[lucene query | [filter query]**
+Graph search supports full-text keyword search on node names and comments. You can also filter on node properties like runStatus, duration, and computeTarget. The keyword search is based on the Lucene query syntax. A complete search query looks like this:
+
+**[[lucene query] | [filter query]]**
You can use either a Lucene query or a filter query. To use both, use the **|** separator. The syntax of the filter query is stricter than that of the Lucene query, so if the input can be parsed as both, the filter query will be applied.
+For example, `data OR model | compute in {cpucluster}` searches for nodes whose name or comment contains `data` or `model` and whose compute is cpucluster.
## Lucene query
You can use the following node properties as keys:
- compute
- duration
- reuse
+- publish
+- tags
And use the following operators:
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
For more information on the classes, methods, and parameters used in this exampl
To deploy using the CLI, use the following command. Replace `myaks` with the name of the AKS compute target. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:

```azurecli-interactive
-az ml model deploy -ct myaks -m mymodel:1 -n myservice -ic inferenceconfig.json -dc deploymentconfig.json
+az ml model deploy --ct myaks -m mymodel:1 -n myservice --ic inferenceconfig.json --dc deploymentconfig.json
```

[!INCLUDE [deploymentconfig](../../includes/machine-learning-service-aks-deploy-config.md)]
machine-learning Spark Advanced Data Exploration Modeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/spark-advanced-data-exploration-modeling.md
The models we use include logistic and linear regression, random forests, and gr
* [Linear regression with SGD](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.regression.LinearRegressionWithSGD.html#pyspark.mllib.regression.LinearRegressionWithSGD) is a linear regression model that uses a Stochastic Gradient Descent (SGD) method for optimization and feature scaling to predict the tip amounts paid.
-* [Logistic regression with LBFGS](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionWithLBFGS) or "logit" regression, is a regression model that can be used when the dependent variable is categorical to do data classification. LBFGS is a quasi-Newton optimization algorithm that approximates the BroydenΓÇôFletcherΓÇôGoldfarbΓÇôShanno (BFGS) algorithm using a limited amount of computer memory and that is widely used in machine learning.
+* [Logistic regression with LBFGS](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.classification.LogisticRegressionWithLBFGS.html), or "logit" regression, is a regression model that can be used when the dependent variable is categorical to do data classification. LBFGS is a quasi-Newton optimization algorithm that approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory and that is widely used in machine learning.
* [Random forests](https://spark.apache.org/docs/latest/mllib-ensembles.html#Random-Forests) are ensembles of decision trees. They combine many decision trees to reduce the risk of overfitting. Random forests are used for regression and classification and can handle categorical features and can be extended to the multiclass classification setting. They do not require feature scaling and are able to capture non-linearities and feature interactions. Random forests are one of the most successful machine learning models for classification and regression. * [Gradient boosted trees](https://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-trees-gbts) (GBTS) are ensembles of decision trees. GBTS train decision trees iteratively to minimize a loss function. GBTS is used for regression and classification and can handle categorical features, do not require feature scaling, and are able to capture non-linearities and feature interactions. They can also be used in a multiclass-classification setting.
print "Time taken to execute above cell: " + str(timedelta) + " seconds";
Time taken to execute above cell: 0.31 second

### Feature scaling
-Feature scaling, also known as data normalization, insures that features with widely disbursed values are not given excessive weigh in the objective function. The code for feature scaling uses the [StandardScaler](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.feature.StandardScaler) to scale the features to unit variance. It is provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD). SGD is a popular algorithm for training a wide range of other machine learning models such as regularized regressions or support vector machines (SVM).
+Feature scaling, also known as data normalization, ensures that features with widely dispersed values are not given excessive weight in the objective function. The code for feature scaling uses the [StandardScaler](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.feature.StandardScaler.html) to scale the features to unit variance. It is provided by MLlib for use in linear regression with Stochastic Gradient Descent (SGD). SGD is a popular algorithm for training a wide range of other machine learning models such as regularized regressions or support vector machines (SVM).
> [!TIP]
> We have found the LinearRegressionWithSGD algorithm to be sensitive to feature scaling.
machine-learning Spark Model Consumption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/spark-model-consumption.md
print "Time taken to execute above cell: " + str(timedelta) + " seconds";
Time taken to execute above cell: 19.22 seconds

## Score a Linear Regression Model
-We used [LinearRegressionWithSGD](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.regression.LinearRegressionWithSGD) to train a linear regression model using Stochastic Gradient Descent (SGD) for optimization to predict the amount of tip paid.
+We used [LinearRegressionWithSGD](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.mllib.regression.LinearRegressionWithSGD.html) to train a linear regression model using Stochastic Gradient Descent (SGD) for optimization to predict the amount of tip paid.
The code in this section shows how to load a Linear Regression Model from Azure blob storage, score using scaled variables, and then save the results back to the blob.
mariadb Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-backup.md
Azure Database for MariaDB automatically creates server backups and stores them
## Backups
-Azure Database for MariaDB takes full, differential, and transaction log backups. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can optionally configure it up to 35 days. All backups are encrypted using AES 256-bit encryption.
+Azure Database for MariaDB takes backups of the data files and the transaction log. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can [optionally configure it](howto-restore-server-portal.md#set-backup-configuration) up to 35 days. All backups are encrypted using AES 256-bit encryption.
-These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MariaDB. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
+These backup files are not user-exposed and cannot be exported. These backups can only be used for restore operations in Azure Database for MariaDB. You can use [mysqldump](howto-migrate-dump-restore.md) to copy a database.
-### Backup frequency
+The backup type and frequency depend on the backend storage for the servers.
-#### Servers with up to 4-TB storage
+### Backup type and frequency
-For servers which support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes.
+#### Basic storage servers
-#### Servers with up to 16-TB storage
-In a subset of [Azure regions](concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
+Basic storage is the backend storage supporting [Basic tier servers](concepts-pricing-tiers.md). Backups on Basic storage servers are snapshot-based. A full database snapshot is performed daily. There are no differential backups for Basic storage servers; all snapshot backups are full database backups only.
-Differential snapshot backups occur at least once a day. Differential snapshot backups do not occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the transaction log (binlog in MariaDB) exceeds 50-GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed.
+Transaction log backups occur every five minutes.
-Transaction log backups occur every five minutes.
+#### General purpose storage servers with up to 4-TB storage
+
+General purpose storage is the backend storage supporting [General Purpose](concepts-pricing-tiers.md) and [Memory Optimized tier](concepts-pricing-tiers.md) servers. For servers with general purpose storage up to 4 TB, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. The backups on general purpose storage up to 4 TB are not snapshot-based and consume IO bandwidth at the time of backup. For large databases (> 1 TB) on 4-TB storage, we recommend you consider one of the following:
+
+- Provisioning more IOPs to account for backup IOs OR
+- Alternatively, migrate to general purpose storage that supports up to 16 TB if the underlying storage infrastructure is available in your preferred [Azure regions](./concepts-pricing-tiers.md#storage). There is no additional cost for general purpose storage that supports up to 16 TB. For assistance with migration to 16-TB storage, please open a support ticket from the Azure portal.
+
+#### General purpose storage servers with up to 16-TB storage
+
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16 TB. In other words, storage up to 16 TB is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
+
+Differential snapshot backups occur at least once a day. Differential snapshot backups do not occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the transaction log (binlog in MariaDB) exceeds 50 GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed.
+
+Transaction log backups occur every five minutes.
+
### Backup retention

Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](howto-restore-server-portal.md#set-backup-configuration) or [Azure CLI](howto-restore-server-cli.md#set-backup-configuration), as shown in the sketch after this list. The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is considered the last 7 days. In this scenario, all the backups required to restore the server in the last 7 days are retained. With a backup retention window of seven days:

- Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots, and transaction log backups in the last 8 days.
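A minimal sketch of updating the retention period from the Azure CLI might look like the following (server and resource group names are placeholders; confirm the parameter name against the current `az mariadb server update` reference):

```bash
az mariadb server update --resource-group <resource_group> --name <server_name> --backup-retention 14
```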
-#### Long term retention of backups
-Long term retention of backups beyond 35 days is currently not natively supported by the service yet. You have a option to use mysqldump to take backups and store them for long term retention. Our support team has blogged a [step by step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) to share how you can achieve it.
+#### Long-term retention of backups
+Long-term retention of backups beyond 35 days is not yet natively supported by the service. You have an option to use mysqldump to take backups and store them for long-term retention. Our support team has published a [step-by-step article](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/automate-backups-of-your-azure-database-for-mysql-server-to/ba-p/1791157) that shows how you can achieve it.
### Backup redundancy options
mariadb Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-pricing-tiers.md
You can create an Azure Database for MariaDB server in one of three different pr
| Compute generation | Gen 5 | Gen 5 | Gen 5 |
| vCores | 1, 2 | 2, 4, 8, 16, 32, 64 | 2, 4, 8, 16, 32 |
| Memory per vCore | 2 GB | 5 GB | 10 GB |
-| Storage size | 5 GB to 1 TB | 5 GB to 4 TB | 5 GB to 4 TB |
+| Storage size | 5 GB to 1 TB | 5 GB to 16 TB | 5 GB to 16 TB |
| Database backup retention period | 7 to 35 days | 7 to 35 days | 7 to 35 days |

To choose a pricing tier, use the following table as a starting point.
mariadb Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-server-logs.md
For local server storage, you can list and download slow query logs using the Az
Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information. ## Local server storage log retention
-When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available.
+When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended.
Logs are rotated every 24 hours or 7 GB, whichever comes first.
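As a sketch of how these logs are typically retrieved before they age out, assuming the `az mariadb server-logs` commands referenced above (the server, resource group, and log file names are placeholders):

```bash
# List the slow query log files currently held in local server storage.
az mariadb server-logs list --resource-group myresourcegroup --server-name mydemoserver

# Download a specific log file before the 7-day retention window removes it.
az mariadb server-logs download --resource-group myresourcegroup --server-name mydemoserver \
  --name mysql-slow-mydemoserver-2021040817.log
```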
media-services Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/customize-brands-model-with-api.md
You can use the Video Indexer APIs to create, use, and edit custom Brands models
## Create a Brand
-The [create a brand](https://api-portal.videoindexer.ai/docs/services/operations/operations/Create-Brand) API creates a new custom brand and adds it to the custom Brands model for the specified account.
+The [create a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Brand) API creates a new custom brand and adds it to the custom Brands model for the specified account.
> [!NOTE] > Setting `enabled` (in the body) to true puts the brand in the *Include* list for Video Indexer to detect. Setting `enabled` to false puts the brand in the *Exclude* list, so Video Indexer won't detect it.
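For illustration only, a Create Brand request typically looks like the following; the location, account ID, and access token are placeholders, and the exact route and body fields are documented on the API page linked above.

```bash
# Hypothetical values - replace trial, <accountId>, and <accessToken> with your own.
curl -X POST \
  "https://api.videoindexer.ai/trial/Accounts/<accountId>/Customization/Brands?accessToken=<accessToken>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Contoso",
        "referenceUrl": "https://en.wikipedia.org/wiki/Contoso",
        "description": "Example custom brand",
        "tags": ["technology"],
        "enabled": true
      }'
```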
The response provides information on the brand that you just created following t
## Delete a Brand
-The [delete a brand](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Brand?) API removes a brand from the custom Brands model for the specified account. The account is specified in the `accountId` parameter. Once called successfully, the brand will no longer be in the *Include* or *Exclude* brands lists.
+The [delete a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Brand) API removes a brand from the custom Brands model for the specified account. The account is specified in the `accountId` parameter. Once called successfully, the brand will no longer be in the *Include* or *Exclude* brands lists.
### Response
There's no returned content when the brand is deleted successfully.
## Get a specific Brand
-The [get a brand](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Brand?) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
+The [get a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brand) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
### Response
The response provides information on the brand that you searched (using brand ID
## Update a specific brand
-The [update a brand](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Brand?) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
+The [update a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brand) API lets you update the details of a brand in the custom Brands model for the specified account using the brand ID.
### Response
The response provides the updated information on the brand that you updated foll
## Get all of the Brands
-The [get all brands](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Brands?) API returns all of the brands in the custom Brands model for the specified account regardless of whether the brand is meant to be in the *Include* or *Exclude* brands list.
+The [get all brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns all of the brands in the custom Brands model for the specified account regardless of whether the brand is meant to be in the *Include* or *Exclude* brands list.
### Response
The response provides a list of all of the brands in your account and each of th
## Get Brands model settings
-The [get brands settings](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Video Indexer will only detect brands from the custom Brands model of the specified account.
+The [get brands settings](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Video Indexer will only detect brands from the custom Brands model of the specified account.
### Response
The response shows whether Bing brands are enabled following the format of the e
## Update Brands model settings
-The [update brands](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Brands-Model-Settings?) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Video Indexer will only detect brands from the custom Brands model of the specified account.
+The [update brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brands-Model-Settings) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Video Indexer will only detect brands from the custom Brands model of the specified account.
The `useBuiltIn` flag set to true means that Bing brands are enabled. If `useBuiltin` is false, Bing brands are disabled.
media-services Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/customize-language-model-with-api.md
You can use the Video Indexer APIs to create and edit custom Language models in
## Create a Language model
-The [create a language model](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Create-Language-Model?) API creates a new custom Language model in the specified account. You can upload files for the Language model in this call. Alternatively, you can create the Language model here and upload files for the model later by updating the Language model.
+The [create a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Language-Model) API creates a new custom Language model in the specified account. You can upload files for the Language model in this call. Alternatively, you can create the Language model here and upload files for the model later by updating the Language model.
> [!NOTE] > You must still train the model with its enabled files for the model to learn the contents of its files. Directions on training a language are in the next section.
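As a rough sketch only, creating a Language model is a POST to the account's customization endpoint; the parameter names shown here are assumptions for illustration, so treat the API page linked above as the authoritative contract.

```bash
# Hypothetical values - replace trial, <accountId>, and <accessToken> with your own.
curl -X POST \
  "https://api.videoindexer.ai/trial/Accounts/<accountId>/Customization/Language?modelName=MyAdaptationModel&language=en-US&accessToken=<accessToken>"
```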
The response provides metadata on the newly created Language model along with me
## Train a Language model
-The [train a language model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Train-Language-Model?&pattern=train) API trains a custom Language model in the specified account with the contents in the files that were uploaded to and enabled in the language model.
+The [train a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Train-Language-Model) API trains a custom Language model in the specified account with the contents in the files that were uploaded to and enabled in the language model.
> [!NOTE] > You must first create the Language model and upload its files. You can upload files when creating the Language model or by updating the Language model.
The response provides metadata on the newly trained Language model along with me
} ```
-The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/docs/services/operations/operations/Upload-video?) and [reindexing a video](https://api-portal.videoindexer.ai/docs/services/operations/operations/Re-index-video?) APIs (also known as `linguisticModelId` in Video Indexer upload/reindex APIs).
+The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs (also known as `linguisticModelId` in Video Indexer upload/reindex APIs).
## Delete a Language model
-The [delete a language model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Language-Model?&pattern=delete) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Video Indexer will use its default model to reindex the video.
+The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Video Indexer will use its default model to reindex the video.
### Response
There's no returned content when the Language model is deleted successfully.
## Update a Language model
-The [update a Language model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Language-Model?&pattern=update) API updates a custom Language person model in the specified account.
+The [update a Language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) API updates a custom Language model in the specified account.
> [!NOTE] > You must have already created the Language model. You can use this call to enable or disable all files under the model, update the name of the Language model, and upload files to be added to the language model.
Use the `id` of the files returned in the response to download the contents of t
## Update a file from a Language model
-The [update a file](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Language-Model-file?&pattern=update) allows you to update the name and `enable` state of a file in a custom Language model in the specified account.
+The [update a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model-file) allows you to update the name and `enable` state of a file in a custom Language model in the specified account.
### Response
Use the `id` of the file returned in the response to download the contents of th
## Get a specific Language model
-The [get](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Model?&pattern=get) API returns information on the specified Language model in the specified account such as language and the files that are in the Language model.
+The [get](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model) API returns information on the specified Language model in the specified account such as language and the files that are in the Language model.
### Response
Use the `id` of the file returned in the response to download the contents of th
## Get all the Language models
-The [get all](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Models?&pattern=get) API returns all of the custom Language models in the specified account in a list.
+The [get all](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Models) API returns all of the custom Language models in the specified account in a list.
### Response
The response provides a list of all of the Language models in your account and e
## Delete a file from a Language model
-The [delete](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Language-Model-File?&pattern=delete) API deletes the specified file from the specified Language model in the specified account.
+The [delete](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model-File) API deletes the specified file from the specified Language model in the specified account.
### Response
There's no returned content when the file is deleted from the Language model suc
## Get metadata on a file from a Language model
-The [get metadata of a file](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Model-File-Data?&pattern=get%20language%20model) API returns the contents of and metadata on the specified file from the chosen Language model in your account.
+The [get metadata of a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model-File-Data) API returns the contents of and metadata on the specified file from the chosen Language model in your account.
### Response
The response provides the contents and metadata of the file in JSON format, simi
## Download a file from a Language model
-The [download a file](https://api-portal.videoindexer.ai/docs/services/operations/operations/Download-Language-Model-File-Content?) API downloads a text file containing the contents of the specified file from the specified Language model in the specified account. This text file should match the contents of the text file that was originally uploaded.
+The [download a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Download-Language-Model-File-Content) API downloads a text file containing the contents of the specified file from the specified Language model in the specified account. This text file should match the contents of the text file that was originally uploaded.
### Response
media-services Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/customize-person-model-with-api.md
Each account has a limit of 50 Person models. If you don't need the multiple Per
## Create a new Person model
-To create a new Person model in the specified account, use the [create a person model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Create-Person-Model?) API.
+To create a new Person model in the specified account, use the [create a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Person-Model) API.
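A minimal sketch of that call follows; the location, account ID, model name, and access token are placeholders, and the exact parameters are listed on the API page linked above.

```bash
# Hypothetical values - replace trial, <accountId>, and <accessToken> with your own.
curl -X POST \
  "https://api.videoindexer.ai/trial/Accounts/<accountId>/Customization/PersonModels?name=MyPersonModel&accessToken=<accessToken>"
```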
The response provides the name and generated model ID of the Person model that you just created following the format of the example below.
The response provides the name and generated model ID of the Person model that y
} ```
-You then use the **id** value for the **personModelId** parameter when [uploading a video to index](https://api-portal.videoindexer.ai/docs/services/operations/operations/Upload-video?) or [reindexing a video](https://api-portal.videoindexer.ai/docs/services/operations/operations/Re-index-video?).
+You then use the **id** value for the **personModelId** parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
## Delete a Person model
-To delete a custom Person model from the specified account, use the [delete a person model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Person-Model?) API.
+To delete a custom Person model from the specified account, use the [delete a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Person-Model) API.
Once the Person model is deleted successfully, the index of your current videos that were using the deleted model will remain unchanged until you reindex them. Upon reindexing, the faces that were named in the deleted model won't be recognized by Video Indexer in your current videos that were indexed using that model but the faces will still be detected. Your current videos that were indexed using the deleted model will now use your account's default Person model. If faces from the deleted model are also named in your account's default model, those faces will continue to be recognized in the videos.
There's no returned content when the Person model is deleted successfully.
## Get all Person models
-To get all Person models in the specified account, use the [get a person model](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Person-Models?) API.
+To get all Person models in the specified account, use the [get a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Person-Models) API.
The response provides a list of all of the Person models in your account (including the default Person model in the specified account) and each of their names and IDs following the format of the example below.
The response provides a list of all of the Person models in your account (includ
] ```
-You can choose which model you want to use for a video by using the `id` value of the Person model for the `personModelId` parameter when [uploading a video to index](https://api-portal.videoindexer.ai/docs/services/operations/operations/Upload-video?) or [reindexing a video](https://api-portal.videoindexer.ai/docs/services/operations/operations/Re-index-video?).
+You can choose which model you want to use for a video by using the `id` value of the Person model for the `personModelId` parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
## Update a face
The system then recognizes the occurrences of the same face in your other curren
You can update a face that Video Indexer recognized as a celebrity with a new name. The new name that you give will take precedence over the built-in celebrity recognition.
-To update the face, use the [update a video face](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Video-Face?) API.
+To update the face, use the [update a video face](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Face) API.
Names are unique for Person models, so if you give two different faces in the same Person model the same `name` parameter value, Video Indexer views the faces as the same person and converges them once you reindex your video.
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-certificate-rotation.md
Previously updated : 01/18/2021 Last updated : 04/08/2021 -- # Understanding the changes in the Root CA change for Azure Database for MySQL Single Server Azure Database for MySQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server. > [!NOTE] > This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)
->
+>
> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. >
-## Why root certificate update is required?
+#### Why is a root certificate update required?
-Azure database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
+Azure Database for MySQL users can only use the predefined certificate to connect to their MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
-As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
+Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+The new certificate is rolled out and in effect as of February 15, 2021 (02/15/2021).
-## What change was performed on February 15, 2021 (02/15/2021)?
+#### What change was performed on February 15, 2021 (02/15/2021)?
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers do not need to change anything and there is no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
+On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers don't need to change anything and there's no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
-## Do I need to make any changes on my client to maintain connectivity?
+#### Do I need to make any changes on my client to maintain connectivity?
-There is no change required on client side. if you followed our previous recommendation below, you will still be able to continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **We recommend to not remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice to maintain connectivity.**
+No change is required on the client side. If you followed our previous recommendation below, you can continue to connect as long as the **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
-### Previous recommendation
+###### Previous recommendation
-To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation one of the allowed values will be used. Refer to the following steps:
+To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation, one of the allowed values will be used. Refer to the following steps:
-* Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
+1. Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
- * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
- * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
+ * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
+ * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
-* Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates are included.
+2. Generate a combined CA certificate store that includes both the **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates.
- * For Java (MySQL Connector/J) users, execute:
+ * For Java (MySQL Connector/J) users, execute:
- ```console
- keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
- ```
+ ```console
+ keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
+ ```
- ```console
- keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
- ```
+ ```console
+ keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
+ ```
- Then replace the original keystore file with the new generated one:
+ Then replace the original keystore file with the new generated one:
- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
- * System.setProperty("javax.net.ssl.trustStorePassword","password");
+ * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+ * System.setProperty("javax.net.ssl.trustStorePassword","password");
- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
+ * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
- :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .net cert diagram":::
+ :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .NET cert diagram":::
- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
+ * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:
+ * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format (a sample merge command is shown after these steps):
``` --BEGIN CERTIFICATE--
To avoid your application's availability being interrupted due to certificates
--END CERTIFICATE-- ```
-* Replace the original root CA pem file with the combined root CA file and restart your application/client.
-* In future, after the new certificate deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
+3. Replace the original root CA pem file with the combined root CA file and restart your application/client.
-> [!NOTE]
-> Please do not drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
+ In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
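For the merge step referenced above, a simple concatenation of the two downloaded PEM files is usually sufficient; the input file names match the download links in step 1, and the output file name is only an example.

```bash
# Concatenate the current and the new root CA into a single combined PEM file.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > CombinedCACert.pem
```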
-## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
+> [!NOTE]
+> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
-We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
+#### Why wasn't the BaltimoreCyberTrustRoot certificate replaced with DigiCertGlobalRootG2 during this change on February 15, 2021?
-Our recommendations to users is, use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+We evaluated the customer readiness for this change and realized that many customers were looking for extra lead time to manage it. To provide that time, we decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, giving customers and end users sufficient lead time to prepare.
-## What if we removed the BaltimoreCyberTrustRoot certificate?
+Our recommendation to users is to follow the aforementioned steps to create a combined certificate and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
-You will start to connectivity errors while connecting to your Azure Database for MySQL server. You will need to [configure SSL](howto-configure-ssl.md) with [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
+#### What if we removed the BaltimoreCyberTrustRoot certificate?
+You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](howto-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
## Frequently asked questions
-### 1. If I'm not using SSL/TLS, do I still need to update the root CA?
+#### If I'm not using SSL/TLS, do I still need to update the root CA?
- No actions required if you're not using SSL/TLS.
+ No actions are required if you aren't using SSL/TLS.
-### 2. If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
+#### If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
No, you don't need to restart the database server to start using the new certificate. This root certificate is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
-### 3. How do I know if I'm using SSL/TLS with root certificate verification?
+#### How do I know if I'm using SSL/TLS with root certificate verification?
You can identify whether your connections verify the root certificate by reviewing your connection string. -- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.-- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.-- If your connection string doesn't specify sslmode, you don't need to update certificates.
+* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.
+* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.
+* If your connection string doesn't specify sslmode, you don't need to update certificates.
If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
-### 4. What is the impact if using App Service with Azure Database for MySQL?
+#### What is the impact of using App Service with Azure Database for MySQL?
-For Azure app services connecting to Azure Database for MySQL, there are two possible scenarios and depending on how on you're using SSL with your application.
+For Azure app services connecting to Azure Database for MySQL, there are two possible scenarios depending on how you're using SSL with your application.
-* This new certificate has been added to App Service at platform level. If you're using the SSL certificates included on App Service platform in your application, then no action is needed. This is the most common scenario.
+* This new certificate has been added to App Service at platform level. If you're using the SSL certificates included on App Service platform in your application, then no action is needed. This is the most common scenario.
* If you're explicitly including the path to the SSL cert file in your code, you need to download the new cert, produce a combined certificate as mentioned above, and use that certificate file. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). This is an uncommon scenario, but we have seen some users using this.
-### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
+#### What is the impact of using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
If you're trying to connect to the Azure Database for MySQL using Azure Kubernetes Services (AKS), it's similar to access from a dedicated customers host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
-### 6. What is the impact if using Azure Data Factory to connect to Azure Database for MySQL?
+#### What is the impact of using Azure Data Factory to connect to Azure Database for MySQL?
-For a connector using Azure Integration Runtime, the connector leverage certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible to the newly applied certificates and therefore no action is needed.
+For a connector using Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed.
For a connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
-### 7. Do I need to plan a database server maintenance downtime for this change?
+#### Do I need to plan a database server maintenance downtime for this change?
-No. Since the change here is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
+No. Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
-### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
+#### If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
-### 9. How often does Microsoft update their certificates or what is the expiry policy?
+#### How often does Microsoft update their certificates or what is the expiry policy?
The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to the CA's support for them. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before then. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
-### 10. If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
+#### If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
Since this update is a client-side change, any clients that read data from the replica server also need the change applied.
-### 11. If I'm using Data-in replication, do I need to perform any action?
+#### If I'm using Data-in replication, do I need to perform any action?
If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
If you're using [Data-in replication](concepts-data-in-replication.md) to connec
Master_SSL_Key : ~\azure_mysqlclient_key.pem ```
- If you do see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and create a combined cert file.
+ If you see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and create a combined cert file.
-* If the data-replication is between two Azure Database for MySQL, then you'll need to reset the replica by executing
-**CALL mysql.az_replication_change_master** and provide the new dual root certificate as last parameter [master_ssl_ca](howto-data-in-replication.md#4-link-source-and-replica-servers-to-start-data-in-replication)
+* If the data-replication is between two Azure Database for MySQL servers, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and provide the new dual root certificate as the last parameter [master_ssl_ca](howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication).
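For illustration, such a reset might look like the sketch below. Every value is a placeholder, and the seven-parameter form of the stored procedure shown here is an assumption based on the linked how-to, so confirm the exact signature against that article before running it.

```bash
# All values are placeholders; run from a client that can reach the replica server.
mysql -h replica.mysql.database.azure.com -u syncadmin@replica -p <<'SQL'
CALL mysql.az_replication_change_master(
  'source.example.com',   -- source server host name
  'syncuser',             -- replication user created on the source
  '<password>',           -- replication user password
  3306,                   -- port
  'mysql-bin.000002',     -- binary log file name from SHOW MASTER STATUS
  120,                    -- binary log position
  '-----BEGIN CERTIFICATE-----
   ...contents of the combined Baltimore + DigiCertGlobalRootG2 PEM file...
   -----END CERTIFICATE-----');  -- new dual root certificate (master_ssl_ca)
SQL
```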
-### 12. Do we have server-side query to verify if SSL is being used?
+#### Is there a server-side query to determine whether SSL is being used?
To verify whether you're using an SSL connection to connect to the server, refer to [SSL verification](howto-configure-ssl.md#step-4-verify-the-ssl-connection).
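For a quick client-side check (the server and user names below are placeholders), a non-empty `Ssl_cipher` value indicates that the session is encrypted:

```bash
# Placeholder names; a non-empty Ssl_cipher value means the connection uses SSL.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
  -e "SHOW STATUS LIKE 'Ssl_cipher';"
```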
-### 13. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
+#### Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
-### 14. What if I have further questions?
+#### What if I have further questions?
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
+For questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-data-in-replication.md
Title: Data-in replication - Azure Database for MySQL
-description: Learn about using data-in replication to synchronize from an external server into the Azure Database for MySQL service.
+ Title: Data-in Replication - Azure Database for MySQL
+description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service.
Previously updated : 8/7/2020 Last updated : 04/08/2021 # Replicate data into Azure Database for MySQL
-Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or gtid based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
## When to use Data-in Replication
-The main scenarios to consider using Data-in Replication are:
+
+The main scenarios for using Data-in Replication are:
- **Hybrid Data Synchronization:** With Data-in Replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in Replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database services hosted in those clouds.
-
+For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS).

## Limitations and considerations

### Data not replicated
-The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. Changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server side. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
+
+The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
### Filtering

To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](howto-server-parameters.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
-Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
+To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table).
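As a sketch, setting the parameter with the Azure CLI looks like the following; the resource group, server name, and table pattern are placeholders, and the value syntax follows the MySQL documentation linked above.

```bash
# Skip replication for every table in the "analytics" schema (illustrative pattern).
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name replicate_wild_ignore_table \
  --value "analytics.%"
```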
## Supported in General Purpose or Memory Optimized tier only
-Data-in replication is only supported in General Purpose and Memory Optimized pricing tiers.
+
+Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
### Requirements-- The source server version must be at least MySQL version 5.6. +
+- The source server version must be at least MySQL version 5.6.
- The source and replica server versions must be the same. For example, both must be MySQL version 5.6 or both must be MySQL version 5.7. - Each table must have a primary key.-- Source server should use the MySQL InnoDB engine.
+- The source server should use the MySQL InnoDB engine.
- User must have permissions to configure binary logging and create new users on the source server.-- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./howto-data-in-replication.md#4-link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.-- Ensure the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).-- Ensure the machine hosting the source server allows both inbound and outbound traffic on port 3306.-- Ensure the the source server has a **public IP address**, the DNS is publicly accessible, or has a fully qualified domain name (FQDN).-
+- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` or `mysql.az_replication_change_master_with_gtid` stored procedure. Refer to the following [examples](./howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.
+- Ensure that the source server's IP address has been added to the Azure Database for MySQL replica server's firewall rules. Update firewall rules using the [Azure portal](./howto-manage-firewall-using-portal.md) or [Azure CLI](./howto-manage-firewall-using-cli.md).
+- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.
+- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
## Next steps

- Learn how to [set up data-in replication](howto-data-in-replication.md)
- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)
- Learn how to [migrate data with minimal downtime using DMS](howto-migrate-online.md)
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-server-logs.md
For local server storage, you can list and download slow query logs using the Az
Azure Monitor Diagnostic Logs allows you to pipe slow query logs to Azure Monitor Logs (Log Analytics), Azure Storage, or Event Hubs. See [below](concepts-server-logs.md#diagnostic-logs) for more information. ## Local server storage log retention
-When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available.
+When logging to the server's local storage, logs are available for up to seven days from their creation. If the total size of the available logs exceeds 7 GB, then the oldest files are deleted until space is available. The 7 GB storage limit for the server logs is available free of cost and cannot be extended.
Logs are rotated every 24 hours or 7 GB, whichever comes first.
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
## Next Steps - [How to configure slow query logs from the Azure portal](howto-configure-server-logs-in-portal.md)-- [How to configure slow query logs from the Azure CLI](howto-configure-server-logs-in-cli.md)
+- [How to configure slow query logs from the Azure CLI](howto-configure-server-logs-in-cli.md)
mysql Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-data-in-replication.md
Title: Configure data-in replication - Azure Database for MySQL
+ Title: Configure Data-in Replication - Azure Database for MySQL
description: This article describes how to set up Data-in Replication for Azure Database for MySQL. Previously updated : 01/13/2021 Last updated : 04/08/2021 # How to configure Azure Database for MySQL Data-in Replication
This article describes how to set up [Data-in Replication](concepts-data-in-repl
> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. >
-To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based or gtid-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
Review the [limitations and requirements](concepts-data-in-replication.md#limitations-and-considerations) of Data-in replication before performing the steps in this article.
-## 1. Create a Azure Database for MySQL Single Server to be used as replica
+## Create an Azure Database for MySQL Single Server instance to use as a replica
-1. Create a new Azure Database for MySQL Single Server (ex. "replica.mysql.database.azure.com"). Refer to [Create an Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) for server creation. This server is the "replica" server in Data-in Replication.
+1. Create a new instance of Azure Database for MySQL Single Server (ex. "replica.mysql.database.azure.com"). Refer to [Create an Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) for server creation. This server is the "replica" server for Data-in Replication.
> [!IMPORTANT] > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers as data-in replication is only supported in these tiers.
-2. Create same user accounts and corresponding privileges
+2. Create the same user accounts and corresponding privileges.
User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL server. 3. Add the source server's IP address to the replica's firewall rules. Update firewall rules using the [Azure portal](howto-manage-firewall-using-portal.md) or [Azure CLI](howto-manage-firewall-using-cli.md).
-
-4. **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html) from source server to Azure Database for MySQL replica server, you will need to enable following server parameters on the Azure Database for MySQL server as shown in the portal image below
- :::image type="content" source="./media/howto-data-in-replication/enable-gtid.png" alt-text="Enable gtid on Azure Database for MySQL server":::
+4. **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html) from the source server to the Azure Database for MySQL replica server, you'll need to enable the following server parameters on the Azure Database for MySQL server as shown in the portal image below:
-## 2. Configure the source MySQL server
+ :::image type="content" source="./media/howto-data-in-replication/enable-gtid.png" alt-text="Enable GTID on Azure Database for MySQL server":::
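As a hedged sketch of setting the same parameters with the Azure CLI instead of the portal (the resource group and server names are placeholders, and MySQL may require changing `gtid_mode` in stages, so verify the exact values and ordering in the GTID documentation linked above before applying them):

```bash
# Illustrative names; enable GTID-related parameters on the Azure Database for MySQL replica.
az mysql server configuration set --resource-group myresourcegroup --server-name replicademo \
  --name enforce_gtid_consistency --value ON

# Note: MySQL may require stepping gtid_mode through OFF_PERMISSIVE and ON_PERMISSIVE before ON.
az mysql server configuration set --resource-group myresourcegroup --server-name replicademo \
  --name gtid_mode --value ON
```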
-The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in Replication. This server is the "source" in Data-in replication.
+## Configure the source MySQL server
+
+The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or in a database service hosted by another cloud provider for Data-in Replication. This server is the "source" for Data-in Replication.
1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
-2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, the DNS is publicly accessible, or has a fully qualified domain name (FQDN).
+2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, a publicly accessible DNS name, or a fully qualified domain name (FQDN).
Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../cloud-shell/overview.md) available in the Azure portal.
- If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the below command to determine the IP address of your MySQL server.
+ If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the command below to determine the IP address of your MySQL server.
- 1. Sign in to your Azure Database for MySQL using a tool such as the MySQL command line.
+ 1. Sign in to your Azure Database for MySQL server using a tool such as the MySQL command line.
- 2. Execute the below query.
+ 2. Execute the following query.
      ```bash
      mysql> SELECT @@global.redirect_server_host;
The following steps prepare and configure the MySQL server hosted on-premises, i
      ```

   3. Exit from the MySQL command line.
- 4. Execute the following command in the ping utility to get the IP address.
+ 4. To get the IP address, execute the following command in the ping utility:
      ```bash
      ping <output of step 2b>
The following steps prepare and configure the MySQL server hosted on-premises, i
   5. Configure your source server's firewall rules to include the IP address returned by the previous step on port 3306.
- > [!NOTE]
- > This IP address may change due to maintenance/deployment operations. This method of connectivity is only for customers who cannot afford to allow all IP address on 3306 port.
+   > [!NOTE]
+   > This IP address may change due to maintenance or deployment operations. This method of connectivity is only for customers who cannot afford to allow all IP addresses on port 3306.
-3. Turn on binary logging
+3. Turn on binary logging.
- Check to see if binary logging has been enabled on the source by running the following command:
+ Check to see if binary logging has been enabled on the source by running the following command:
   ```sql
   SHOW VARIABLES LIKE 'log_bin';
   ```

   If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
-
- If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below:
+
+ If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below:
   1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
   2. Open the configuration file to edit it and locate the **mysqld** section in the file.
- 3. In the mysqld section, add following line
-
+   3. In the mysqld section, add the following line:
+
+      ```bash
+      log-bin=mysql-bin.log
+      ```
+
+   4. Restart the MySQL source server for the changes to take effect.
- 5. Once the server is restarted, verify that binary logging is enabled by running the same query as before:
-
+   5. After the server is restarted, verify that binary logging is enabled by running the same query as before:
+
+      ```sql
+      SHOW VARIABLES LIKE 'log_bin';
+      ```
-
-4. Source server settings
- Data-in Replication requires parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
+4. Configure the source server settings.
+
+ Data-in Replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
   ```sql
   SET GLOBAL lower_case_table_names = 1;
   ```
- **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you will need to check if gtid is enabled on the source server. You can execute following command against your source MySQL server to see if gtid mode is ON.
-
+
+   **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to check if GTID is enabled on the source server. You can execute the following command against your source MySQL server to see if gtid_mode is ON:
+
+   ```sql
+   show variables like 'gtid_mode';
+   ```
- >[!IMPORTANT]
- > All servers have gtid_mode set to the default value OFF. You do not need to enable gtid on source MySQL server specifically to setup data-in replication. If gtid is already enabled on source server, you can optionally use gtid-based replication to setup data-in replication too with Azure Database for MySQL Single Server. You can use file-based replication to setup data-in replication for all servers irrespective of gtid mode configuration on the source server.
+ >[!IMPORTANT]
+   > All servers have gtid_mode set to the default value OFF. You don't need to enable GTID on the source MySQL server specifically to set up Data-in Replication. If GTID is already enabled on the source server, you can optionally use GTID-based replication to set up Data-in Replication with Azure Database for MySQL Single Server. You can use file-based replication to set up Data-in Replication for all servers regardless of the gtid_mode configuration on the source server.
-5. Create a new replication role and set up permission
+5. Create a new replication role and set up permission.
- Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool like MySQL Workbench. Consider whether you plan on replicating with SSL as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
+ Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
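
   A minimal sketch of such commands (the host, password, and any SSL requirement are placeholders; adjust them to your environment):

   ```bash
   # Create the replication user on the source server and grant it the REPLICATION SLAVE privilege.
   mysql -h source.companya.com -u root -p -e "CREATE USER 'syncuser'@'%' IDENTIFIED BY 'YourStrongP@ssword!';"
   mysql -h source.companya.com -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';"
   ```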
The following steps prepare and configure the MySQL server hosted on-premises, i
:::image type="content" source="./media/howto-data-in-replication/replicationslave.png" alt-text="Replication Slave":::
-6. Set the source server to read-only mode
+6. Set the source server to read-only mode.
Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary.
The following steps prepare and configure the MySQL server hosted on-premises, i
   SET GLOBAL read_only = ON;
   ```
-7. Get binary log file name and offset
+7. Get binary log file name and offset.
   Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.

   ```sql
   show master status;
   ```
- The results should appear similar to the following. Make sure to note the binary file name, as it will be used in later steps.
+ The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
:::image type="content" source="./media/howto-data-in-replication/masterstatus.png" alt-text="Master Status Results":::
-
-## 3. Dump and restore source server
+## Dump and restore the source server
1. Determine which databases and tables you want to replicate into Azure Database for MySQL and perform the dump from the source server.
- You can use mysqldump to dump databases from your master. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It's unnecessary to dump MySQL library and test library.
+ You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
+
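+   A minimal sketch of such a dump (the host, admin user, and database names are placeholders):
+
+   ```bash
+   # Dump only the schemas you plan to replicate; run this while the source is in read-only mode.
+   mysqldump -h source.companya.com -u adminuser -p --databases db1 db2 > dump.sql
+   ```
+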
+2. **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to identify the GTID of the last transaction executed on the primary server. You can use the following command to note it:
-2. **Optional** - If you wish to use [gtid-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you will need to identify the gtid of the last transaction executed at the master. You can use the following command to note the gtid of the last transaction executed on the master server.
   ```sql
   show global variables like 'gtid_executed';
   ```

3. Set the source server to read/write mode.

   After the database has been dumped, change the source MySQL server back to read/write mode.
The following steps prepare and configure the MySQL server hosted on-premises, i
   UNLOCK TABLES;
   ```
-3. Restore dump file to new server.
+4. Restore dump file to new server.
Restore the dump file to the server created in the Azure Database for MySQL service. Refer to [Dump & Restore](concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL server from the virtual machine.
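
   A minimal sketch of the restore (the server name and admin user are placeholders; Single Server expects the `user@servername` login format):

   ```bash
   # Replay the dump against the Azure Database for MySQL replica server.
   mysql -h replica.mysql.database.azure.com -u adminuser@replica -p < dump.sql
   ```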
-
-4. **Optional** - Note the gtid of the restored server on Azure Database for MySQL to ensure it is same as master. You can use the following command to note the gtid of the gtid purged value on the Azure Database for MySQL replica server. The value of gtid_purged should be same as gtid_executed on master noted in step 2 for gtid-based replication to work.
+
+5. **Optional** - Note the GTID state of the restored server on Azure Database for MySQL to ensure it matches the primary. You can use the following command to check the gtid_purged value on the Azure Database for MySQL replica server. For GTID-based replication to work, the value of gtid_purged should be the same as the gtid_executed value noted on the primary in step 2.
   ```sql
   show global variables like 'gtid_purged';
   ```
-## 4. Link source and replica servers to start Data-in Replication
+## Link source and replica servers to start Data-in Replication
-1. Set source server.
+1. Set the source server.
All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
The following steps prepare and configure the MySQL server hosted on-premises, i
   ```sql
   CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
   ```

   **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), you'll need to use the following command to link the two servers:

   ```sql
   call mysql.az_replication_change_master_with_gtid('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_ssl_ca>');
   ```
The following steps prepare and configure the MySQL server hosted on-premises, i
   CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
   ```
-2. Filtering.
+2. Set up filtering.
If you want to skip replicating some tables from your master, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list.
The following steps prepare and configure the MySQL server hosted on-premises, i
If `Slave_IO_Running` and `Slave_SQL_Running` are both "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates.
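
A minimal sketch of checking this from the replica (the server name and admin user are placeholders):

```bash
# Inspect replication status on the Azure Database for MySQL replica.
mysql -h replica.mysql.database.azure.com -u adminuser@replica -p -e "SHOW SLAVE STATUS\G"
```
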
-## Other useful stored procedures for data-in replication operations
+## Other useful stored procedures for Data-in Replication operations
### Stop replication
To skip a replication error and allow replication to continue, use the following
   ```sql
   CALL mysql.az_replication_skip_counter;
   ```

   **Optional** - If you wish to use [GTID-based replication](https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-gtids-concepts.html), use the following stored procedure to skip a transaction:

   ```sql
   call mysql.az_replication_skip_gtid_transaction('<transaction_gtid>');
   ```
-The procedure can skip the transaction for the given gtid. If the gtid format is not right or the gtid transaction has already been executed, the procedure will fail to execute.The gtid for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides an utility [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html) to parse binary logs and display their contents in text format which can be used to identify gtid of the transaction.
-If you want to skip the next transaction after the current replication position, use the following command to identify the gtid of next transaction as shown below.
+The procedure can skip the transaction for the given GTID. If the GTID format is incorrect or the GTID transaction has already been executed, the procedure will fail to execute. The GTID for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides a utility, [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html), to parse binary logs and display their contents in text format, which can be used to identify the GTID of the transaction.
+
+To skip the next transaction after the current replication position, use the following command to identify the GTID of the next transaction, as shown below.
   ```sql
   SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]
   ```

   :::image type="content" source="./media/howto-data-in-replication/show-binary-log.png" alt-text="Show binary log results":::

## Next steps
peering-service Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/peering-service/security-baseline.md
+
+ Title: Azure security baseline for Microsoft Azure Peering Service
+description: The Microsoft Azure Peering Service security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
+++ Last updated : 04/09/2021+++
+# Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
+++
+# Azure security baseline for Microsoft Azure Peering Service
+
+This security baseline applies guidance from the [Azure Security Benchmark version 2.0](../security/benchmarks/overview.md) to Microsoft Azure Peering Service. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the **security controls** defined by the Azure Security Benchmark and the related guidance applicable to Microsoft Azure Peering Service.
+
+> [!NOTE]
+> **Controls** not applicable to Microsoft Azure Peering Service, or for which the responsibility is Microsoft's, have been excluded. To see how Microsoft Azure Peering Service completely maps to the Azure Security Benchmark, see the **[full Microsoft Azure Peering Service security baseline mapping file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines)**.
+
+## Network Security
+
+*For more information, see the [Azure Security Benchmark: Network Security](/azure/security/benchmarks/security-controls-v2-network-security).*
+
+### NS-5: Deploy intrusion detection/intrusion prevention systems (IDS/IPS)
+
+**Guidance**: Peering Service does not offer native intrusion detection or prevention services. Customers should follow security best practices, such as using threat intelligence-based filtering from Azure Firewall to alert and block traffic from and to known-malicious IP addresses and domains, as applicable to their business requirements. These IP addresses and domains are sourced from the Microsoft Threat-Intelligence feed.
+
+In cases where payload inspection is required, deploy a third-party intrusion detection or prevention system (IDS/IPS) with payload inspection capabilities from the Azure Marketplace.
+
+Alternatively, use a host-based endpoint detection and response (EDR) solution in conjunction with (or instead of) a network-based intrusion detection or prevention system.
+
+If you have a regulatory requirement to utilize an intrusion detection or intrusion prevention system, ensure that it is always tuned to consume or provide high-quality alerts.
+
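+As a minimal sketch (assuming the azure-firewall CLI extension is installed and using placeholder resource names), threat intelligence-based filtering can be enabled on an existing firewall as follows:
+
+```bash
+# Alert on (or switch to Deny for) traffic to and from known-malicious IPs and domains.
+az network firewall update \
+  --resource-group myresourcegroup \
+  --name myfirewall \
+  --threat-intel-mode Alert
+```
+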
+- [How to deploy Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)
+
+- [Azure Marketplace includes third party IDS capabilities](https://azuremarketplace.microsoft.com/marketplace?search=IDS)
+
+- [Microsoft Defender ATP EDR capability](/windows/security/threat-protection/microsoft-defender-atp/overview-endpoint-detection-response)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Identity Management
+
+*For more information, see the [Azure Security Benchmark: Identity Management](/azure/security/benchmarks/security-controls-v2-identity-management).*
+
+### IM-1: Standardize Azure Active Directory as the central identity and authentication system
+
+**Guidance**: Maintain an inventory of the user accounts with administrative access to the control plane (such as the Azure portal) of your Microsoft Azure Peering Service resources.
+
+Use the Identity and Access control (IAM) pane in the Azure portal for your subscription to configure Azure role-based access control (Azure RBAC). The roles are applied to users, groups, service principals, and managed identities in Azure Active Directory (Azure AD).
+
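+As a minimal sketch (the user, role, and subscription ID below are placeholders), role assignments can also be created and inventoried with the Azure CLI:
+
+```bash
+# Grant a built-in role at subscription scope.
+az role assignment create \
+  --assignee "alice@contoso.com" \
+  --role "Reader" \
+  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
+
+# List current assignments to keep an inventory of control-plane access.
+az role assignment list --all --output table
+```
+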
+- [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-2: Manage application identities securely and automatically
+
+**Guidance**: Managed identities are not supported for Peering Service. For services that do not support managed identities like Peering Service, use Azure Active Directory (Azure AD) to create a service principal with restricted permissions at the resource level instead.
+
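+A minimal sketch (hypothetical name and placeholder scope) of creating such a narrowly scoped service principal with the Azure CLI:
+
+```bash
+# Create a service principal limited to Reader on a single resource group.
+az ad sp create-for-rbac \
+  --name "peering-service-automation" \
+  --role "Reader" \
+  --scopes "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup"
+```
+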
+- [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md)
+
+- [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md)
+
+- [Azure service principal](/powershell/azure/create-azure-service-principal-azureps)
+
+- [Create a service principal with certificates](../active-directory/develop/howto-authenticate-service-principal-powershell.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-3: Use Azure AD single sign-on (SSO) for application access
+
+**Guidance**: Microsoft Azure Peering Service uses Azure Active Directory (Azure AD) to provide identity and access management to Azure resources. This includes enterprise identities such as employees, as well as external identities such as partners, vendors, and suppliers.
+
+Use single sign-on to manage and secure access to your organization's data and resources on-premises and in the cloud. Connect all your users, applications, and devices to Azure AD for seamless, secure access and greater visibility and control.
+
+- [Understand Application SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-4: Use strong authentication controls for all Azure Active Directory based access
+
+**Guidance**: Enable multifactor authentication with Azure Active Directory (Azure AD) and follow Identity and Access Management recommendations from Azure Security Center.
+
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+
+- [How to monitor identity and access within Azure Security Center](../security-center/security-center-identity-access.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-5: Monitor and alert on account anomalies
+
+**Guidance**: Use Privileged Identity Management (PIM) with Azure Active Directory (Azure AD) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment. In addition, use Azure AD risk detections to view alerts and reports on risky user behavior.
+
+- [How to deploy Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md)
+
+- [Understand Azure AD risk detections](../active-directory/identity-protection/overview-identity-protection.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-6: Restrict Azure resource access based on conditions
+
+**Guidance**: Use conditional access with Azure Active Directory (Azure AD) for more granular access control based on user-defined conditions, such as requiring user logins from certain IP ranges to use multifactor authentication. Granular authentication session management can also be applied through Azure AD Conditional Access policies for different use cases.
+
+- [Azure Conditional Access overview](../active-directory/conditional-access/overview.md)
+
+- [Common Conditional Access policies](../active-directory/conditional-access/concept-conditional-access-policy-common.md)
+
+- [Configure authentication session management with Conditional Access](../active-directory/conditional-access/howto-conditional-access-session-lifetime.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IM-7: Eliminate unintended credential exposure
+
+**Guidance**: Implement Credential Scanner within Azure DevOps to identify credentials used within the code. Credential Scanner also encourages moving discovered credentials to more secure locations such as Azure Key Vault.
+
+For GitHub, you can use the native secret scanning feature to identify credentials or other forms of secrets within the code.
+
+- [How to setup Credential Scanner](https://secdevtools.azurewebsites.net/helpcredscan.html)
+
+- [GitHub secret scanning](https://docs.github.com/github/administering-a-repository/about-secret-scanning)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Privileged Access
+
+*For more information, see the [Azure Security Benchmark: Privileged Access](/azure/security/benchmarks/security-controls-v2-privileged-access).*
+
+### PA-1: Protect and limit highly privileged users
+
+**Guidance**: Limit the number of highly privileged user accounts, and protect these accounts at an elevated level.
+
+The most critical built-in roles in Azure Active Directory (Azure AD) are Global Administrator and the Privileged Role Administrator, because users assigned to these two roles can delegate administrator roles.
+
+With these privileges, users can directly or indirectly read and modify every resource in your Azure environment:
+
+- Global Administrator / Company Administrator: Users with this role have access to all administrative features in Azure AD, as well as services that use Azure AD identities.
+
+- Privileged Role Administrator: Users with this role can manage role assignments in Azure AD, as well as within Azure AD Privileged Identity Management (PIM). In addition, this role allows management of all aspects of PIM and administrative units.
+
+There may be other critical roles that need to be governed if you use custom roles with certain privileged permissions assigned to them. Apply similar controls to the administrator account of critical business assets.
+
+Enable just-in-time (JIT) privileged access to Azure resources and Azure AD using Azure AD Privileged Identity Management (PIM). JIT grants temporary permissions to perform privileged tasks only when users need it. PIM can also generate security alerts when there is suspicious or unsafe activity in your Azure AD organization.
+
+- [Administrator role permissions in Azure AD](/azure/active-directory/users-groups-roles/directory-assign-admin-roles)
+
+- [Use Azure Privileged Identity Management security alerts](../active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md)
+
+- [Securing privileged access for hybrid and cloud deployments in Azure AD](/azure/active-directory/users-groups-roles/directory-admin-roles-secure)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-2: Restrict administrative access to business-critical systems
+
+**Guidance**: Isolate access to business-critical systems by restricting accounts which have been granted privileged access to the subscriptions and management groups they belong to.
+
+Restrict access to the management, identity, and security systems that have administrative access to your business critical assets, such as Active Directory Domain Controllers (DC), security tools, and system management tools with agents installed on business critical systems. Attackers who compromise these management and security systems can immediately weaponize them to compromise business critical assets.
+
+All types of access controls should be aligned to your enterprise segmentation strategy to ensure consistent access control.
+
+Ensure that you assign separate privileged accounts that are distinct from the standard user accounts used for email, browsing, and productivity tasks.
+
+- [Azure Components and Reference model](/security/compass/microsoft-security-compass-introduction#azure-components-and-reference-model-2151)
+
+- [Management Group Access](https://docs.microsoft.com/azure/governance/management-groups/overview#management-group-access)
+
+- [Azure subscription administrators](../cost-management-billing/manage/add-change-subscription-administrator.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-3: Review and reconcile user access regularly
+
+**Guidance**: Azure Active Directory (Azure AD) provides logs to help you discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
+
+- [Find Azure AD activity reports](../active-directory/reports-monitoring/howto-find-activity-reports.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-4: Set up emergency access in Azure AD
+
+**Guidance**: Set up an emergency account for access, when the regular administrative accounts are unavailable, to prevent being accidentally locked out of your Azure Active Directory (Azure AD) organization. Emergency access accounts may contain higher-level privileges and should not be assigned to specific individuals. Limit emergency access accounts to emergency or 'break glass' scenarios.
+
+Ensure that the credentials (such as password, certificate, or smart card) for emergency access accounts are kept secure and known only to individuals who are authorized to use them in emergency situations.
+
+- [Manage emergency access accounts in Azure AD](/azure/active-directory/users-groups-roles/directory-emergency-access)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-5: Automate entitlement management
+
+**Guidance**: Use Azure Active Directory (Azure AD) entitlement management features to automate access request workflows, including access assignments, reviews, and expiration. Dual or multi-stage approval is also supported.
+
+- [What are Azure Active Directory (Azure AD) access reviews](../active-directory/governance/access-reviews-overview.md)
+
+- [What is Azure Active Directory (Azure AD) entitlement management](../active-directory/governance/entitlement-management-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-6: Use privileged access workstations
+
+**Guidance**: Use a Privileged Access Workstation (PAW), with multifactor authentication from Azure Active Directory (Azure AD) enabled, to log into and configure your Microsoft Azure Peering Service resources.
+
+- [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+
+- [Planning a cloud-based Azure AD multifactor authentication deployment](../active-directory/authentication/howto-mfa-getstarted.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PA-7: Follow just enough administration (least privilege principle)
+
+**Guidance**: Azure role-based access control (Azure RBAC) allows you to manage Azure resource access through role assignments. You can assign these roles to users, groups, service principals, and managed identities. There are pre-defined built-in roles for certain resources, and these roles can be inventoried or queried through tools such as Azure CLI, Azure PowerShell, and the Azure portal.
+
+The privileges you assign to resources through Azure RBAC should always be limited to what is required by the roles. Limited privileges complement the just in time (JIT) approach of Azure Active Directory (Azure AD) Privileged Identity Management (PIM), and those privileges should be reviewed periodically.
+
+Use built-in roles to allocate permissions, and create custom roles only when required.
+
+- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)
+
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
+
+- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Asset Management
+
+*For more information, see the [Azure Security Benchmark: Asset Management](/azure/security/benchmarks/security-controls-v2-asset-management).*
+
+### AM-1: Ensure security team has visibility into risks for assets
+
+**Guidance**: Ensure security teams are granted Security Reader permissions in your Azure tenant and subscriptions so they can monitor for security risks using Azure Security Center.
+
+Depending on how security team responsibilities are structured, monitoring for security risks could be the responsibility of a central security team or a local team. That said, security insights and risks must always be aggregated centrally within an organization.
+
+Security Reader permissions can be applied broadly to an entire tenant (Root Management Group) or scoped to management groups or specific subscriptions.
+
+Additional permissions might be required to get visibility into workloads and services.
+
+- [Overview of Security Reader Role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#security-reader)
+
+- [Overview of Azure Management Groups](../governance/management-groups/overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### AM-2: Ensure security team has access to asset inventory and metadata
+
+**Guidance**: Ensure that security teams have access to a continuously updated inventory of assets on Azure. Security teams often need this inventory to evaluate their organization's potential exposure to emerging risks, and as an input to continuous security improvements.
+
+The Azure Security Center inventory feature and Azure Resource Graph can query for and discover all resources in your subscriptions, including Azure services, applications, and network resources.
+
+Logically organize assets according to your organization's taxonomy using Tags as well as other metadata in Azure (Name, Description, and Category).
+
+- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
+
+- [Azure Security Center asset inventory management](../security-center/asset-inventory.md)
+
+- [For more information about tagging assets, see the resource naming and tagging decision guide](https://docs.microsoft.com/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### AM-3: Use only approved Azure services
+
+**Guidance**: Microsoft Azure Peering Service supports Azure Resource Manager based deployments and configuration enforcement using Azure Policy in the 'Microsoft.Peering/peerings' namespace. Use Azure Policy to audit and restrict which services users can provision in your environment. Use Azure Resource Graph to query for and discover resources within your subscriptions. You can also use Azure Monitor to create rules that trigger alerts when a non-approved service is detected.
+
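+As a minimal sketch (assuming the resource-graph CLI extension is installed), you can inventory deployed resources in this namespace with Azure Resource Graph:
+
+```bash
+# List resources under the Microsoft.Peering namespace across your subscriptions.
+az graph query -q "Resources | where type startswith 'microsoft.peering/' | project name, type, resourceGroup, location"
+```
+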
+- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
+
+- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+
+- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### AM-5: Limit users' ability to interact with Azure Resource Manager
+
+**Guidance**: Configure Azure Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App.
+
+- [How to configure Conditional Access to block access to Azure Resource Manager](../role-based-access-control/conditional-access-azure-management.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Logging and Threat Detection
+
+*For more information, see the [Azure Security Benchmark: Logging and Threat Detection](/azure/security/benchmarks/security-controls-v2-logging-threat-protection).*
+
+### LT-1: Enable threat detection for Azure resources
+
+**Guidance**: Ensure you are monitoring different types of Azure assets for potential threats and anomalies. Focus on getting high-quality alerts to reduce false positives for analysts to sort through. Alerts can be sourced from log data, agents, or other data.
+
+Use the Azure Security Center built-in threat detection capability, which is based on monitoring Azure service telemetry and analyzing service logs. Data is collected using the Log Analytics agent, which reads various security-related configurations and event logs from the system and copies the data to your workspace for analysis.
+
+In addition, use Azure Sentinel to build analytics rules, which hunt threats that match specific criteria across your environment. The rules generate incidents when the criteria are matched, so that you can investigate each incident. Azure Sentinel can also import third-party threat intelligence to enhance its threat detection capability.
+
+- [Threat protection in Azure Security Center](/azure/security-center/threat-protection)
+
+- [Azure Security Center security alerts reference guide](../security-center/alerts-reference.md)
+
+- [Create custom analytics rules to detect threats](../sentinel/tutorial-detect-threats-custom.md)
+
+- [Cyber threat intelligence with Azure Sentinel](/azure/architecture/example-scenario/data/sentinel-threat-intelligence)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-2: Enable threat detection for Azure identity and access management
+
+**Guidance**: Azure Active Directory (Azure AD) provides user logs that can be viewed with Azure AD reporting or integrated with Azure Monitor. These logs can also be integrated with Azure Sentinel or other security information and event management (SIEM) solution or monitoring tools for more sophisticated monitoring and analytics use cases:
+
+- Sign-in – The sign-in report provides information about the usage of managed applications and user sign-in activities
+
+- Audit logs - Provides traceability through logs for all changes made by various features within Azure AD. Examples of audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles and policies
+
+- Risky sign-in - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account
+
+- Users flagged for risk - A risky user is an indicator for a user account that might have been compromised
+
+Azure Security Center can also alert on certain suspicious activities such as an excessive number of failed authentication attempts by deprecated accounts in a subscription. In addition to the basic security hygiene monitoring, the Threat Protection module in Security Center can also collect additional in-depth security alerts from individual Azure compute resources (such as virtual machines, containers, app service), data resources (such as SQL DB and storage), and Azure service layers. This capability allows you to see account anomalies inside the individual resources.
+
+- [Audit activity reports in Azure AD](../active-directory/reports-monitoring/concept-audit-logs.md)
+
+- [Enable Azure Identity Protection](../active-directory/identity-protection/overview-identity-protection.md)
+
+- [Threat protection in Azure Security Center](/azure/security-center/threat-protection)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-3: Enable logging for Azure network activities
+
+**Guidance**: Configure connection telemetry to provide insight into the connectivity between your connection location and the Microsoft network via Microsoft Azure Peering Service.
+
+- [Configure connection telemetry](measure-connection-telemetry.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-4: Enable logging for Azure resources
+
+**Guidance**: Enable Azure Activity Log and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archiving.
+
+Activity logs provide insight into the operations that were performed on your Microsoft Azure Peering Service resources at the control plane level. Using Azure Activity Log data, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) performed at the control plane level for your resources.
+
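+A minimal sketch (placeholder resource group) of reviewing recent control-plane operations with the Azure CLI:
+
+```bash
+# Show activity log entries for the last 7 days, scoped to one resource group.
+az monitor activity-log list --resource-group myresourcegroup --offset 7d --output table
+```
+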
+- [Azure Activity log](/azure/azure-monitor/platform/activity-log)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-5: Centralize security log management and analysis
+
+**Guidance**: Centralize logging storage and analysis to enable correlation. For each log source, ensure you have assigned a data owner, access guidance, storage location, what tools are used to process and access the data, and data retention requirements.
+
+Ensure you are integrating Azure activity logs into your central logging. Ingest logs via Azure Monitor to aggregate security data generated by endpoint devices, network resources, and other security systems. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use Azure Storage accounts for long term and archival storage.
+
+In addition, enable and onboard data to Azure Sentinel or a third-party SIEM.
+
+Many organizations choose to use Azure Sentinel for "hot" data that is used frequently and Azure Storage for "cold" data that is used less frequently.
+
+- [How to collect platform logs and metrics with Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md)
+
+- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### LT-6: Configure log storage retention
+
+**Guidance**: In Azure Monitor, set the log retention period for Log Analytics workspaces associated with your Azure resources according to your organization's compliance regulations.
+
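+A minimal sketch (placeholder workspace and resource group; retention expressed in days) of setting retention with the Azure CLI:
+
+```bash
+# Set a 90-day retention period on a Log Analytics workspace.
+az monitor log-analytics workspace update \
+  --resource-group myresourcegroup \
+  --workspace-name myworkspace \
+  --retention-time 90
+```
+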
+- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Incident Response
+
+*For more information, see the [Azure Security Benchmark: Incident Response](/azure/security/benchmarks/security-controls-v2-incident-response).*
+
+### IR-1: Preparation – update incident response process for Azure
+
+**Guidance**: Ensure your organization has processes to respond to security incidents, has updated these processes for Azure, and is regularly exercising them to ensure readiness.
+
+- [Implement security across the enterprise environment](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+- [Incident response reference guide](/microsoft-365/downloads/IR-Reference-Guide.pdf)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-2: Preparation – setup incident notification
+
+**Guidance**: Set up security incident contact information in Azure Security Center. This contact information is used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. You also have options to customize incident alert and notification in different Azure services based on your incident response needs.
+
+- [How to set the Azure Security Center security contact](../security-center/security-center-provide-security-contact-details.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-3: Detection and analysis – create incidents based on high-quality alerts
+
+**Guidance**: Ensure you have a process to create high-quality alerts and measure the quality of alerts. This allows you to learn lessons from past incidents and prioritize alerts for analysts, so they don't waste time on false positives.
+
+High-quality alerts can be built based on experience from past incidents, validated community sources, and tools designed to generate and clean up alerts by fusing and correlating diverse signal sources.
+
+Azure Security Center provides high-quality alerts across many Azure assets. You can use the ASC data connector to stream the alerts to Azure Sentinel. Azure Sentinel lets you create advanced alert rules to generate incidents automatically for an investigation.
+
+Export your Azure Security Center alerts and recommendations using the export feature to help identify risks to Azure resources. Export alerts and recommendations either manually or in an ongoing, continuous fashion.
+
+- [How to configure export](../security-center/continuous-export.md)
+
+- [How to stream alerts into Azure Sentinel](../sentinel/connect-azure-security-center.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-4: Detection and analysis – investigate an incident
+
+**Guidance**: Ensure analysts can query and use diverse data sources, as they investigate potential incidents to build a full view of what happened. Diverse logs should be collected to track the activities of a potential attacker across the kill chain to avoid blind spots. You should also ensure insights and learnings are captured for other analysts and for future historical reference.
+
+The data sources for investigation include the centralized logging sources that are already being collected from the in-scope services and running systems, but can also include:
+
+- Network data – use network security groups' flow logs, Azure Network Watcher, and Azure Monitor to capture network flow logs and other analytics information.
+
+- Snapshots of running systems:
+
+ - Use Azure virtual machine's snapshot capability to create a snapshot of the running system's disk.
+
+ - Use the operating system's native memory dump capability to create a snapshot of the running system's memory.
+
+ - Use the snapshot feature of the Azure services or your software's own capability to create snapshots of the running systems.
+
+Azure Sentinel provides extensive data analytics across virtually any log source and a case management portal to manage the full lifecycle of incidents. Intelligence information during an investigation can be associated with an incident for tracking and reporting purposes.
+
+- [Snapshot a Windows machine's disk](../virtual-machines/windows/snapshot-copy-managed-disk.md)
+
+- [Snapshot a Linux machine's disk](../virtual-machines/linux/snapshot-copy-managed-disk.md)
+
+- [Microsoft Azure Support diagnostic information and memory dump collection](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
+
+- [Investigate incidents with Azure Sentinel](../sentinel/tutorial-investigate-cases.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-5: Detection and analysis – prioritize incidents
+
+**Guidance**: Provide context to analysts on which incidents to focus on first based on alert severity and asset sensitivity.
+
+Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytics used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.
+
+Additionally, mark resources using tags and create a naming system to identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the incident occurred.
+
+- [Security alerts in Azure Security Center](../security-center/security-center-alerts-overview.md)
+
+- [Use tags to organize your Azure resources](/azure/azure-resource-manager/resource-group-using-tags)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### IR-6: Containment, eradication and recovery – automate the incident handling
+
+**Guidance**: Automate manual repetitive tasks to speed up response time and reduce the burden on analysts. Manual tasks take longer to execute, slowing each incident and reducing how many incidents an analyst can handle. Manual tasks also increase analyst fatigue, which increases the risk of human error that causes delays, and degrades the ability of analysts to focus effectively on complex tasks.
+Use workflow automation features in Azure Security Center and Azure Sentinel to automatically trigger actions or run a playbook to respond to incoming security alerts. The playbook takes actions, such as sending notifications, disabling accounts, and isolating problematic networks.
+
+- [Configure workflow automation in Security Center](../security-center/workflow-automation.md)
+
+- [Set up automated threat responses in Azure Security Center](https://docs.microsoft.com/azure/security-center/tutorial-security-incident#triage-security-alerts)
+
+- [Set up automated threat responses in Azure Sentinel](../sentinel/tutorial-respond-threats-playbook.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Posture and Vulnerability Management
+
+*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](/azure/security/benchmarks/security-controls-v2-vulnerability-management).*
+
+### PV-1: Establish secure configurations for Azure services
+
+**Guidance**: Define security guardrails for infrastructure and DevOps teams by making it easy to securely configure the Azure services they use.
+
+Start your security configuration of Azure services with the service baselines in the Azure Security Benchmark and customize as needed for your organization.
+
+Use Azure Security Center to configure Azure Policy to audit and enforce configurations of your Azure resources.
+
+You can use Azure Blueprints to automate deployment and configuration of services and application environments, including Azure Resource Manager templates, Azure RBAC controls, and policies, in a single blueprint definition.
+
+- [Illustration of guardrails implementation in enterprise-scale landing zone](/azure/cloud-adoption-framework/ready/enterprise-scale/architecture#landing-zone-expanded-definition)
+
+- [Working with security policies in Azure Security Center](../security-center/tutorial-security-policy.md)
+
+- [Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
+
+- [Azure Blueprints](../governance/blueprints/overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PV-2: Sustain secure configurations for Azure services
+
+**Guidance**: Not applicable; this recommendation is intended for compute resources.
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### PV-8: Conduct regular attack simulation
+
+**Guidance**: As required, conduct penetration testing or red team activities on your Azure resources and ensure remediation of all critical security findings.
+Follow the Microsoft Cloud Penetration Testing Rules of Engagement to ensure your penetration tests are not in violation of Microsoft policies. Use Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications.
+
+- [Penetration testing in Azure](../security/fundamentals/pen-testing.md)
+
+- [Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
+
+- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
+
+**Responsibility**: Shared
+
+**Azure Security Center monitoring**: None
+
+## Governance and Strategy
+
+*For more information, see the [Azure Security Benchmark: Governance and Strategy](/azure/security/benchmarks/security-controls-v2-governance-strategy).*
+
+### GS-1: Define asset management and data protection strategy
+
+**Guidance**: Ensure you document and communicate a clear strategy for continuous monitoring and protection of systems and data. Prioritize discovery, assessment, protection, and monitoring of business-critical data and systems.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- Data classification standard in accordance with the business risks
+
+- Security organization visibility into risks and asset inventory
+
+- Security organization approval of Azure services for use
+
+- Security of assets through their lifecycle
+
+- Required access control strategy in accordance with organizational data classification
+
+- Use of Azure native and third-party data protection capabilities
+
+- Data encryption requirements for in-transit and at-rest use cases
+
+- Appropriate cryptographic standards
+
+For more information, see the following references:
+- [Azure Security Architecture Recommendation - Storage, data, and encryption](https://docs.microsoft.com/azure/architecture/framework/security/storage-data-encryption?toc=/security/compass/toc.json&amp;bc=/security/compass/breadcrumb/toc.json)
+
+- [Azure Security Fundamentals - Azure Data security, encryption, and storage](../security/fundamentals/encryption-overview.md)
+
+- [Cloud Adoption Framework - Azure data security and encryption best practices](https://docs.microsoft.com/azure/security/fundamentals/data-encryption-best-practices?toc=/azure/cloud-adoption-framework/toc.json&amp;bc=/azure/cloud-adoption-framework/_bread/toc.json)
+
+- [Azure Security Benchmark - Asset management](/azure/security/benchmarks/security-benchmark-v2-asset-management)
+
+- [Azure Security Benchmark - Data Protection](/azure/security/benchmarks/security-benchmark-v2-data-protection)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-2: Define enterprise segmentation strategy
+
+**Guidance**: Establish an enterprise-wide strategy for segmenting access to assets using a combination of identity, network, application, subscription, management group, and other controls.
+
+Carefully balance the need for security separation with the need to enable daily operation of the systems that need to communicate with each other and access data.
+
+Ensure that the segmentation strategy is implemented consistently across control types, including network security, identity and access models, application permission/access models, and human process controls.
+
+- [Guidance on segmentation strategy in Azure (video)](/security/compass/microsoft-security-compass-introduction#azure-components-and-reference-model-2151)
+
+- [Guidance on segmentation strategy in Azure (document)](/security/compass/governance#enterprise-segmentation-strategy)
+
+- [Align network segmentation with enterprise segmentation strategy](/security/compass/network-security-containment#align-network-segmentation-with-enterprise-segmentation-strategy)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-3: Define security posture management strategy
+
+**Guidance**: Continuously measure and mitigate risks to your individual assets and the environment they are hosted in. Prioritize high value assets and highly-exposed attack surfaces, such as published applications, network ingress and egress points, user and administrator endpoints, etc.
+
+- [Azure Security Benchmark - Posture and vulnerability management](/azure/security/benchmarks/security-benchmark-v2-posture-vulnerability-management)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-4: Align organization roles, responsibilities, and accountabilities
+
+**Guidance**: Ensure you document and communicate a clear strategy for roles and responsibilities in your security organization. Prioritize providing clear accountability for security decisions, educating everyone on the shared responsibility model, and educating technical teams on the technology used to secure the cloud.
+
+- [Azure Security Best Practice 1 – People: Educate Teams on Cloud Security Journey](/azure/cloud-adoption-framework/security/security-top-10#1-people-educate-teams-about-the-cloud-security-journey)
+
+- [Azure Security Best Practice 2 - People: Educate Teams on Cloud Security Technology](/azure/cloud-adoption-framework/security/security-top-10#2-people-educate-teams-on-cloud-security-technology)
+
+- [Azure Security Best Practice 3 - Process: Assign Accountability for Cloud Security Decisions](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-5: Define network security strategy
+
+**Guidance**: Establish an Azure network security approach as part of your organization's overall security access control strategy.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- Centralized network management and security responsibility
+
+- Virtual network segmentation model aligned with the enterprise segmentation strategy
+
+- Remediation strategy in different threat and attack scenarios
+
+- Internet edge and ingress and egress strategy
+
+- Hybrid cloud and on-premises interconnectivity strategy
+
+- Up-to-date network security artifacts (e.g. network diagrams, reference network architecture)
+
+For more information, see the following references:
+- [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
+
+- [Azure Security Benchmark - Network Security](/azure/security/benchmarks/security-benchmark-v2-network-security)
+
+- [Azure network security overview](../security/fundamentals/network-overview.md)
+
+- [Enterprise network architecture strategy](/azure/cloud-adoption-framework/ready/enterprise-scale/architecture)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-6: Define identity and privileged access strategy
+
+**Guidance**: Establish Azure identity and privileged access approaches as part of your organization's overall security access control strategy.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- A centralized identity and authentication system and its interconnectivity with other internal and external identity systems
+
+- Strong authentication methods in different use cases and conditions
+
+- Protection of highly privileged users
+
+- Anomalous user activity monitoring and handling
+
+- User identity and access review and reconciliation process
+
+For more information, see the following references:
+
+- [Azure Security Benchmark - Identity management](/azure/security/benchmarks/security-benchmark-v2-identity-management)
+
+- [Azure Security Benchmark - Privileged access](/azure/security/benchmarks/security-benchmark-v2-privileged-access)
+
+- [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
+
+- [Azure identity management security overview](../security/fundamentals/identity-management-overview.md)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+### GS-7: Define logging and threat response strategy
+
+**Guidance**: Establish a logging and threat response strategy to rapidly detect and remediate threats while meeting compliance requirements. Prioritize providing analysts with high-quality alerts and seamless experiences so that they can focus on threats rather than integration and manual steps.
+
+This strategy should include documented guidance, policy, and standards for the following elements:
+
+- The security operations (SecOps) organization's role and responsibilities
+
+- A well-defined incident response process aligning with NIST or another industry framework
+
+- Log capture and retention to support threat detection, incident response, and compliance needs
+
+- Centralized visibility and correlation of threat information, using SIEM, native Azure capabilities, and other sources
+
+- Communication and notification plan with your customers, suppliers, and public parties of interest
+
+- Use of Azure native and third-party platforms for incident handling, such as logging and threat detection, forensics, and attack remediation and eradication
+
+- Processes for handling incidents and post-incident activities, such as lessons learned and evidence retention
+
+For more information, see the following references:
+
+- [Azure Security Benchmark - Logging and threat detection](/azure/security/benchmarks/security-benchmark-v2-logging-threat-detection)
+
+- [Azure Security Benchmark - Incident response](/azure/security/benchmarks/security-benchmark-v2-incident-response)
+
+- [Azure Security Best Practice 4 - Process. Update Incident Response Processes for Cloud](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
+
+- [Azure Adoption Framework logging and reporting decision guide](/azure/cloud-adoption-framework/decision-guides/logging-and-reporting/)
+
+- [Azure enterprise-scale management and monitoring](/azure/cloud-adoption-framework/ready/enterprise-scale/management-and-monitoring)
+
+**Responsibility**: Customer
+
+**Azure Security Center monitoring**: None
+
+## Next steps
+
+- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
+- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
postgresql Howto Hyperscale Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-scale-rebalance.md
Previously updated : 11/17/2020 Last updated : 04/09/2021 # Rebalance shards in Hyperscale (Citus) server group To take advantage of newly added nodes you must rebalance distributed table [shards](concepts-hyperscale-distributed-data.md#shards), which means moving
-some shards from existing nodes to the new ones. First verify that the new
-workers have successfully finished provisioning. Then start the shard
-rebalancer, by connecting to the cluster coordinator node with psql and
-running:
+some shards from existing nodes to the new ones.
-```sql
-SELECT rebalance_table_shards('distributed_table_name');
-```
+## Determine if the server group needs a rebalance
+
+The Azure portal can show you whether data is distributed equally between
+worker nodes in a server group. To see it, go to the **Shard rebalancer** page
+in the **Server group management** menu. If data is skewed between workers,
+you'll see the message **Rebalancing is recommended**, along with a list of the
+size of each node.
-The
+If data is already balanced, you'll see the message **Rebalancing is not
+recommended at this time**.
+
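Besides the portal, you can get a rough idea of shard distribution from psql. The query below is a minimal sketch that counts shard placements per worker using the standard Citus metadata table `pg_dist_shard_placement`; it counts placements rather than measuring data size, so use it only as a quick sanity check alongside the portal view.

```sql
-- Quick check (run on the coordinator): shard placements per worker node.
-- This counts placements only; it does not account for differences in shard size.
SELECT nodename, nodeport, count(*) AS shard_count
FROM pg_dist_shard_placement
GROUP BY nodename, nodeport
ORDER BY shard_count DESC;
```
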
+## Run the shard rebalancer
+
+To start the shard rebalancer, you need to connect to the coordinator node of
+the server group and run the
[rebalance_table_shards](reference-hyperscale-functions.md#rebalance_table_shards)
-function rebalances all tables in the
+SQL function on distributed tables. The function rebalances all tables in the
[colocation](concepts-hyperscale-colocation.md) group of the table named in its argument. Thus you do not have to call the function for every distributed table, just call it on a representative table from each colocation group.
-**Next steps**
+```sql
+SELECT rebalance_table_shards('distributed_table_name');
+```
+
+## Monitor rebalance progress
+
+To watch the rebalancer after you start it, go back to the Azure portal. Open
+the **Shard rebalancer** page in **Server group management**. It will show the
+message **Rebalancing is underway** along with two tables.
+
+The first table shows the number of shards moving into or out of a node, for
+example, "6 of 24 moved in." The second table shows progress per database
+table: name, shard count affected, data size affected, and rebalancing status.
+
+Select the **Refresh** button to update the page. When rebalancing is complete,
+it will again say **Rebalancing is not recommended at this time**.
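
If you prefer to watch progress from psql instead of the portal, Citus also exposes a progress function. The call below is a sketch; the exact output columns depend on the Citus version installed in your server group.

```sql
-- Run on the coordinator in a separate session while the rebalance is underway.
-- Returns one row per shard move, including source and target nodes and a progress indicator.
SELECT * FROM get_rebalance_progress();
```
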
+## Next steps
- Learn more about server group [performance options](concepts-hyperscale-configuration-options.md).
postgresql Hyperscale Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale-preview-features.md
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
Here are the features currently available for preview:
-* **[Columnar storage](concepts-hyperscale-columnar.md)**.
- Store selected tables' columns (rather than rows) contiguously
- on disk. Supports on-disk compression. Good for analytic and
- data warehousing workloads.
-* **[PostgreSQL 12 and 13](concepts-hyperscale-versions.md)**.
- Use the latest database version in your server group.
* **[Basic tier](concepts-hyperscale-tiers.md)**. Run a server group using only a coordinator node and no worker nodes. An economical way to do initial testing and development, and handle small production workloads.
+* **[PostgreSQL 12 and 13](concepts-hyperscale-versions.md)**.
+ Use the latest database version in your server group.
+* **[Citus
+ 10](concepts-hyperscale-versions.md#citus-and-other-extension-versions)**.
+ Installed automatically on server groups running PostgreSQL 13.
+* **[Columnar storage](concepts-hyperscale-columnar.md)**.
+ Store selected tables' columns (rather than rows) contiguously
+ on disk. Supports on-disk compression. Good for analytic and
+  data warehousing workloads. (A short usage sketch follows this list.)
* **[Read replicas](howto-hyperscale-read-replicas-portal.md)** (currently same-region only). Any changes that happen to the primary server group get reflected in its replica, and queries
Here are the features currently available for preview:
the server group at once, while limiting the number of active connections. It satisfies connection requests while keeping the coordinator node running smoothly.
-* **[PgAudit](concepts-hyperscale-audit.md)**. Provides detailed
+* **[pgAudit](concepts-hyperscale-audit.md)**. Provides detailed
session and object audit logging via the standard PostgreSQL logging facility. It produces audit logs required to pass certain government, financial, or ISO certification audits.
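
As a quick illustration of the columnar storage feature in the list above, the snippet below creates a columnar table and distributes it. This is a minimal sketch that assumes a server group running Citus 10 on PostgreSQL 13, where columnar access is available; the table and column names are placeholders.

```sql
-- Hypothetical example: store an append-mostly events table in columnar format.
CREATE TABLE events (
    event_id   bigint,
    device_id  int,
    payload    jsonb,
    created_at timestamptz
) USING columnar;

-- Distribute the table across worker nodes as usual.
SELECT create_distributed_table('events', 'device_id');
```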
postgresql Postgresql Hyperscale Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/postgresql-hyperscale-security-baseline.md
Title: Azure security baseline for Azure Database for PostgreSQL - Hyperscale (Citus)
-description: The Azure Database for PostgreSQL - Hyperscale (Citus) security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
+ Title: Azure security baseline for Azure Database for PostgreSQL - Hyperscale
+description: The Azure Database for PostgreSQL - Hyperscale security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
Previously updated : 08/04/2020 Last updated : 04/08/2021
-# Azure security baseline for Azure Database for PostgreSQL - Hyperscale (Citus)
+# Azure security baseline for Azure Database for PostgreSQL - Hyperscale
-The Azure Security Baseline for Azure Database for PostgreSQL - Hyperscale (Citus) contains recommendations that will help you improve the security posture of your deployment.
+This security baseline applies guidance from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview-v1.md) to Azure Database for PostgreSQL - Hyperscale. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the **security controls** defined by the Azure Security Benchmark and the related guidance applicable to Azure Database for PostgreSQL - Hyperscale.
-The baseline for this service is drawn from the [Azure Security Benchmark version 1.0](../security/benchmarks/overview.md), which provides recommendations on how you can secure your cloud solutions on Azure with our best practices guidance.
+> [!NOTE]
+> **Controls** not applicable to Azure Database for PostgreSQL - Hyperscale, or for which the responsibility is Microsoft's, have been excluded. To see how Azure Database for PostgreSQL - Hyperscale completely maps to the Azure Security Benchmark, see the **[full Azure Database for PostgreSQL - Hyperscale security baseline mapping file](https://github.com/MicrosoftDocs/SecurityBenchmarks/raw/master/Azure%20Offer%20Security%20Baselines/1.1/azure-database-for-postgresql-hyperscale-security-baseline-v1.1.xlsx)**.
-For more information, see [Azure Security Baselines overview](../security/benchmarks/security-baselines-overview.md).
+## Network Security
-## Network security
-
-*For more information, see [Security control: Network security](../security/benchmarks/security-control-network-security.md).*
+*For more information, see the [Azure Security Benchmark: Network Security](../security/benchmarks/security-control-network-security.md).*
### 1.1: Protect Azure resources within virtual networks **Guidance**: Azure Database for PostgreSQL server firewall prevents all access to your Hyperscale (Citus) coordinator node until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level. -- [How to configure Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](./concepts-hyperscale-firewall-rules.md)-
-**Azure Security Center monitoring**: Currently not available
+- [How to configure Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-hyperscale-firewall-rules.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
+
+**Azure Policy built-in definitions - Microsoft.DBforPostgreSQL**:
++ ### 1.9: Maintain standard security configurations for network devices **Guidance**: Define and implement standard security configurations for network settings and network resources associated with your Azure Database for PostgreSQL instances with Azure Policy. Use Azure Policy aliases in the "Microsoft.Network" namespace to create custom policies to audit or enforce the network configuration of your Azure Database for PostgreSQL instances. - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [Azure Policy samples for networking](../governance/policy/samples/built-in-policies.md#network)
+- [Azure Policy samples for networking](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#network)
- [How to create an Azure Blueprint](../governance/blueprints/create-blueprint-portal.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-## Logging and monitoring
+**Azure Security Center monitoring**: None
-*For more information, see [Security control: Logging and monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
+## Logging and Monitoring
+
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../security/benchmarks/security-control-logging-monitoring.md).*
### 2.2: Configure central security log management
Also, ingest logs via Azure Monitor to aggregate security data generated by Hype
- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md) -- [Metrics in Hyperscale (Citus)](./concepts-hyperscale-monitoring.md)
+- [Metrics in Hyperscale (Citus)](concepts-hyperscale-monitoring.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Currently not available
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.3: Enable audit logging for Azure resources **Guidance**: Hyperscale (Citus) provides metrics for each node in a server group. The metrics give insight into the behavior of supporting resources. Each metric is emitted at a one-minute frequency, and has up to 30 days of history.
For control plane audit logging, enable Azure Activity Log diagnostic settings a
Also, ingest logs via Azure Monitor to aggregate security data generated by Hyperscale (Citus). Within the Azure Monitor, use Log Analytics workspace(s) to query and perform analytics, and use storage accounts for long-term/archival storage. Alternatively, you may enable, and on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM). -- [Metrics in Hyperscale (Citus)](./concepts-hyperscale-monitoring.md)
+- [Metrics in Hyperscale (Citus)](concepts-hyperscale-monitoring.md)
- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md) - [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Currently not available
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.5: Configure security log storage retention **Guidance**: Within Azure Monitor, for the Log Analytics workspace being used to hold your Hyperscale (Citus) logs, set the retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage. -- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)--- [Storing resource logs in an Azure Storage Account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage)
+- [How to set log retention parameters for Log Analytics Workspaces](https://docs.microsoft.com/azure/azure-monitor/logs/manage-cost-storage#change-the-data-retention-period)
-**Azure Security Center monitoring**: Not applicable
+- [Storing resource logs in an Azure Storage Account](https://docs.microsoft.com/azure/azure-monitor/essentials/resource-logs#send-to-azure-storage)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.6: Monitor and review logs **Guidance**: Analyze and monitor logs from your Hyperscale (Citus) instances for anomalous behavior. Use Azure Monitor's Log Analytics to review logs and perform queries on log data. Alternatively, you may enable and on-board data to Azure Sentinel or a third-party SIEM.
Also, ingest logs via Azure Monitor to aggregate security data generated by Hype
- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.7: Enable alerts for anomalous activities **Guidance**: You may enable diagnostic settings for Hyperscale (Citus) and send logs to a Log Analytics workspace. You can configure and receive an alert based on monitoring metrics for your Azure services. Use Azure Monitor's Log Analytics to review logs and perform queries on log data. Alternatively, you may enable and on-board data to Azure Sentinel or a third-party SIEM. Onboard your Log Analytics workspace to Azure Sentinel as it provides a security orchestration automated response (SOAR) solution. This allows for playbooks (automated solutions) to be created and used to remediate security issues. -- [Metrics in Hyperscale (Citus)](./howto-hyperscale-alert-on-metric.md)
+- [Metrics in Hyperscale (Citus)](howto-hyperscale-alert-on-metric.md)
- [How to configure Diagnostic Settings for the Azure Activity Log](../azure-monitor/essentials/activity-log.md) - [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Currently not available
- **Responsibility**: Customer
-## Identity and access control
+**Azure Security Center monitoring**: None
-*For more information, see [Security control: Identity and access control](../security/benchmarks/security-control-identity-access-control.md).*
+## Identity and Access Control
+
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../security/benchmarks/security-control-identity-access-control.md).*
### 3.1: Maintain an inventory of administrative accounts
Additionally, The PostgreSQL engine uses roles to control access to database obj
- [Understand custom roles for Azure subscription](../role-based-access-control/custom-roles.md) -- [Understand Azure Database for PostgreSQL resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdbforpostgresql) --- [Understand access management for Azure Database for PostgreSQL](./concepts-security.md#access-management)
+- [Understand Azure Database for PostgreSQL resource provider operations](https://docs.microsoft.com/azure/role-based-access-control/resource-provider-operations#microsoftdbforpostgresql)
-- [How to create users in Azure Database for PostgreSQL - Hyperscale (Citus)](./howto-hyperscale-create-users.md)
+- [Understand access management for Azure Database for PostgreSQL](https://docs.microsoft.com/azure/postgresql/concepts-security#access-management)
-- [How to connect to PostgreSQL - Hyperscale (Citus) using psql](./quickstart-create-hyperscale-portal.md#connect-to-the-database-using-psql)
+- [How to create users in Azure Database for PostgreSQL - Hyperscale (Citus)](howto-hyperscale-create-users.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to connect to PostgreSQL - Hyperscale (Citus) using psql](https://docs.microsoft.com/azure/postgresql/quickstart-create-hyperscale-portal#connect-to-the-database-using-psql)
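
For the database side of that inventory, you can review existing roles from psql with a standard PostgreSQL query. The query below is a minimal sketch; creating additional user roles is done through the Azure portal, as described in the linked how-to.

```sql
-- Review existing database roles when reconciling your inventory of accounts.
SELECT rolname, rolsuper, rolcreaterole, rolcanlogin
FROM pg_roles
ORDER BY rolname;
```
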
**Responsibility**: Customer
-### 3.2: Change default passwords where applicable
+**Azure Security Center monitoring**: None
-**Guidance**: Azure AD does not have the concept of default passwords. Other Azure resources requiring a password forces a password to be created with complexity requirements and a minimum password length, which differs depending on the service. You are responsible for third-party applications and marketplace services that may use default passwords.
+### 3.2: Change default passwords where applicable
-**Azure Security Center monitoring**: Not applicable
+**Guidance**: Azure Active Directory (Azure AD) does not have the concept of default passwords. Other Azure resources that require a password force one to be created with complexity requirements and a minimum password length, which differ depending on the service. You are responsible for third-party applications and marketplace services that may use default passwords.
**Responsibility**: Customer
-### 3.3: Use dedicated administrative accounts
+**Azure Security Center monitoring**: None
-**Guidance**: Create standard operating procedures around the use of dedicated administrative accounts that are used to access your Hyperscale (Citus) instances. The admin accounts for managing the Azure resource are tied to Azure Active Directory, there are also local server admin accounts that exist within the Hyperscale (Citus) server group for managing database access permissions. Use Azure Security Center Identity and access management to monitor the number of administrative accounts within Azure Active Directory.
+### 3.3: Use dedicated administrative accounts
-- [Understand Azure Security Center Identity and Access](../security-center/security-center-identity-access.md)
+**Guidance**: Create standard operating procedures around the use of dedicated administrative accounts that are used to access your Hyperscale (Citus) instances. The admin accounts for managing the Azure resource are tied to Azure Active Directory (Azure AD); there are also local server admin accounts that exist within the Hyperscale (Citus) server group for managing database access permissions. Use Azure Security Center Identity and access management to monitor the number of administrative accounts within Azure AD.
-- [How to create users in Azure Database for PostgreSQL - Hyperscale (Citus)](./howto-hyperscale-create-users.md)
+- [Understand Azure Security Center Identity and Access](../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Not applicable
+- [How to create users in Azure Database for PostgreSQL - Hyperscale (Citus)](howto-hyperscale-create-users.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.5: Use multi-factor authentication for all Azure Active Directory based access
-**Guidance**: For access to the Azure portal enable Azure Active Directory Multi-Factor Authentication (MFA) and follow Azure Security Center Identity and Access Management recommendations.
+**Guidance**: For access to the Azure portal, enable Azure Active Directory (Azure AD) multifactor authentication and follow Azure Security Center Identity and Access Management recommendations.
-- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
- [How to monitor identity and access within Azure Security Center](../security-center/security-center-identity-access.md) -
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
+**Azure Security Center monitoring**: None
-**Guidance**: Use Privileged Access Workstations (PAWs) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources.
--- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
+### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
-- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
+**Guidance**: Use Privileged Access Workstations (PAWs) with multifactor authentication configured to log into and configure Azure resources.
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
-**Azure Security Center monitoring**: Not applicable
+- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.7: Alert on account login behavior deviation
-**Guidance**: Use Azure Active Directory (AD) Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
+**Guidance**: Use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
Use Azure AD Risk Detections to view alerts and reports on risky user behavior.
Use Azure AD Risk Detections to view alerts and reports on risky user behavior.
- [Understand Azure AD risk detections](../active-directory/identity-protection/overview-identity-protection.md) -
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.8: Manage Azure resources from only approved locations **Guidance**: Use Conditional Access Named Locations to allow portal and Azure Resource Manager access from only specific logical groupings of IP address ranges or countries/regions. - [How to configure Named Locations in Azure](../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-### 3.9: Use Azure Active Directory
-
-**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system to manage PostgreSQL resources. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
+**Azure Security Center monitoring**: None
-Users within a Hyperscale (Citus) server group cannot be tied directly to Azure Active Directory accounts. To modify user privileges for database object access use standard PostgreSQL commands with tools such as PgAdmin or psql.
--- [Modify privilege's for user roles](./howto-hyperscale-create-users.md#how-to-modify-privileges-for-user-role)
+### 3.9: Use Azure Active Directory
-- [How to create and configure an AAD instance](../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
+**Guidance**: Use Azure Active Directory (Azure AD) as the central authentication and authorization system to manage PostgreSQL resources. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
+Users within a Hyperscale (Citus) server group cannot be tied directly to Azure AD accounts. To modify user privileges for database object access use standard PostgreSQL commands with tools such as PgAdmin or psql.
+- [Modify privileges for user roles](https://docs.microsoft.com/azure/postgresql/howto-hyperscale-create-users#how-to-modify-privileges-for-user-role)
-**Azure Security Center monitoring**: Not applicable
+- [How to create and configure an Azure AD instance](../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
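
Because database-level privileges are managed with ordinary PostgreSQL commands, a typical adjustment looks like the sketch below, run with psql against the coordinator node. The role and table names are placeholders.

```sql
-- Hypothetical example: give an existing role read-only access to one table
-- and revoke a privilege it no longer needs.
GRANT SELECT ON public.orders TO app_reader;
REVOKE INSERT ON public.orders FROM app_reader;
```
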
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.10: Regularly review and reconcile user access
-**Guidance**: Review and reconcile access for both users who have access to the local database as well as through Azure Active Directory to manage PostgreSQL resources.
+**Guidance**: Review and reconcile access both for users who have access to the local database and for users who manage PostgreSQL resources through Azure Active Directory (Azure AD).
-For users with access to manage database Azure resources, review the Azure Active Directory (AD) logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications that may be used to access Hyperscale (Citus), and role assignments. User access should be reviewed on a regular basis such as every 90 days to make sure only the right users have continued access.
+For users with access to manage database Azure resources, review the Azure AD logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications that may be used to access Hyperscale (Citus), and role assignments. User access should be reviewed on a regular basis such as every 90 days to make sure only the right users have continued access.
- [Review PostgreSQL users and assigned roles](https://www.postgresql.org/docs/current/database-roles.html) -- [Understand Azure AD Reporting](../active-directory/reports-monitoring/index.yml)
+- [Understand Azure AD Reporting](/azure/active-directory/reports-monitoring/)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-### 3.11: Monitor attempts to access deactivated credentials
-
-**Guidance**:
-Within Azure Active Directory (AD), you have access to Azure AD Sign-in Activity, Audit and Risk Event log sources, which allow you to integrate with any SIEM/Monitoring tool.
+**Azure Security Center monitoring**: None
-You can streamline this process by creating Diagnostic Settings for Azure Active Directory user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace.
+### 3.11: Monitor attempts to access deactivated credentials
-- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+**Guidance**: Within Azure Active Directory (Azure AD), you have access to Azure AD Sign-in Activity, Audit and Risk Event log sources, which allow you to integrate with any SIEM/Monitoring tool.
+You can streamline this process by creating Diagnostic Settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace.
-**Azure Security Center monitoring**: Not applicable
+- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.12: Alert on account sign-in behavior deviation
-**Guidance**: Use Azure Active Directory's Identity Protection and risk detection features to configure automated responses to detected suspicious actions at the Azure Active Directory (AD) level. You may enable automated responses through Azure Sentinel to implement your organization's security responses.
+**Guidance**: Use the Identity Protection and risk detection features of Azure Active Directory (Azure AD) to configure automated responses to suspicious actions detected at the Azure AD level. You may enable automated responses through Azure Sentinel to implement your organization's security responses.
You can also ingest logs into Azure Sentinel for further investigation.
You can also ingest logs into Azure Sentinel for further investigation.
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 3.13: Provide Microsoft with access to relevant customer data during support scenarios **Guidance**: Currently not available; Customer Lockbox is not yet supported for Hyperscale (Citus). -- [List of Customer Lockbox supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
-**Azure Security Center monitoring**: Currently not available
+**Responsibility**: Customer
-**Responsibility**: Currently not available
+**Azure Security Center monitoring**: None
-## Data protection
+## Data Protection
-*For more information, see [Security control: Data protection](../security/benchmarks/security-control-data-protection.md).*
+*For more information, see the [Azure Security Benchmark: Data Protection](../security/benchmarks/security-control-data-protection.md).*
### 4.1: Maintain an inventory of sensitive Information
You can also ingest logs into Azure Sentinel for further investigation.
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 4.2: Isolate systems storing or processing sensitive information **Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Use a combination of administrative roles and firewall rules to isolate and limit network access to your Azure Database for PostgreSQL instances. - [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md) -- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md)--- [Understand firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](./concepts-hyperscale-firewall-rules.md)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
-- [Understand roles in Hyperscale (Citus)](./howto-hyperscale-create-users.md)
+- [Understand firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-hyperscale-firewall-rules.md)
-**Azure Security Center monitoring**: Not applicable
+- [Understand roles in Hyperscale (Citus)](howto-hyperscale-create-users.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 4.4: Encrypt all sensitive information in transit **Guidance**: Client application connections to the Hyperscale (Citus) coordinator node require Transport Layer Security (TLS) 1.2. Enforcing TLS connections between your database server and your client applications helps protect against "man-in-the-middle" attacks by encrypting the data stream between the server and your application.
For all Azure Database for PostgreSQL servers provisioned through the Azure port
In some cases, third-party applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file (.cer) to connect securely. -- [How to configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)](./concepts-hyperscale-ssl-connection-security.md)
+- [How to configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-hyperscale-ssl-connection-security.md)
-- [Applications that require certificate verification for TLS connectivity](./concepts-hyperscale-ssl-connection-security.md)
+- [Applications that require certificate verification for TLS connectivity](concepts-hyperscale-ssl-connection-security.md)
+**Responsibility**: Shared
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
-**Azure Security Center monitoring**: Yes
+**Azure Policy built-in definitions - Microsoft.DBforPostgreSQL**:
-**Responsibility**: Shared
-### 4.6: Use Azure RBAC to control access to resources
+### 4.6: Use Azure RBAC to control access to resources
**Guidance**: Use Azure role-based access control (Azure RBAC) to control access to the Hyperscale (Citus) control plane (e.g. Azure portal). Azure RBAC does not affect user permissions within the database.
To modify user privileges at the database level, use standard PostgreSQL command
- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md) -- [How to configure user access with SQL for Azure Database for PostgreSQL](./howto-hyperscale-create-users.md)--
-**Azure Security Center monitoring**: Yes
+- [How to configure user access with SQL for Azure Database for PostgreSQL](howto-hyperscale-create-users.md)
**Responsibility**: Customer
-### 4.8: Encrypt sensitive information at rest
-
-**Guidance**:
-At least once a day, Azure Database for PostgreSQL Hyperscale (Citus) takes snapshot backups of data files and the database transaction log. The backups allow you to restore a server to any point in time within the retention period. (The retention period is currently 35 days for all clusters.) All backups are encrypted using AES 256-bit encryption. The PostgreSQL Hyperscale (Citus) offering uses Microsoft-managed keys for encryption.
--- [Understand encryption for Azure PostgreSQL - Hyperscale (Citus) backups](./concepts-hyperscale-backup.md)
+**Azure Security Center monitoring**: None
+### 4.8: Encrypt sensitive information at rest
+**Guidance**: At least once a day, Azure Database for PostgreSQL Hyperscale (Citus) takes snapshot backups of data files and the database transaction log. The backups allow you to restore a server to any point in time within the retention period. (The retention period is currently 35 days for all clusters.) All backups are encrypted using AES 256-bit encryption. The PostgreSQL Hyperscale (Citus) offering uses Microsoft-managed keys for encryption.
-**Azure Security Center monitoring**: Not applicable
+- [Understand encryption for Azure PostgreSQL - Hyperscale (Citus) backups](concepts-hyperscale-backup.md)
**Responsibility**: Shared
+**Azure Security Center monitoring**: None
+ ### 4.9: Log and alert on changes to critical Azure resources **Guidance**: Use Azure Monitor with the Azure Activity Log to create alerts for when changes take place to production instances of Hyperscale (Citus) and other critical or related resources. - [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
-## Vulnerability management
+**Azure Security Center monitoring**: None
+
+## Vulnerability Management
-*For more information, see [Security control: Vulnerability management](../security/benchmarks/security-control-vulnerability-management.md).*
+*For more information, see the [Azure Security Benchmark: Vulnerability Management](../security/benchmarks/security-control-vulnerability-management.md).*
### 5.1: Run automated vulnerability scanning tools
At least once a day, Azure Database for PostgreSQL Hyperscale (Citus) takes sn
- [Feature coverage for Azure PaaS services in Azure Security Center](../security-center/features-paas.md)
-**Azure Security Center monitoring**: Currently not available
+**Responsibility**: Shared
-**Responsibility**: Currently not available
+**Azure Security Center monitoring**: None
-## Inventory and asset management
+## Inventory and Asset Management
-*For more information, see [Security control: Inventory and asset management](../security/benchmarks/security-control-inventory-asset-management.md).*
+*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../security/benchmarks/security-control-inventory-asset-management.md).*
### 6.1: Use automated asset discovery solution
At least once a day, Azure Database for PostgreSQL Hyperscale (Citus) takes sn
- [Understand Azure RBAC](../role-based-access-control/overview.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.2: Maintain asset metadata **Guidance**: Apply tags to Hyperscale (Citus) instances and other related resources giving metadata to logically organize them into a taxonomy. - [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.3: Delete unauthorized Azure resources **Guidance**: Use tagging, management groups, and separate subscriptions, where appropriate, to organize and track Hyperscale (Citus) instances and related resources. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner. - [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md) -- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.4: Define and maintain inventory of approved Azure resources
-**Guidance**:
-Use Azure policy to put restrictions on the type of resources that can be created in customer subscription(s) using the following built-in policy definitions:
+**Guidance**: Use Azure policy to put restrictions on the type of resources that can be created in customer subscription(s) using the following built-in policy definitions:
- Not allowed resource types
In addition, use the Azure Resource Graph to query/discover resources within the
- [How to create queries with Azure Graph](../governance/resource-graph/first-query-portal.md) -
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.5: Monitor for unapproved Azure resources **Guidance**: Use Azure policy to put restrictions on the type of resources that can be created in customer subscription(s) using the following built-in policy definitions:
In addition, use the Azure Resource Graph to query/discover resources within the
- [How to create queries with Azure Graph](../governance/resource-graph/first-query-portal.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.9: Use only approved Azure services **Guidance**: Use Azure policy to put restrictions on the type of resources that can be created in customer subscription(s) using the following built-in policy definitions:- - Not allowed resource types - Allowed resource types - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/index.md)-
-**Azure Security Center monitoring**: Not applicable
+- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 6.11: Limit users' ability to interact with Azure Resource Manager **Guidance**: Use the Azure Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App. This can prevent the creation and changes to resources within a high security environment, such as instances of Hyperscale (Citus) containing sensitive information. - [How to configure Conditional Access to block access to Azure Resource Manager](../role-based-access-control/conditional-access-azure-management.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-## Secure configuration
+**Azure Security Center monitoring**: None
+
+## Secure Configuration
-*For more information, see [Security control: Secure configuration](../security/benchmarks/security-control-secure-configuration.md).*
+*For more information, see the [Azure Security Benchmark: Secure Configuration](../security/benchmarks/security-control-secure-configuration.md).*
### 7.1: Establish secure configurations for all Azure resources **Guidance**: Define and implement standard security configurations for your Hyperscale (Citus) instances with Azure Policy. Use Azure Policy to create custom policies to audit or enforce the network configuration of your Azure Database for PostgreSQL instances.
-Also, Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which should be reviewed to ensure that the configurations meet / exceed the security requirements for your organization.
+Also, Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which should be reviewed to ensure that the configurations meet or exceed the security requirements for your organization.
- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias) - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [Single and multi-resource export to a template in Azure portal](../azure-resource-manager/templates/export-template-portal.md) ---
-**Azure Security Center monitoring**: Not applicable
+- [Single and multi-resource export to a template in Azure portal](../azure-resource-manager/templates/export-template-portal.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.3: Maintain secure Azure resource configurations **Guidance**: Use Azure policy [deny] and [deploy if not exist] to enforce secure settings across your Azure resources. In addition, you may use Azure Resource Manager templates to maintain the security configuration of your Azure resources required by your organization.
Also, Azure Resource Manager has the ability to export the template in JavaScrip
- [Azure Resource Manager templates overview](../azure-resource-manager/templates/overview.md) --
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.5: Securely store configuration of Azure resources **Guidance**: If using custom Azure policy definitions for your Hyperscale (Citus) instances and related resources, use Azure Repos to securely store and manage your code. - [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow) -- [Azure Repos Documentation](/azure/devops/repos/index)-
-**Azure Security Center monitoring**: Not applicable
+- [Azure Repos Documentation](/azure/devops/repos/)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.7: Deploy configuration management tools for Azure resources **Guidance**: Use Azure policy [deny] and [deploy if not exist] to enforce secure settings across your Azure resources. In addition, you may use Azure Resource Manager templates to maintain the security configuration of your Azure resources required by your organization.
Also, Azure Resource Manager has the ability to export the template in JavaScrip
- [Azure Resource Manager templates overview](../azure-resource-manager/templates/overview.md) --
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.9: Implement automated configuration monitoring for Azure resources **Guidance**: Use Azure Policy aliases in the "Microsoft.DBforPostgreSQL" namespace to create custom policies to alert, audit, and enforce system configurations. Use Azure policy [audit], [deny], and [deploy if not exist] to automatically enforce configurations for your Azure Database for PostgreSQL instances and related resources. - [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 7.12: Manage identities securely and automatically **Guidance**: Azure Database for PostgreSQL - Hyperscale (Citus) currently does not directly support managed identities. While creating the Azure Database for PostgreSQL server, you must provide credentials for an administrator user. You can create additional user roles in the Azure portal interface. -- [Create an Azure Database for PostgreSQL - Hyperscale (Citus)](./quickstart-create-hyperscale-portal.md#create-a-hyperscale-citus-server-group)
+- [Create an Azure Database for PostgreSQL - Hyperscale (Citus)](https://docs.microsoft.com/azure/postgresql/quickstart-create-hyperscale-portal#create-a-hyperscale-citus-server-group)
-- [Create additional user roles](./howto-hyperscale-create-users.md#how-to-create-additional-user-roles)
+- [Create additional user roles](https://docs.microsoft.com/azure/postgresql/howto-hyperscale-create-users#how-to-create-additional-user-roles)
+**Responsibility**: Customer
-**Azure Security Center monitoring**: Currently not available
-
-**Responsibility**: Currently not available
+**Azure Security Center monitoring**: None
### 7.13: Eliminate unintended credential exposure
Also, Azure Resource Manager has the ability to export the template in JavaScrip
- [How to setup Credential Scanner](https://secdevtools.azurewebsites.net/helpcredscan.html)
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-## Malware defense
+**Azure Security Center monitoring**: None
+
+## Malware Defense
-*For more information, see [Security control: Malware defense](../security/benchmarks/security-control-malware-defense.md).*
+*For more information, see the [Azure Security Benchmark: Malware Defense](../security/benchmarks/security-control-malware-defense.md).*
### 8.2: Pre-scan files to be uploaded to non-compute Azure resources
Also, Azure Resource Manager has the ability to export the template in JavaScrip
Pre-scan any content being uploaded to non-compute Azure resources, such as App Service, Data Lake Storage, Blob Storage, Azure Database for PostgreSQL, etc. Microsoft cannot access your data in these instances.
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-## Data recovery
+**Azure Security Center monitoring**: None
+
+## Data Recovery
-*For more information, see [Security control: Data recovery](../security/benchmarks/security-control-data-recovery.md).*
+*For more information