Updates from: 01/15/2021 04:09:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-identity-provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-identity-provider.md
@@ -6,7 +6,7 @@ author: msmimart
manager: celestedg ms.author: mimart
-ms.date: 01/08/2021
+ms.date: 01/14/2021
ms.custom: mvc ms.topic: how-to ms.service: active-directory
@@ -33,6 +33,7 @@ You typically use only one identity provider in your applications, but you have
* [Amazon](identity-provider-amazon.md) * [Azure AD (Single-tenant)](identity-provider-azure-ad-single-tenant.md) * [Azure AD (Multi-tenant)](identity-provider-azure-ad-multi-tenant.md)
+* [Azure AD B2C](identity-provider-azure-ad-b2c.md)
* [Facebook](identity-provider-facebook.md) * [Generic identity provider](identity-provider-generic-openid-connect.md) * [GitHub](identity-provider-github.md)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-sign-in-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-in-policy.md new file mode 100644
@@ -0,0 +1,111 @@
+---
+title: Set up a sign-in flow
+titleSuffix: Azure Active Directory B2C
+description: Learn how to set up a sign-in flow in Azure Active Directory B2C.
+services: active-directory-b2c
+author: msmimart
+manager: celestedg
+
+ms.service: active-directory
+ms.workload: identity
+ms.topic: how-to
+ms.date: 01/12/2021
+ms.author: mimart
+ms.subservice: B2C
+zone_pivot_groups: b2c-policy-type
+---
+
+# Set up a sign-in flow in Azure Active Directory B2C
+
+[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+
+## Sign-in flow overview
+
+The sign-in policy lets users:
+
+* Sign in with an Azure AD B2C local account
+* Sign up or sign in with a social account
+* Reset their password
+
+Users can't sign up for an Azure AD B2C local account through this flow. To create an account, an administrator can use the [Microsoft Graph API](manage-user-accounts-graph-api.md).
+
+![Sign-in flow](./media/add-sign-in-policy/sign-in-user-flow.png)
+
+## Prerequisites
+
+If you haven't already done so, [register a web application in Azure Active Directory B2C](tutorial-register-applications.md).
+
+::: zone pivot="b2c-user-flow"
+
+## Create a sign-in user flow
+
+To add a sign-in policy:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Under **Policies**, select **User flows**, and then select **New user flow**.
+1. On the **Create a user flow** page, select the **Sign in** user flow.
+1. Under **Select a version**, select **Recommended**, and then select **Create**. ([Learn more](user-flow-versions.md) about user flow versions.)
+1. Enter a **Name** for the user flow. For example, *signupsignin1*.
+1. For **Identity providers**, select **Email sign-in**.
+1. For **Application claims**, choose the claims and attributes that you want to send to your application. For example, select **Show more**, and then choose attributes and claims for **Display Name**, **Given Name**, **Surname**, and **User's Object ID**. Click **OK**.
+1. Click **Create** to add the user flow. A prefix of *B2C_1* is automatically prepended to the name.
+
+### Test the user flow
+
+1. Select the user flow you created to open its overview page, then select **Run user flow**.
+1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+1. You should be able to sign in with the account that you created (using the Microsoft Graph API), without the sign-up link. The returned token includes the claims that you selected.
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+## Remove the sign-up link
+
+The **SelfAsserted-LocalAccountSignin-Email** technical profile is a [self-asserted technical profile](self-asserted-technical-profile.md) that is invoked during the sign-up or sign-in flow. To remove the sign-up link, set the `setting.showSignupLink` metadata item to `false`. Override the SelfAsserted-LocalAccountSignin-Email technical profile in the extension file.
+
+1. Open the extensions file of your policy. For example, _`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**_.
+1. Find the `ClaimsProviders` element. If the element doesn't exist, add it.
+1. Add the following claims provider to the `ClaimsProviders` element:
+
+ ```xml
+ <ClaimsProvider>
+ <DisplayName>Local Account</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+ <Metadata>
+ <Item Key="setting.showSignupLink">false</Item>
+ </Metadata>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+1. Within the `<BuildingBlocks>` element, add the following [ContentDefinition](contentdefinitions.md) to reference version 1.2.0 or a newer data URI:
+
+ ```XML
+ <ContentDefinitions>
+     <ContentDefinition Id="api.signuporsignin">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0</DataUri>
+ </ContentDefinition>
+ </ContentDefinitions>
+ ```
+
+## Update and test your policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD B2C tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**.
+1. Select **Identity Experience Framework**.
+1. Select **Upload Custom Policy**, and then upload the two policy files that you changed.
+1. Select the sign-in policy that you uploaded, and click the **Run now** button.
+1. You should be able to sign in with the account that you created (using the Microsoft Graph API), without the sign-up link.
+
+::: zone-end
+
+## Next steps
+
+* Add a [sign-in with social identity provider](add-identity-provider.md).
+* Set up a [password reset flow](add-password-reset-policy.md).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md new file mode 100644
@@ -0,0 +1,269 @@
+---
+title: Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant
+titleSuffix: Azure AD B2C
+description: Provide sign-up and sign-in to customers with Azure AD B2C accounts from another tenant in your applications using Azure Active Directory B2C.
+services: active-directory-b2c
+author: msmimart
+manager: celestedg
+
+ms.service: active-directory
+ms.workload: identity
+ms.topic: how-to
+ms.date: 01/14/2021
+ms.author: mimart
+ms.subservice: B2C
+ms.custom: fasttrack-edit, project-no-code
+zone_pivot_groups: b2c-policy-type
+---
+
+# Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant
+
+[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+
+::: zone pivot="b2c-custom-policy"
+
+[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
+
+::: zone-end
+
+## Overview
+
+This article describes how to set up federation with another Azure AD B2C tenant. When your applications are protected with your Azure AD B2C tenant, users from other Azure AD B2C tenants can sign in with their existing accounts. In the following diagram, users can sign in to an application protected by *Contoso*'s Azure AD B2C tenant with an account managed by *Fabrikam*'s Azure AD B2C tenant.
+
+![Azure AD B2C federation with another Azure AD B2C tenant](./media/identity-provider-azure-ad-b2c/azure-ad-b2c-federation.png)
+
+## Prerequisites
+
+[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+
+## Create an Azure AD B2C application
+
+To use an Azure AD B2C account as an [identity provider](openid-connect.md) in your Azure AD B2C tenant (for example, Contoso), do the following in the other Azure AD B2C tenant (for example, Fabrikam):
+
+1. Create a [user flow](tutorial-create-user-flows.md) or a [custom policy](custom-policy-get-started.md).
+1. Create an application in that Azure AD B2C tenant, as described in this section.
+
+To create an application:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your other Azure AD B2C tenant (for example, Fabrikam.com).
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Enter a **Name** for the application. For example, *ContosoApp*.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web**, and then enter the following URL in all lowercase letters, where `your-B2C-tenant-name` is replaced with the name of your Azure AD B2C tenant (for example, Contoso).
+
+ ```
+ https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp
+ ```
+
+ For example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+
+1. Under **Permissions**, select the **Grant admin consent to openid and offline_access permissions** check box.
+1. Select **Register**.
+1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *ContosoApp*.
+1. Record the **Application (client) ID** shown on the application Overview page. You need this when you configure the identity provider in the next section.
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value**. You need this when you configure the identity provider in the next section.
+
+::: zone pivot="b2c-user-flow"
+
+## Configure Azure AD B2C as an identity provider
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains the Azure AD B2C tenant in which you want to configure the federation (for example, Contoso). Select the **Directory + subscription** filter in the top menu, and then choose the directory that contains your Azure AD B2C tenant (for example, Contoso).
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Identity providers**, and then select **New OpenID Connect provider**.
+1. Enter a **Name**. For example, enter *Fabrikam*.
+1. For **Metadata url**, enter the following URL, replacing `{tenant}` with the domain name of the other Azure AD B2C tenant (for example, Fabrikam) and `{policy}` with the name of the policy you configured in the other tenant:
+
+ ```
+ https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/v2.0/.well-known/openid-configuration
+ ```
+
+ For example, `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/B2C_1_susi/v2.0/.well-known/openid-configuration`.
+
+1. For **Client ID**, enter the application ID that you previously recorded.
+1. For **Client secret**, enter the client secret that you previously recorded.
+1. For **Scope**, enter `openid`.
+1. Leave the default values for **Response type** and **Response mode**.
+1. (Optional) For the **Domain hint**, enter the domain name you want to use for the [direct sign-in](direct-signin.md#redirect-sign-in-to-a-social-provider). For example, *fabrikam.com*.
+1. Under **Identity provider claims mapping**, select the following claims:
+
+ - **User ID**: *sub*
+ - **Display name**: *name*
+ - **Given name**: *given_name*
+ - **Surname**: *family_name*
+ - **Email**: *email*
+
+1. Select **Save**.
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+## Create a policy key
+
+You need to store the client secret that you created earlier in your Azure AD B2C tenant.
+
+1. Make sure you're using the directory that contains the Azure AD B2C tenant in which you want to configure the federation (for example, Contoso). Select the **Directory + subscription** filter in the top menu, and then choose the directory that contains your Azure AD B2C tenant (for example, Contoso).
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Under **Policies**, select **Identity Experience Framework**.
+1. Select **Policy keys** and then select **Add**.
+1. For **Options**, choose `Manual`.
+1. Enter a **Name** for the policy key. For example, `FabrikamAppSecret`. The prefix `B2C_1A_` is added automatically to the name of your key when it's created, so its reference in the XML in the following section is to *B2C_1A_FabrikamAppSecret*.
+1. In **Secret**, enter your client secret that you recorded earlier.
+1. For **Key usage**, select `Signature`.
+1. Select **Create**.
+
+## Add a claims provider
+
+If you want users to sign in by using the other Azure AD B2C tenant (Fabrikam), you need to define that tenant as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that Azure AD B2C uses to verify that a specific user has authenticated.
+
+You can define the other Azure AD B2C tenant as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
+
+1. Open the *TrustFrameworkExtensions.xml* file.
+1. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
+1. Add a new **ClaimsProvider** as follows:
+ ```xml
+ <ClaimsProvider>
+ <Domain>fabrikam.com</Domain>
+ <DisplayName>Federation with Fabrikam tenant</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Fabrikam-OpenIdConnect">
+ <DisplayName>Fabrikam</DisplayName>
+ <Protocol Name="OpenIdConnect"/>
+ <Metadata>
+ <!-- Update the Client ID below to the Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <!-- Update the metadata URL with the other Azure AD B2C tenant name and policy name -->
+ <Item Key="METADATA">https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/v2.0/.well-known/openid-configuration</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_FabrikamAppSecret"/>
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="otherMails" PartnerClaimType="emails"/>
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin"/>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+1. Update the following XML elements with the relevant value:
+
+ |XML element |Value |
+ |---------|---------|
+ |ClaimsProvider\Domain | The domain name that is used for [direct sign-in](direct-signin.md?pivots=b2c-custom-policy#redirect-sign-in-to-a-social-provider). Enter the domain name you want to use in the direct sign-in. For example, *fabrikam.com*. |
+ |TechnicalProfile\DisplayName|This value will be displayed on the sign-in button on your sign-in screen. For example, *Fabrikam*. |
+ |Metadata\client_id|The application identifier of the identity provider. Update the Client ID with the Application ID you created earlier in the other Azure AD B2C tenant.|
+    |Metadata\METADATA|A URL that points to an OpenID Connect identity provider configuration document, which is also known as the OpenID well-known configuration endpoint. Enter the following URL, replacing `{tenant}` with the domain name of the other Azure AD B2C tenant (Fabrikam) and `{policy}` with the name of the policy you configured in the other tenant: `https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/v2.0/.well-known/openid-configuration`. For example, `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/B2C_1_susi/v2.0/.well-known/openid-configuration`.|
+ |CryptographicKeys| Update the value of **StorageReferenceId** to the name of the policy key that you created earlier. For example, `B2C_1A_FabrikamAppSecret`.|
+
+
+### Upload the extension file for verification
+
+By now, you have configured your policy so that Azure AD B2C knows how to communicate with the other Azure AD B2C tenant. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
+
+1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
+1. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
+1. Click **Upload**.
+
+## Register the claims provider
+
+At this point, the identity provider has been set up, but it's not yet available in any of the sign-up/sign-in pages. To make it available, create a duplicate of an existing template user journey, and then modify it so that it also has the Azure AD B2C identity provider:
+
+1. Open the *TrustFrameworkBase.xml* file from the starter pack.
+1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+1. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
+1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
+1. Rename the ID of the user journey. For example, `SignUpSignInFabrikam`.
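+
+    A minimal sketch of the result, with the orchestration steps abbreviated (only the `Id` value differs from the journey you copied):
+
+    ```xml
+    <UserJourneys>
+      <UserJourney Id="SignUpSignInFabrikam">
+        <OrchestrationSteps>
+          <!-- Orchestration steps copied verbatim from the SignUpOrSignIn journey in TrustFrameworkBase.xml -->
+        </OrchestrationSteps>
+      </UserJourney>
+    </UserJourneys>
+    ```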
+
+### Display the button
+
+The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in page. If you add a **ClaimsProviderSelection** element for Azure AD B2C, a new button shows up when a user lands on the page.
+
+1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created in *TrustFrameworkExtensions.xml*.
+1. Under **ClaimsProviderSelections**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `FabrikamExchange`:
+
+ ```xml
+ <ClaimsProviderSelection TargetClaimsExchangeId="FabrikamExchange" />
+ ```
+
+### Link the button to an action
+
+Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with the other Azure AD B2C tenant to receive a token. Link the button to an action by linking the technical profile for the Azure AD B2C claims provider:
+
+1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
+1. Add the following **ClaimsExchange** element making sure that you use the same value for **Id** that you used for **TargetClaimsExchangeId**:
+
+ ```xml
+ <ClaimsExchange Id="FabrikamExchange" TechnicalProfileReferenceId="Fabrikam-OpenIdConnect" />
+ ```
+
+ Update the value of **TechnicalProfileReferenceId** to the **Id** of the technical profile you created earlier. For example, `Fabrikam-OpenIdConnect`.
+
+1. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
+
+::: zone-end
+
+::: zone pivot="b2c-user-flow"
+
+## Add Azure AD B2C identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow to which you want to add the Azure AD B2C identity provider.
+1. Under **Social identity providers**, select **Fabrikam**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+1. From the sign-up or sign-in page, select *Fabrikam* to sign in with the other Azure AD B2C tenant.
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+## Update and test the relying party file
+
+Update the relying party (RP) file that initiates the user journey that you created.
+
+1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInFabrikam.xml*.
+1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInFabrikam`.
+1. Update the value of **PublicPolicyUri** with the URI for the policy. For example, `http://contoso.com/B2C_1A_signup_signin_fabrikam`.
+1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the user journey that you created earlier. For example, *SignUpSignInFabrikam*.
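+
+    As a sketch, assuming the example values from the preceding steps (all other attributes and elements stay as they are in *SignUpOrSignIn.xml*):
+
+    ```xml
+    <TrustFrameworkPolicy PolicyId="SignUpSignInFabrikam"
+                          PublicPolicyUri="http://contoso.com/B2C_1A_signup_signin_fabrikam">
+      <RelyingParty>
+        <!-- Point the relying party at the user journey defined in the extension file -->
+        <DefaultUserJourney ReferenceId="SignUpSignInFabrikam" />
+      </RelyingParty>
+    </TrustFrameworkPolicy>
+    ```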
+1. Save your changes and upload the file.
+1. Under **Custom policies**, select the new policy in the list.
+1. In the **Select application** drop-down, select the Azure AD B2C application that you created earlier. For example, *testapp1*.
+1. Select **Run now**.
+1. From the sign-up or sign-in page, select *Fabrikam* to sign in with the other Azure AD B2C tenant.
+
+::: zone-end
+
+## Next steps
+
+Learn how to [pass the other Azure AD B2C token to your application](idp-pass-through-user-flow.md).
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-forest-trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-create-forest-trust.md
@@ -71,8 +71,8 @@ Before you configure a forest trust in Azure AD DS, make sure your networking be
To correctly resolve the managed domain from the on-premises environment, you may need to add forwarders to the existing DNS servers. If you haven't configured the on-premises environment to communicate with the managed domain, complete the following steps from a management workstation for the on-premises AD DS domain:
-1. Select **Start | Administrative Tools | DNS**
-1. Right-select DNS server, such as *myAD01*, then select **Properties**
+1. Select **Start** > **Administrative Tools** > **DNS**.
+1. Right-select DNS server, such as *myAD01*, then select **Properties**.
1. Choose **Forwarders**, then **Edit** to add additional forwarders. 1. Add the IP addresses of the managed domain, such as *10.0.2.4* and *10.0.2.5*.
@@ -82,15 +82,15 @@ The on-premises AD DS domain needs an incoming forest trust for the managed doma
To configure inbound trust on the on-premises AD DS domain, complete the following steps from a management workstation for the on-premises AD DS domain:
-1. Select **Start | Administrative Tools | Active Directory Domains and Trusts**
-1. Right-select domain, such as *onprem.contoso.com*, then select **Properties**
-1. Choose **Trusts** tab, then **New Trust**
-1. Enter the name for Azure AD DS domain name, such as *aaddscontoso.com*, then select **Next**
+1. Select **Start** > **Administrative Tools** > **Active Directory Domains and Trusts**.
+1. Right-select domain, such as *onprem.contoso.com*, then select **Properties**.
+1. Choose the **Trusts** tab, then **New Trust**.
+1. Enter the name of the Azure AD DS domain, such as *aaddscontoso.com*, then select **Next**.
1. Select the option to create a **Forest trust**, then to create a **One way: incoming** trust. 1. Choose to create the trust for **This domain only**. In the next step, you create the trust in the Azure portal for the managed domain. 1. Choose to use **Forest-wide authentication**, then enter and confirm a trust password. This same password is also entered in the Azure portal in the next section. 1. Step through the next few windows with default options, then choose the option for **No, do not confirm the outgoing trust**.
-1. Select **Finish**
+1. Select **Finish**.
## Create outbound forest trust in Azure AD DS
@@ -98,16 +98,16 @@ With the on-premises AD DS domain configured to resolve the managed domain and a
To create the outbound trust for the managed domain in the Azure portal, complete the following steps:
-1. In the Azure portal, search for and select **Azure AD Domain Services**, then select your managed domain, such as *aaddscontoso.com*
+1. In the Azure portal, search for and select **Azure AD Domain Services**, then select your managed domain, such as *aaddscontoso.com*.
1. From the menu on the left-hand side of the managed domain, select **Trusts**, then choose to **+ Add** a trust. > [!NOTE] > If you don't see the **Trusts** menu option, check under **Properties** for the *Forest type*. Only *resource* forests can create trusts. If the forest type is *User*, you can't create trusts. There's currently no way to change the forest type of a managed domain. You need to delete and recreate the managed domain as a resource forest.
-1. Enter a display name that identifies your trust, then the on-premises trusted forest DNS name, such as *onprem.contoso.com*
+1. Enter a display name that identifies your trust, then the on-premises trusted forest DNS name, such as *onprem.contoso.com*.
1. Provide the same trust password that was used when configuring the inbound forest trust for the on-premises AD DS domain in the previous section.
-1. Provide at least two DNS servers for the on-premises AD DS domain, such as *10.1.1.4* and *10.1.1.5*
-1. When ready, **Save** the outbound forest trust
+1. Provide at least two DNS servers for the on-premises AD DS domain, such as *10.1.1.4* and *10.1.1.5*.
+1. When ready, **Save** the outbound forest trust.
![Create outbound forest trust in the Azure portal](./media/tutorial-create-forest-trust/portal-create-outbound-trust.png)
@@ -179,7 +179,7 @@ Using the Windows Server VM joined to the Azure AD DS resource forest, you can t
1. In the *Permissions for CrossForestShare* dialog box, select **Add**. 1. Type *FileServerAccess* in **Enter the object names to select**, then select **OK**. 1. Select *FileServerAccess* from the **Groups or user names** list. In the **Permissions for FileServerAccess** list, choose *Allow* for the **Modify** and **Write** permissions, then select **OK**.
-1. Select the **Sharing** tab, then choose **Advanced Sharing…**
+1. Select the **Sharing** tab, then choose **Advanced Sharing…**.
1. Choose **Share this folder**, then enter a memorable name for the file share in **Share name** such as *CrossForestShare*. 1. Select **Permissions**. In the **Permissions for Everyone** list, choose **Allow** for the **Change** permission. 1. Select **OK** two times and then **Close**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-data-residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-data-residency.md
@@ -1,12 +1,12 @@
---
-title: Azure AD Multi-Factor Authentication data residency
-description: Learn what personal and organizational data Azure AD Multi-Factor Authentication stores about you and your users and what data remains within the country/region of origin.
+title: Azure AD Multifactor Authentication data residency
+description: Learn what personal and organizational data Azure AD Multifactor Authentication stores about you and your users and what data remains within the country/region of origin.
services: multi-factor-authentication ms.service: active-directory ms.subservice: authentication ms.topic: conceptual
-ms.date: 12/11/2020
+ms.date: 01/14/2021
ms.author: justinha author: justinha
@@ -15,92 +15,109 @@ ms.reviewer: inbarc
ms.collection: M365-identity-device-management ---
-# Data residency and customer data for Azure AD Multi-Factor Authentication
+# Data residency and customer data for Azure AD Multifactor Authentication
Customer data is stored by Azure AD in a geographical location based on the address provided by your organization when subscribing for a Microsoft Online service such as Microsoft 365 and Azure. For information on where your customer data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
-Cloud-based Azure AD Multi-Factor Authentication and Azure Multi-Factor Authentication Server process and store some amount of personal data and organizational data. This article outlines what and where data is stored.
+Cloud-based Azure AD Multifactor Authentication and Azure AD Multifactor Authentication Server process and store some amount of personal data and organizational data. This article outlines what and where data is stored.
-The Azure AD Multi-Factor Authentication service has datacenters in the US, Europe, and Asia Pacific. The following activities originate out of the regional datacenters except where noted:
+The Azure AD Multifactor Authentication service has datacenters in the US, Europe, and Asia Pacific. The following activities originate out of the regional datacenters except where noted:
-* Multi-factor authentication using phone calls originate from US datacenters and are routed by global providers.
+* Multifactor authentication using phone calls originates from US datacenters and is routed by global providers.
* General purpose user authentication requests from other regions such as Europe or Australia are currently processed based on the user's location. * Push notifications using the Microsoft Authenticator app are currently processed in the regional datacenters based on the user's location. * Device vendor-specific services, such as Apple Push Notifications, may be outside the user's location.
-## Personal data stored by Azure AD Multi-Factor Authentication
+## Personal data stored by Azure AD Multifactor Authentication
Personal data is user-level information associated with a specific person. The following data stores contain personal information: * Blocked users * Bypassed users * Microsoft Authenticator device token change requests
-* Multi-Factor Authentication activity reports
+* Multifactor Authentication activity reports
* Microsoft Authenticator activations This information is retained for 90 days.
-Azure AD Multi-Factor Authentication doesn't log personal data such as username, phone number, or IP address, but there is a *UserObjectId* that identifies Multi-Factor Authentication attempts to users. Log data is stored for 30 days.
+Azure AD Multifactor Authentication doesn't log personal data such as username, phone number, or IP address, but there is a *UserObjectId* that identifies Multifactor Authentication attempts to users. Log data is stored for 30 days.
-### Azure AD Multi-Factor Authentication
+### Azure AD Multifactor Authentication
For Azure public clouds, excluding Azure B2C authentication, NPS Extension, and Windows Server 2016 or 2019 AD FS Adapter, the following personal data is stored: | Event type | Data store type | |--------------------------------------|-----------------|
-| OATH token | In Multi-Factor Authentication logs |
-| One-way SMS | In Multi-Factor Authentication logs |
-| Voice call | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
-
-> [!NOTE]
-> The Multi-Factor Authentication activity report data store is stored in the United States for all clouds, regardless of the region that processes the authentication request. Microsoft Azure Germany, Microsoft Azure Operated by 21Vianet, and Microsoft Government Cloud have their own independent data stores separate from public cloud region data stores, however this data is always stored in the United States. These data stores contain personally identifiable information (PII) such as user principal name (UPN) and complete phone number.
+| OATH token | In Multifactor Authentication logs |
+| One-way SMS | In Multifactor Authentication logs |
+| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
+| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
For Microsoft Azure Government, Microsoft Azure Germany, Microsoft Azure Operated by 21Vianet, Azure B2C authentication, NPS Extension, and Windows Server 2016 or 2019 AD FS Adapter, the following personal data is stored: | Event type | Data store type | |--------------------------------------|-----------------|
-| OATH token | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store |
-| One-way SMS | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store |
-| Voice call | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
+| OATH token | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
+| One-way SMS | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
+| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
+| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
-### Multi-Factor Authentication Server
+### Multifactor Authentication Server
-If you deploy and run Azure Multi-Factor Authentication Server, the following personal data is stored:
+If you deploy and run Azure AD Multifactor Authentication Server, the following personal data is stored:
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft will no longer offer Multi-Factor Authentication Server for new deployments. New customers who would like to require multi-factor authentication from their users should use cloud-based Azure AD Multi-Factor Authentication. Existing customers who have activated Multi-Factor Authentication Server prior to July 1 will be able to download the latest version, future updates and generate activation credentials as usual.
+> As of July 1, 2019, Microsoft will no longer offer Multifactor Authentication Server for new deployments. New customers who would like to require multifactor authentication from their users should use cloud-based Azure AD Multifactor Authentication. Existing customers who have activated Multifactor Authentication Server prior to July 1 will be able to download the latest version, future updates and generate activation credentials as usual.
| Event type | Data store type | |--------------------------------------|-----------------|
-| OATH token | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store |
-| One-way SMS | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store |
-| Voice call | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported |
-| Microsoft Authenticator notification | In Multi-Factor Authentication logs<br />Multi-Factor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
+| OATH token | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
+| One-way SMS | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store |
+| Voice call | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported |
+| Microsoft Authenticator notification | In Multifactor Authentication logs<br />Multifactor Authentication activity report data store<br />Blocked users if fraud reported<br />Change requests when Microsoft Authenticator device token changes |
-## Organizational data stored by Azure AD Multi-Factor Authentication
+## Organizational data stored by Azure AD Multifactor Authentication
-Organizational data is tenant-level information that could expose configuration or environment setup. Tenant settings from the following Azure portal Multi-Factor Authentication pages may store organizational data such as lockout thresholds or caller ID information for incoming phone authentication requests:
+Organizational data is tenant-level information that could expose configuration or environment setup. Tenant settings from the following Azure portal Multifactor Authentication pages may store organizational data such as lockout thresholds or caller ID information for incoming phone authentication requests:
* Account lockout * Fraud alert * Notifications * Phone call settings
-And for Azure Multi-Factor Authentication Server, the following Azure portal pages may contain organizational data:
+And for Azure AD Multifactor Authentication Server, the following Azure portal pages may contain organizational data:
* Server settings * One-time bypass * Caching rules
-* Multi-Factor Authentication Server status
+* Multifactor Authentication Server status
+
+## Multifactor authentication logs location
+
+The following table shows the location for service logs for public clouds.
+
+| Public cloud| Sign-in logs | Multifactor Authentication activity report | Multifactor Authentication service logs |
+|-------------|--------------|----------------------------------------|------------------------|
+| US | US | US | US |
+| Europe | Europe | US | Europe <sup>2</sup> |
+| Australia | Australia | US<sup>1</sup> | Australia <sup>2</sup> |
+
+<sup>1</sup> OATH code logs are stored in Australia.
+
+<sup>2</sup> Multifactor authentication service logs for voice calls are stored in the US.
+
+The following table shows the location for service logs for sovereign clouds.
-## Log data location
+| Sovereign cloud | Sign-in logs | Multifactor authentication activity report (includes personal data)| Multifactor authentication service logs |
+|--------------------------------------|--------------------------------------|-------------------------------|------------------|
+| Microsoft Azure Germany | Germany | US | US |
+| Microsoft Azure Operated by 21Vianet | China | US | US |
+| Microsoft Government Cloud | US | US | US |
-Where log information is stored depends on which region they're processed in. Most geographies have native Azure AD Multi-Factor Authentication capabilities, so log data is stored in the same region that processes the Multi-Factor Authentication request. In geographies without native Azure AD Multi-Factor Authentication support, they're serviced by either the United States or Europe geographies and log data is stored in the same region that processes the Multi-Factor Authentication request.
+The Multifactor Authentication activity report data contains personal data such as user principal name (UPN) and complete phone number.
-Some core authentication log data is only stored in the United States. Microsoft Azure Germany and Microsoft Azure Operated by 21Vianet are always stored in their respective cloud. Microsoft Government Cloud log data is always stored in the United States.
+The Multifactor Authentication service logs are used to operate the service.
## Next steps
-For more information about what user information is collected by cloud-based Azure AD Multi-Factor Authentication and Azure Multi-Factor Authentication Server, see [Azure AD Multi-Factor Authentication user data collection](howto-mfa-reporting-datacollection.md).
+For more information about what user information is collected by cloud-based Azure AD Multifactor Authentication and Azure AD Multifactor Authentication Server, see [Azure AD Multifactor Authentication user data collection](howto-mfa-reporting-datacollection.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-ban-bad-on-premises-deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
@@ -96,7 +96,7 @@ The following core requirements apply:
The following requirements apply to the Azure AD Password Protection DC agent:
-* All machines where the Azure AD Password Protection DC agent software will be installed must run Windows Server 2012 or later.
+* All machines where the Azure AD Password Protection DC agent software will be installed must run Windows Server 2012 or later, including Windows Server Core editions.
* The Active Directory domain or forest doesn't need to be at Windows Server 2012 domain functional level (DFL) or forest functional level (FFL). As mentioned in [Design Principles](concept-password-ban-bad-on-premises.md#design-principles), there's no minimum DFL or FFL required for either the DC agent or proxy software to run. * All machines that run the Azure AD Password Protection DC agent must have .NET 4.5 installed. * Any Active Directory domain that runs the Azure AD Password Protection DC agent service must use Distributed File System Replication (DFSR) for sysvol replication.
@@ -113,7 +113,7 @@ The following requirements apply to the Azure AD Password Protection DC agent:
The following requirements apply to the Azure AD Password Protection proxy service:
-* All machines where the Azure AD Password Protection proxy service will be installed must run Windows Server 2012 R2 or later.
+* All machines where the Azure AD Password Protection proxy service will be installed must run Windows Server 2012 R2 or later, including Windows Server Core editions.
> [!NOTE] > The Azure AD Password Protection proxy service deployment is a mandatory requirement for deploying Azure AD Password Protection even though the domain controller may have outbound direct internet connectivity.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/azuread-dev/active-directory-acs-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/azuread-dev/active-directory-acs-migration.md
@@ -283,7 +283,7 @@ In these cases, you might want to consider migrating your web application to ano
![This image shows the Ping Identity logo](./media/active-directory-acs-migration/rsz-ping.png)
-[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on premises identity product that offers more flexibility. Refer to [Ping's ACS retirement guidance](https://www.pingidentity.com/en/company/blog/posts/2017/migrating-from-microsoft-acs-to-ping-identity.html) for more details on using these products.
+[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on-premises identity product that offers more flexibility. Refer to Ping's ACS retirement guidance for more details on using these products.
Our aim in working with Ping Identity and Auth0 is to ensure that all Access Control customers have a migration path for their apps and services that minimizes the amount of work required to move from Access Control.
@@ -347,7 +347,7 @@ In these cases, you might consider migrating your web application to another clo
[Auth0](https://auth0.com/acs) is a flexible cloud identity service that has created [high-level migration guidance for customers of Access Control](https://auth0.com/acs), and supports nearly every feature that ACS does. ![This image shows the Ping Identity logo](./media/active-directory-acs-migration/rsz-ping.png)
-[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on premises identity product that offers more flexibility. Refer to [Ping's ACS retirement guidance](https://www.pingidentity.com/en/company/blog/posts/2017/migrating-from-microsoft-acs-to-ping-identity.html) for more details on using these products.
+[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on-premises identity product that offers more flexibility. Refer to Ping's ACS retirement guidance for more details on using these products.
Our aim in working with Ping Identity and Auth0 is to ensure that all Access Control customers have a migration path for their apps and services that minimizes the amount of work required to move from Access Control.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-grant.md
@@ -136,7 +136,7 @@ This setting applies to the following client apps:
- Nine Mail - Email & Calendar > [!NOTE]
-> Microsoft Kaizala, Microsoft Skype for Business and Microsoft Visio do not support the **Require app protection policy** grant. If you require these apps to work, please use the **Require approved apps** grant exclusively. The use of the or clause between the two grants will not work for these three applications.
+> Microsoft Teams, Microsoft Kaizala, Microsoft Skype for Business, and Microsoft Visio do not support the **Require app protection policy** grant. If you require these apps to work, please use the **Require approved apps** grant exclusively. The use of the or clause between the two grants will not work for these applications.
**Remarks**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-vs-authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authentication-vs-authorization.md
@@ -1,7 +1,7 @@
---
-title: Authentication vs authorization | Azure
+title: Authentication vs. authorization | Azure
titleSuffix: Microsoft identity platform
-description: Learn about the basics of authentication and authorization in Microsoft identity platform (v2.0).
+description: Learn about the basics of authentication and authorization in the Microsoft identity platform (v2.0).
services: active-directory author: rwike77 manager: CelesteDG
@@ -14,45 +14,46 @@ ms.date: 05/22/2020
ms.author: ryanwi ms.reviewer: jmprieur, saeeda, sureshja, hirsin ms.custom: aaddev, identityplatformtop40, scenarios:getting-started
-#Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in Microsoft identity platform
+#Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
---
-# Authentication vs authorization
+# Authentication vs. authorization
-This article defines authentication and authorization and briefly covers how you can use the Microsoft identity platform to authenticate and authorize users in your web apps, web APIs, or apps calling protected web APIs. If you see a term you aren't familiar with, try our [glossary](developer-glossary.md) or our [Microsoft identity platform videos](identity-videos.md) which cover basic concepts.
+This article defines authentication and authorization. It also briefly covers how you can use the Microsoft identity platform to authenticate and authorize users in your web apps, web APIs, or apps that call protected web APIs. If you see a term you aren't familiar with, try our [glossary](developer-glossary.md) or our [Microsoft identity platform videos](identity-videos.md), which cover basic concepts.
## Authentication
-**Authentication** is the process of proving you are who you say you are. Authentication is sometimes shortened to AuthN. Microsoft identity platform implements the [OpenID Connect](https://openid.net/connect/) protocol for handling authentication.
+*Authentication* is the process of proving that you are who you say you are. It's sometimes shortened to *AuthN*. The Microsoft identity platform uses the [OpenID Connect](https://openid.net/connect/) protocol for handling authentication.
## Authorization
-**Authorization** is the act of granting an authenticated party permission to do something. It specifies what data you're allowed to access and what you can do with that data. Authorization is sometimes shortened to AuthZ. Microsoft identity platform implements the [OAuth 2.0](https://oauth.net/2/) protocol for handling authorization.
+*Authorization* is the act of granting an authenticated party permission to do something. It specifies what data you're allowed to access and what you can do with that data. Authorization is sometimes shortened to *AuthZ*. The Microsoft identity platform uses the [OAuth 2.0](https://oauth.net/2/) protocol for handling authorization.
-## Authentication and authorization using Microsoft identity platform
+## Authentication and authorization using the Microsoft identity platform
-Instead of creating apps that each maintain their own username and password information, which incurs a high administrative burden when you need to add or remove users across multiple apps, apps can delegate that responsibility to a centralized identity provider.
+Creating apps that each maintain their own username and password information incurs a high administrative burden when you need to add or remove users across multiple apps. Instead, your apps can delegate that responsibility to a centralized identity provider.
-> [!VIDEO https://www.youtube.com/embed/tkQJSHFsduY]
+Azure Active Directory (Azure AD) is a centralized identity provider in the cloud. Delegating authentication and authorization to it enables scenarios such as:
-Azure Active Directory (Azure AD) is a centralized identity provider in the cloud. Delegating authentication and authorization to it enables scenarios such as Conditional Access policies that require a user to be in a specific location, the use of [multi-factor authentication](../authentication/concept-mfa-howitworks.md) (sometimes referred to as two-factor authentication or 2FA), as well as enabling a user to sign in once and then be automatically signed in to all of the web apps that share the same centralized directory. This capability is referred to as **Single Sign On (SSO)**.
+- Conditional Access policies that require a user to be in a specific location.
+- The use of [multi-factor authentication](../authentication/concept-mfa-howitworks.md), which is sometimes called two-factor authentication or 2FA.
+- Enabling a user to sign in once and then be automatically signed in to all of the web apps that share the same centralized directory. This capability is called *single sign-on (SSO)*.
-Microsoft identity platform simplifies authorization and authentication for application developers by providing identity as a service, with support for industry-standard protocols such as OAuth 2.0 and OpenID Connect, as well as open-source libraries for different platforms to help you start coding quickly. It allows developers to build applications that sign in all Microsoft identities, get tokens to call [Microsoft Graph](https://developer.microsoft.com/graph/), other Microsoft APIs, or APIs that developers have built.
+The Microsoft identity platform simplifies authorization and authentication for application developers by providing identity as a service. It supports industry-standard protocols and open-source libraries for different platforms to help you start coding quickly. It allows developers to build applications that sign in all Microsoft identities, get tokens to call [Microsoft Graph](https://developer.microsoft.com/graph/), access Microsoft APIs, or access other APIs that developers have built.
-Following is a brief comparison of the various protocols used by Microsoft identity platform:
+This video explains the Microsoft identity platform and the basics of modern authentication:
-* **OAuth vs OpenID Connect**: OAuth is used for authorization and OpenID Connect (OIDC) is used for authentication. OpenID Connect is built on top of OAuth 2.0, so the terminology and flow are similar between the two. You can even both authenticate a user (using OpenID Connect) and get authorization to access a protected resource that the user owns (using OAuth 2.0) in one request. For more information, see [OAuth 2.0 and OpenID Connect protocols](active-directory-v2-protocols.md) and [OpenID Connect protocol](v2-protocols-oidc.md).
-* **OAuth vs SAML**: OAuth is used for authorization and SAML is used for authentication. See [Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow](v2-saml-bearer-assertion.md) for more information on how the two protocols can be used together to both authenticate a user (using SAML) and get authorization to access a protected resource (using OAuth 2.0).
-* **OpenID Connect vs SAML**: Both OpenID Connect and SAML are used to authenticate a user and are used to enable Single Sign On. SAML authentication is commonly used with identity providers such as Active Directory Federation Services (ADFS) federated to Azure AD and is therefore frequently used in enterprise applications. OpenID Connect is commonly used for apps that are purely in the cloud, such as mobile apps, web sites, and web APIs.
+> [!VIDEO https://www.youtube.com/embed/tkQJSHFsduY]
-## Next steps
+Here's a comparison of the protocols that the Microsoft identity platform uses:
-For other topics covering authentication and authorization basics:
+* **OAuth versus OpenID Connect**: The platform uses OAuth for authorization and OpenID Connect (OIDC) for authentication. OpenID Connect is built on top of OAuth 2.0, so the terminology and flow are similar between the two. You can even both authenticate a user (through OpenID Connect) and get authorization to access a protected resource that the user owns (through OAuth 2.0) in one request. For more information, see [OAuth 2.0 and OpenID Connect protocols](active-directory-v2-protocols.md) and [OpenID Connect protocol](v2-protocols-oidc.md).
+* **OAuth versus SAML**: The platform uses OAuth 2.0 for authorization and SAML for authentication. For more information on how to use these protocols together to both authenticate a user and get authorization to access a protected resource, see [Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow](v2-saml-bearer-assertion.md).
+* **OpenID Connect versus SAML**: The platform uses both OpenID Connect and SAML to authenticate a user and enable single sign-on. SAML authentication is commonly used with identity providers such as Active Directory Federation Services (AD FS) federated to Azure AD, so it's often used in enterprise applications. OpenID Connect is commonly used for apps that are purely in the cloud, such as mobile apps, websites, and web APIs.
+
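To make the authentication-versus-authorization distinction above concrete, here is a minimal sketch, with placeholder tenant, client ID, and redirect URI values, of two authorization-endpoint requests: one that asks for an ID token through OpenID Connect scopes, and one that asks for an access token through a Microsoft Graph resource scope.

```python
from urllib.parse import urlencode

# Hypothetical client values, for illustration only.
tenant = "common"
client_id = "00000000-0000-0000-0000-000000000000"
redirect_uri = "https://localhost/auth"

base = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize"

# Authentication (OpenID Connect): the openid scope asks for an ID token
# that proves who the user is.
authn_url = base + "?" + urlencode({
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,
    "scope": "openid profile",
})

# Authorization (OAuth 2.0): a resource scope asks for an access token
# that grants permission to call Microsoft Graph on the user's behalf.
authz_url = base + "?" + urlencode({
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,
    "scope": "https://graph.microsoft.com/User.Read",
})
```

Because OpenID Connect is layered on OAuth 2.0, a single request can list `openid` alongside resource scopes to do both at once.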
+## Next steps
-* See [Security tokens](security-tokens.md) to learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication.
-* See [Application model](application-model.md) to learn about the process of registering your application so it can integrate with Microsoft identity platform.
-* See [App sign-in flow](app-sign-in-flow.md) to learn about the sign-in flow of web, desktop, and mobile apps in Microsoft identity platform.
+For other topics that cover authentication and authorization basics:
-* To learn more about the protocols that Microsoft identity platform implements, see [OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform](active-directory-v2-protocols.md).
-* See [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md) for more information on how Microsoft identity platform supports Single Sign-On.
-* See [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md) for more information on the different ways you can implement single sign-on in your app.
+* To learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication, see [Security tokens](security-tokens.md).
+* To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](application-model.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-app-manifest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-app-manifest.md
@@ -110,17 +110,6 @@ Example:
"allowPublicClient": false, ```
-### availableToOtherTenants attribute
-
-| Key | Value type |
-| :--- | :--- |
-| availableToOtherTenants | Boolean |
-
-Set to true if the application is shared with other tenants; otherwise, false.
-
-> [!NOTE]
-> This attribute is available only in the **App registrations (Legacy)** experience. Replaced by `signInAudience` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
- ### appId attribute | Key | Value type |
@@ -160,17 +149,6 @@ Example:
], ```
-### displayName attribute
-
-| Key | Value type |
-| :--- | :--- |
-| displayName | String |
-
-The display name for the app.
-
-> [!NOTE]
-> This attribute is available only in the **App registrations (Legacy)** experience. Replaced by `name` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
- ### errorUrl attribute | Key | Value type |
@@ -198,33 +176,6 @@ Example:
"groupMembershipClaims": "SecurityGroup", ```
-### homepage attribute
-
-| Key | Value type |
-| :--- | :--- |
-| homepage |String |
-
-The URL to the application's homepage.
-
-> [!NOTE]
-> This attribute is available only in the **App registrations (Legacy)** experience. Replaced by `signInUrl` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
-
-### objectId attribute
-
-| Key | Value type |
-| :--- | :--- |
-|objectId | String |
-
-The unique identifier for the app in the directory.
-
-This is available only in the **App registrations (Legacy)** experience. Replaced by `id` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
-
-Example:
-
-```json
- "objectId": "f7f9acfc-ae0c-4d6c-b489-0a81dc1652dd",
-```
- ### optionalClaims attribute | Key | Value type |
@@ -242,7 +193,6 @@ Example:
``` - ### identifierUris attribute | Key | Value type |
@@ -484,16 +434,6 @@ Example:
], ```
-### publicClient attribute
-
-| Key | Value type |
-| :--- | :--- |
-| publicClient | Boolean|
-
-Specifies whether this application is a public client (such as an installed application running on a mobile device).
-
-This property is available only in the **App registrations (Legacy)** experience. Replaced by `allowPublicClient` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
- ### publisherDomain attribute | Key | Value type |
@@ -506,17 +446,7 @@ Example:
```json "publisherDomain": "https://www.contoso.com",
-````
-
-### replyUrls attribute
-
-| Key | Value type |
-| :--- | :--- |
-| replyUrls | String array |
-
-This multi-value property holds the list of registered redirect_uri values that Azure AD will accept as destinations when returning tokens.
-
-This property is available only in the **App registrations (Legacy)** experience. Replaced by `replyUrlsWithType` in the [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience.
+```
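The removals above all follow the same pattern: an attribute from the App registrations (Legacy) experience is replaced by a renamed equivalent in the current App registrations manifest. The following sketch summarizes the mapping; it isn't official tooling, and the example values are illustrative only.

```python
# Legacy manifest attribute -> replacement in the current App registrations
# experience, per the sections removed above.
LEGACY_TO_CURRENT = {
    "availableToOtherTenants": "signInAudience",
    "displayName": "name",
    "homepage": "signInUrl",
    "objectId": "id",
    "publicClient": "allowPublicClient",
    "replyUrls": "replyUrlsWithType",
}

# A legacy-style fragment rewritten with the current attribute names
# (hypothetical app values).
legacy = {"displayName": "Contoso app", "replyUrls": ["https://localhost/auth"]}
current = {
    "name": "Contoso app",
    "replyUrlsWithType": [{"url": "https://localhost/auth", "type": "Web"}],
}
```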
### replyUrlsWithType attribute
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-mobile-app-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-app-configuration.md
@@ -246,8 +246,8 @@ To register your app's URL scheme, follow these steps:
Here, `BundleId` uniquely identifies your device. For example, if `BundleId` is `yourcompany.xforms`, your URL scheme is `msauth.com.yourcompany.xforms`.
- > [!NOTE]
- > This URL scheme will become part of the redirect URI that uniquely identifies your app when it receives the broker's response.
+
+ This URL scheme will become part of the redirect URI that uniquely identifies your app when it receives the broker's response.
```XML <key>CFBundleURLTypes</key>
@@ -307,10 +307,9 @@ When MSAL for iOS and macOS calls the broker, the broker calls back to your appl
} ```
-> [!NOTE]
-> If you adopted `UISceneDelegate` on iOS 13 or later, then place the MSAL callback into the `scene:openURLContexts:` of `UISceneDelegate` instead. MSAL `handleMSALResponse:sourceApplication:` must be called only once for each URL.
->
-> For more information, see the [Apple documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc).
+If you adopted `UISceneDelegate` on iOS 13 or later, then place the MSAL callback into the `scene:openURLContexts:` of `UISceneDelegate` instead. MSAL `handleMSALResponse:sourceApplication:` must be called only once for each URL.
+
+For more information, see the [Apple documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc).
#### Step 2: Register a URL scheme
@@ -326,8 +325,7 @@ To register a scheme for your app:
Here, `BundleId` uniquely identifies your device. For example, if `BundleId` is `yourcompany.xforms`, your URL scheme is `msauth.com.yourcompany.xforms`.
- > [!NOTE]
- > This URL scheme will become part of the redirect URI that uniquely identifies your app when it receives the broker's response. Make sure that the redirect URI in the format `msauth.(BundleId)://auth` is registered for your application in the [Azure portal](https://portal.azure.com).
+ This URL scheme will become part of the redirect URI that uniquely identifies your app when it receives the broker's response. Make sure that the redirect URI in the format `msauth.(BundleId)://auth` is registered for your application in the [Azure portal](https://portal.azure.com).
```XML <key>CFBundleURLTypes</key>
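The derivation rule repeated in both steps above is mechanical; a small sketch with an assumed bundle ID:

```python
# Assumed bundle ID, for illustration only.
bundle_id = "com.yourcompany.xforms"

url_scheme = f"msauth.{bundle_id}"           # goes in CFBundleURLSchemes
redirect_uri = f"msauth.{bundle_id}://auth"  # register this in the Azure portal

assert redirect_uri == "msauth.com.yourcompany.xforms://auth"
```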
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-android.md
@@ -53,8 +53,7 @@ This sample uses the Microsoft Authentication Library for Android (MSAL) to impl
MSAL will automatically renew tokens, deliver single sign-on (SSO) between other apps on the device, and manage the Account(s).
-> [!NOTE]
-> This tutorial demonstrates simplified examples of working with MSAL for Android. For simplicity, it uses Single Account Mode only. To explore more complex scenarios, see a completed [working code sample](https://github.com/Azure-Samples/ms-identity-android-java/) on GitHub.
+This tutorial demonstrates simplified examples of working with MSAL for Android. For simplicity, it uses Single Account Mode only. To explore more complex scenarios, see a completed [working code sample](https://github.com/Azure-Samples/ms-identity-android-java/) on GitHub.
## Create a project If you do not already have an Android application, follow these steps to set up a new project.
@@ -81,8 +80,8 @@ If you do not already have an Android application, follow these steps to set up
1. Enter your project's Package Name. If you downloaded the code, this value is `com.azuresamples.msalandroidapp`. 1. In the **Signature hash** section of the **Configure your Android app** page, select **Generating a development Signature Hash.** and copy the KeyTool command to use for your platform.
- > [!Note]
- > KeyTool.exe is installed as part of the Java Development Kit (JDK). You must also install the OpenSSL tool to execute the KeyTool command. Refer to the [Android documentation on generating a key](https://developer.android.com/studio/publish/app-signing#generate-key) for more information.
+
+ KeyTool.exe is installed as part of the Java Development Kit (JDK). You must also install the OpenSSL tool to execute the KeyTool command. Refer to the [Android documentation on generating a key](https://developer.android.com/studio/publish/app-signing#generate-key) for more information.
1. Enter the **Signature hash** generated by KeyTool. 1. Select **Configure** and save the **MSAL Configuration** that appears in the **Android configuration** page so you can enter it when you configure your app later.
@@ -118,8 +117,7 @@ If you do not already have an Android application, follow these steps to set up
} ```
- >[!NOTE]
- >This tutorial only demonstrates how to configure an app in Single Account mode. View the documentation for more information on [single vs. multiple account mode](./single-multi-account.md) and [configuring your app](./msal-configuration.md)
+ This tutorial only demonstrates how to configure an app in Single Account mode. For more information, see the documentation on [single vs. multiple account mode](./single-multi-account.md) and [configuring your app](./msal-configuration.md).
4. In **app** > **src** > **main** > **AndroidManifest.xml**, add the `BrowserTabActivity` activity below to the application body. This entry allows Microsoft to call back to your application after it completes the authentication:
@@ -140,10 +138,11 @@ If you do not already have an Android application, follow these steps to set up
Substitute the package name you registered in the Azure portal for the `android:host=` value. Substitute the key hash you registered in the Azure portal for the `android:path=` value. The Signature Hash should **not** be URL encoded. Ensure that there is a leading `/` at the beginning of your Signature Hash.
- >[!NOTE]
- >The "Package Name" you will replace the `android:host` value with should look similar to: "com.azuresamples.msalandroidapp"
- >The "Signature Hash" you will replace your `android:path` value with should look similar to: "/1wIqXSqBj7w+h11ZifsnqwgyKrY="
- >You will also be able to find these values in the Authentication blade of your app registration. Note that your redirect URI will look similar to: "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D". While the Signature Hash is URL encoded at the end of this value, the Signature Hash should **not** be URL encoded in your `android:path` value.
+
+ The "Package Name" you will replace the `android:host` value with should look similar to: "com.azuresamples.msalandroidapp".
+ The "Signature Hash" you will replace your `android:path` value with should look similar to: "/1wIqXSqBj7w+h11ZifsnqwgyKrY=".
+
+ You will also be able to find these values in the Authentication blade of your app registration. Note that your redirect URI will look similar to: "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D". While the Signature Hash is URL encoded at the end of this value, the Signature Hash should **not** be URL encoded in your `android:path` value.
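The URL-encoding nuance called out above is easy to get wrong, so here is a minimal sketch, using the sample package name and signature hash from the tutorial, that derives both forms:

```python
from urllib.parse import quote

package_name = "com.azuresamples.msalandroidapp"
signature_hash = "1wIqXSqBj7w+h11ZifsnqwgyKrY="  # raw value from KeyTool

# android:path takes the raw hash with a leading slash, NOT URL encoded.
android_path = "/" + signature_hash

# The redirect URI shown in the portal carries the URL-encoded hash.
redirect_uri = f"msauth://{package_name}/{quote(signature_hash, safe='')}"

assert redirect_uri == (
    "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D"
)
```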
## Use MSAL
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
@@ -23,8 +23,8 @@ The OAuth 2.0 On-Behalf-Of flow (OBO) serves the use case where an application i
This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
-> [!NOTE]
-> As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead. For more info about which clients can perform OBO calls, see [limitations](#client-limitations).
+
+As of May 2018, some `id_token` values derived from the implicit flow can't be used for the OBO flow. Single-page apps (SPAs) should instead pass an **access** token to a middle-tier confidential client to perform OBO flows. For more information about which clients can perform OBO calls, see [limitations](#client-limitations).
## Protocol diagram
@@ -38,10 +38,9 @@ The steps that follow constitute the OBO flow and are explained with the help of
1. API A authenticates to the Microsoft identity platform token issuance endpoint and requests a token to access API B. 1. The Microsoft identity platform token issuance endpoint validates API A's credentials along with token A and issues the access token for API B (token B) to API A. 1. Token B is set by API A in the authorization header of the request to API B.
-1. Data from the secured resource is returned by API B to API A, and from there to the client.
+1. Data from the secured resource is returned by API B to API A, then to the client.
-> [!NOTE]
-> In this scenario, the middle-tier service has no user interaction to obtain the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as a part of the consent step during authentication. To learn how to set this up for your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
+In this scenario, the middle-tier service has no user interaction to get the user's consent to access the downstream API. Therefore, the option to grant access to the downstream API is presented upfront as a part of the consent step during authentication. To learn how to set this up for your app, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application).
## Middle-tier access token request
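As a rough illustration of steps 2 through 4 above, the following sketch shows the shape of the middle-tier token request against the v2.0 token endpoint. The tenant, client ID, secret, and incoming token are placeholders, and a real API would typically use MSAL rather than raw HTTP:

```python
import requests

tenant = "contoso.onmicrosoft.com"   # placeholder tenant
token_a = "eyJ..."                   # access token the client sent to API A (truncated)

# API A exchanges token A for a token to call the downstream API (token B).
response = requests.post(
    f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_id": "00000000-0000-0000-0000-000000000000",
        "client_secret": "placeholder-secret",
        "assertion": token_a,
        "scope": "https://graph.microsoft.com/user.read offline_access",
        "requested_token_use": "on_behalf_of",
    },
)
token_b = response.json().get("access_token")
```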
@@ -148,10 +147,9 @@ The following example shows a success response to a request for an access token
} ```
-> [!NOTE]
-> The above access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. The Microsoft Graph is setup to accept v1.0 tokens, so Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either format of token - this way the resource can always get the right format of token regardless of how or where the token was requested by the client.
->
-> Only applications should look at access tokens. Clients **must not** inspect them. Inspecting access tokens for other apps in your code will result in your app unexpectedly breaking when that app changes the format of their tokens or starts encrypting them.
+The above access token is a v1.0-formatted token for Microsoft Graph. This is because the token format is based on the **resource** being accessed and unrelated to the endpoints used to request it. Microsoft Graph is set up to accept v1.0 tokens, so the Microsoft identity platform produces v1.0 access tokens when a client requests tokens for Microsoft Graph. Other apps may indicate that they want v2.0-format tokens, v1.0-format tokens, or even proprietary or encrypted token formats. Both the v1.0 and v2.0 endpoints can emit either token format, so the resource always gets the right token format regardless of how or where the client requested the token.
+
+Only applications should look at access tokens. Clients **must not** inspect them. Inspecting access tokens for other apps in your code will result in your app unexpectedly breaking when that app changes the format of their tokens or starts encrypting them.
### Error response example
@@ -185,8 +183,7 @@ Authorization: Bearer eyJ0eXAiO ... 0X2tnSQLEANnSPHY0gKcgw
Some OAuth-based web services need to access other web service APIs that accept SAML assertions in non-interactive flows. Azure Active Directory can provide a SAML assertion in response to an On-Behalf-Of flow that uses a SAML-based web service as a target resource.
->[!NOTE]
->This is a non-standard extension to the OAuth 2.0 On-Behalf-Of flow that allows an OAuth2-based application to access web service API endpoints that consume SAML tokens.
+This is a non-standard extension to the OAuth 2.0 On-Behalf-Of flow that allows an OAuth2-based application to access web service API endpoints that consume SAML tokens.
> [!TIP] > When you call a SAML-protected web service from a front-end web application, you can simply call the API and initiate a normal interactive authentication flow with the user's existing session. You only need to use an OBO flow when a service-to-service call requires a SAML token to provide user context.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
@@ -27,8 +27,7 @@ The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-p
* Microsoft 365 Mail API: `https://outlook.office.com` * Azure Key Vault: `https://vault.azure.net`
-> [!NOTE]
-> We strongly recommend that you use Microsoft Graph instead of Microsoft 365 Mail API, etc.
+We strongly recommend that you use Microsoft Graph instead of resource-specific APIs such as the Microsoft 365 Mail API.
The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
@@ -111,8 +110,7 @@ The `scope` parameter is a space-separated list of delegated permissions that th
After the user enters their credentials, the Microsoft identity platform endpoint checks for a matching record of *user consent*. If the user has not consented to any of the requested permissions in the past, nor has an administrator consented to these permissions on behalf of the entire organization, the Microsoft identity platform endpoint asks the user to grant the requested permissions.
-> [!NOTE]
->At this time, the `offline_access` ("Maintain access to data you have given it access to") and `user.read` ("Sign you in and read your profile") permissions are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality - `offline_access` gives the app access to refresh tokens, critical for native and web apps, while `user.read` gives access to the `sub` claim, allowing the client or app to correctly identify the user over time and access rudimentary user information.
+At this time, the `offline_access` ("Maintain access to data you have given it access to") and `user.read` ("Sign you in and read your profile") permissions are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality - `offline_access` gives the app access to refresh tokens, critical for native and web apps, while `user.read` gives access to the `sub` claim, allowing the client or app to correctly identify the user over time and access rudimentary user information.
![Example screenshot that shows work account consent](./media/v2-permissions-and-consent/work_account_consent.png)
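As a small illustration of the paragraph above, the `scope` parameter is just a space-separated string of delegated permissions; the two Graph permissions here are hypothetical examples, and `offline_access` and `user.read` are folded into the initial consent automatically:

```python
# Hypothetical request: only these two scopes are listed explicitly.
scope = " ".join([
    "https://graph.microsoft.com/mail.read",
    "https://graph.microsoft.com/calendars.read",
])
```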
@@ -144,8 +142,7 @@ If the application is requesting application permissions and an administrator gr
## Using the admin consent endpoint
-> [!NOTE]
-> Please note after granting admin consent using the admin consent endpoint, you have finished granting admin consent and users do not need to perform any further additional actions. After granting admin consent, users can get an access token via a typical auth flow and the resulting access token will have the consented permissions.
+After you grant admin consent through the admin consent endpoint, the grant is complete and users don't need to take any further action. They can then get an access token through a typical auth flow, and the resulting access token will have the consented permissions.
When a Company Administrator uses your application and is directed to the authorize endpoint, Microsoft identity platform will detect the user's role and ask them if they would like to consent on behalf of the entire tenant for the permissions you have requested. However, there is also a dedicated admin consent endpoint you can use if you would like to proactively request that an administrator grants permission on behalf of the entire tenant. Using this endpoint is also necessary for requesting Application Permissions (which can't be requested using the authorize endpoint).
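A minimal sketch of a request to that dedicated admin consent endpoint follows; the tenant, client ID, and redirect URI are placeholders:

```python
from urllib.parse import urlencode

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "redirect_uri": "https://localhost/admin-consent-callback",
    "state": "12345",
    # /.default requests consent for all configured permissions,
    # including application permissions.
    "scope": "https://graph.microsoft.com/.default",
}
admin_consent_url = (
    "https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/adminconsent?"
    + urlencode(params)
)
```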
@@ -259,8 +256,7 @@ You can use the `/.default` scope to help migrate your apps from the v1.0 endpoi
The /.default scope can be used in any OAuth 2.0 flow, but is necessary in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md), as well as when using the v2 admin consent endpoint to request application permissions.
-> [!NOTE]
-> Clients can't combine static (`/.default`) and dynamic consent in a single request. Thus, `scope=https://graph.microsoft.com/.default+mail.read` will result in an error due to the combination of scope types.
+Clients can't combine static (`/.default`) and dynamic consent in a single request. Thus, `scope=https://graph.microsoft.com/.default+mail.read` will result in an error due to the combination of scope types.
### /.default and consent
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
@@ -5,7 +5,7 @@ services: active-directory
author: curtand ms.author: curtand manager: daveba
-ms.date: 12/03/2020
+ms.date: 01/14/2021
ms.topic: how-to ms.service: active-directory ms.subservice: enterprise-users
@@ -134,14 +134,15 @@ By supported we mean that the experience is as expected; specifically, that it i
- Teams - Outlook (OWA) - SharePoint
+- Planner in Teams
+- Planner web app
### Services currently not supported
Service without current support might have compatibility issues with the new guest restriction setting.
- Forms
-- Planner in Teams
-- Planner app
+- Planner mobile app
- Project - Yammer
@@ -153,7 +154,7 @@ Where do these permissions apply? | These directory level permissions are enforc
How do restricted permissions affect which groups guests can see? | Regardless of default or restricted guest permissions, guests can't enumerate the list of groups or users. Guests can see groups they are members of in both the Azure portal and the My Apps portal depending on permissions:<li>**Default permissions**: To find the groups they are members of in the Azure portal, the guest must search for their object ID in the **All users** list, and then select **Groups**. Here they can see the list of groups that they are members of, including all the group details, including name, email, and so on. In the My Apps portal, they can see a list of groups they own and groups they are a member of.</li><li>**Restricted guest permissions**: In the Azure portal, they can still find the list of groups they are members of by searching for their object ID in the All users list, and then select Groups. They can only see very limited details about the group, notably the object ID. By design, the Name and Email columns are blank and Group Type is Unrecognized. In the My Apps portal, they are not able to access the list of groups they own or groups they are a member of.</li><br>For more detailed comparison of the directory permissions that come from the Graph API, see [Default user permissions](../fundamentals/users-default-permissions.md#member-and-guest-users). Which parts of the My Apps portal will this feature affect? | The groups functionality in the My Apps portal will honor these new permissions. This includes all paths to view the groups list and group memberships in My Apps. No changes were made to the group tile availability. The group tile availability is still controlled by the existing group setting in the Azure portal. Do these permissions override SharePoint or Microsoft Teams guest settings? | No. Those existing settings still control the experience and access in those applications. For example, if you see issues in SharePoint, double check your external sharing settings.
-What are the known compatibility issues in Planner and Yammer? | <li>With permissions set to 'restricted', guests logged into the Planner app or accessing the Planner in Microsoft Teams won't be able to access their plans or any tasks.<li>With permissions set to 'restricted', guests logged into Yammer won't be able to leave the group.
+What are the known compatibility issues in Planner and Yammer? | <li>With permissions set to 'restricted', guests signed into the Planner mobile app won't be able to access their plans or any tasks.<li>With permissions set to 'restricted', guests signed into Yammer won't be able to leave the group.
Will my existing guest permissions be changed in my tenant? | No changes were made to your current settings. We maintain backward compatibility with your existing settings. You decide when you want make changes. Will these permissions be set by default? | No. The existing default permissions remain unchanged. You can optionally set the permissions to be more restrictive. Are there any license requirements for this feature? | No, there are no new licensing requirements with this feature.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/add-users-information-worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-information-worker.md
@@ -26,8 +26,8 @@ After a guest user has been added to the directory in Azure AD, an application o
- Configure the app for self-service and assign the group to the app > [!NOTE]
-> This article describes how to set up self-service management for gallery and SAML-based apps that you've added to your Azure AD tenant. You can also [set up self-service Microsoft 365 groups](../enterprise-users/groups-self-service-management.md) so your users can manage access to their own Microsoft 365 groups. For more ways users can share Office files and apps with guest users, see [Guest access in Microsoft 365 groups](https://support.office.com/article/guest-access-in-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) and [Share SharePoint files or folders](https://support.office.com/article/share-sharepoint-files-or-folders-1fe37332-0f9a-4719-970e-d2578da4941c).
-
+> * This article describes how to set up self-service management for gallery and SAML-based apps that you've added to your Azure AD tenant. You can also [set up self-service Microsoft 365 groups](../enterprise-users/groups-self-service-management.md) so your users can manage access to their own Microsoft 365 groups. For more ways users can share Office files and apps with guest users, see [Guest access in Microsoft 365 groups](https://support.office.com/article/guest-access-in-office-365-groups-bfc7a840-868f-4fd6-a390-f347bf51aff6) and [Share SharePoint files or folders](https://support.office.com/article/share-sharepoint-files-or-folders-1fe37332-0f9a-4719-970e-d2578da4941c).
+> * Users can invite guests only if they have the **Guest inviter** role.
## Invite a guest user to an app from the Access Panel After an app is configured for self-service, application owners can use their own Access Panel to invite a guest user to the app they want to share. The guest user doesn't necessarily need to be added to Azure AD in advance.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/1-secure-access-posture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/1-secure-access-posture.md
@@ -1,5 +1,5 @@
---
-title: Determine your security posture for external collaboration with Azure Active Directory
+title: Determine your security posture for external collaboration with Azure Active Directory
description: Before you can execute an external access security plan, you must determine what you are trying to achieve. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/2-secure-access-current-state https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/2-secure-access-current-state.md
@@ -1,5 +1,5 @@
---
-title: Discover the current state of external collaboration with Azure Active Directory
+title: Discover the current state of external collaboration with Azure Active Directory
description: Learn methods to discover the current state of your collaboration. services: active-directory author: BarbaraSelden
@@ -41,9 +41,9 @@ External organizations can be determined by the domain names of external user em
### Use allow or deny lists
-Another way to discover who you currently collaborate with, or with whom you have blocked collaboration, is to see if you've added any organizations to your [allow or deny lists](../external-identities/allow-deny-list.md).
+Consider whether your organization wants to allow collaboration with only specific organizations, or to block collaboration with specific organizations. At the tenant level, you can use the [allow or deny list](../external-identities/allow-deny-list.md) to control overall B2B invitations and redemptions regardless of source (for example, Teams, SharePoint, and the Azure portal).
+ If you're using entitlement management, you can also scope access packages to a subset of your partners by using the Specific connected organizations setting as shown below.
-Consider if your organization wants to allow collaboration with only specific organizations. Also consider if your organization wants to block collaboration with specific organizations. These settings can apply for overall B2B redemption or to only a specific access package.
![Screenshot of allow deny list in creating a new access package.](media/secure-external-access/2-new-access-package.png)
@@ -82,4 +82,4 @@ See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
\ No newline at end of file
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/3-secure-access-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/3-secure-access-plan.md
@@ -1,5 +1,5 @@
---
-title: Create a security plan for external access to Azure Active Directory
+title: Create a security plan for external access to Azure Active Directory
description: Plan the security for external access to your organization's resources.. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/4-secure-access-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/4-secure-access-groups.md
@@ -1,5 +1,5 @@
---
-title: Secure external access with groups in Azure Active Directory and Microsoft 365
+title: Secure external access with groups in Azure Active Directory and Microsoft 365
description: Azure Active Directory and Microsoft 365 Groups can be used to increase security when external users access your resources. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/5-secure-access-b2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/5-secure-access-b2b.md
@@ -1,5 +1,5 @@
---
-title: Transition to governed collaboration with Azure Active Directory B2B Collaboration
+title: Transition to governed collaboration with Azure Active Directory B2B Collaboration
description: Move to governed collaboration with Azure Ad B2B collaboration. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/6-secure-access-entitlement-managment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
@@ -1,5 +1,5 @@
---
-title: Manage external access with Azure Active Directory Entitlement Management
+title: Manage external access with Azure Active Directory Entitlement Management
description: How to use Azure Active Directory Entitlement Management as a part of your overall external access security plan. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/7-secure-access-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
@@ -1,5 +1,5 @@
---
-title: Manage external access with Azure Active Directory Conditional Access
+title: Manage external access with Azure Active Directory Conditional Access
description: How to use Azure Active Directory conditional Access policies to secure external access to resources. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/8-secure-access-sensitivity-labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
@@ -1,5 +1,5 @@
---
-title: Control external access to resources in Azure Active Directory with sensitivity labels.
+title: Control external access to resources in Azure Active Directory with sensitivity labels.
description: Use sensitivity labels as a part of your overall security plan for external access. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/9-secure-access-teams-sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
@@ -1,5 +1,5 @@
---
-title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory
+title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory
description: Secure access to Microsoft 365 services as a part of your overall external access security. services: active-directory author: BarbaraSelden
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/protect-m365-from-on-premises-attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
@@ -189,7 +189,7 @@ Provisioning refers to the creation of user accounts and groups in applications
* Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests and external users and then [implement a policy to block
- access](https://docs.microsoft.com/azure/role-based-access-control/conditional-access-azure-management.md).
+ access](/azure/role-based-access-control/conditional-access-azure-management).
* **Disconnected Forests:** Use [Azure AD Cloud Provisioning](../cloud-provisioning/what-is-cloud-provisioning.md). This enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/secure-external-access-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/secure-external-access-resources.md
@@ -1,6 +1,6 @@
--- title: Securing external collaboration in Azure Active Directory
-description: A guide for architects and IT administrators on securing external access to internal resources
+description: A guide for architects and IT administrators on securing external access to internal resources
services: active-directory author: BarbaraSelden manager: daveba
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new-archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
@@ -1199,7 +1199,7 @@ For more information about how to better secure your organization by using autom
In January 2020, we've added these 33 new apps with Federation support to the app gallery:
-[JOSA](../saas-apps/josa-tutorial.md), [Fastly Edge Cloud](../saas-apps/fastly-edge-cloud-tutorial.md), [Terraform Enterprise](../saas-apps/terraform-enterprise-tutorial.md), [Spintr SSO](../saas-apps/spintr-sso-tutorial.md), [Abibot Netlogistik](https://azuremarketplace.microsoft.com/marketplace/apps/aad.abibotnetlogistik), [SkyKick](https://login.skykick.com/login?state=g6Fo2SBTd3M5Q0xBT0JMd3luS2JUTGlYN3pYTE1remJQZnR1c6N0aWTZIDhCSkwzYVQxX2ZMZjNUaWxNUHhCSXg2OHJzbllTcmYto2NpZNkgM0h6czk3ZlF6aFNJV1VNVWQzMmpHeFFDbDRIMkx5VEc&client=3Hzs97fQzhSIWUMUd32jGxQCl4H2LyTG&protocol=oauth2&audience=https://papi.skykick.com&response_type=code&redirect_uri=https://portal.skykick.com/callback&scope=openid%20profile%20offline_access), [Upshotly](../saas-apps/upshotly-tutorial.md), [LeaveBot](https://appsource.microsoft.com/en-us/product/office/WA200001175), [DataCamp](../saas-apps/datacamp-tutorial.md), [TripActions](../saas-apps/tripactions-tutorial.md), [SmartWork](https://www.intumit.com/english/SmartWork.html), [Dotcom-Monitor](../saas-apps/dotcom-monitor-tutorial.md), [SSOGEN - Azure AD SSO Gateway for Oracle E-Business Suite - EBS, PeopleSoft, and JDE](../saas-apps/ssogen-tutorial.md), [Hosted MyCirqa SSO](../saas-apps/hosted-mycirqa-sso-tutorial.md), [Yuhu Property Management Platform](../saas-apps/yuhu-property-management-platform-tutorial.md), [LumApps](https://sites.lumapps.com/login), [Upwork Enterprise](../saas-apps/upwork-enterprise-tutorial.md), [Talentsoft](../saas-apps/talentsoft-tutorial.md), [SmartDB for Microsoft Teams](http://teams.smartdb.jp/login/), [PressPage](../saas-apps/presspage-tutorial.md), [ContractSafe Saml2 SSO](../saas-apps/contractsafe-saml2-sso-tutorial.md), [Maxient Conduct Manager Software](../saas-apps/maxient-conduct-manager-software-tutorial.md), [Helpshift](../saas-apps/helpshift-tutorial.md), [PortalTalk 365](https://www.portaltalk.com/), [CoreView](https://portal.coreview.com/), [Squelch Cloud Office365 Connector](https://laxmi.squelch.io/login), [PingFlow Authentication](https://app-staging.pingview.io/), [ PrinterLogic SaaS](../saas-apps/printerlogic-saas-tutorial.md), [Taskize Connect](../saas-apps/taskize-connect-tutorial.md), [Sandwai](https://app.sandwai.com/), [EZRentOut](../saas-apps/ezrentout-tutorial.md), [AssetSonar](../saas-apps/assetsonar-tutorial.md), [Akari Virtual Assistant](https://akari.io/akari-virtual-assistant/)
+[JOSA](../saas-apps/josa-tutorial.md), [Fastly Edge Cloud](../saas-apps/fastly-edge-cloud-tutorial.md), [Terraform Enterprise](../saas-apps/terraform-enterprise-tutorial.md), [Spintr SSO](../saas-apps/spintr-sso-tutorial.md), [Abibot Netlogistik](https://azuremarketplace.microsoft.com/marketplace/apps/aad.abibotnetlogistik), [SkyKick](https://login.skykick.com/login?state=g6Fo2SBTd3M5Q0xBT0JMd3luS2JUTGlYN3pYTE1remJQZnR1c6N0aWTZIDhCSkwzYVQxX2ZMZjNUaWxNUHhCSXg2OHJzbllTcmYto2NpZNkgM0h6czk3ZlF6aFNJV1VNVWQzMmpHeFFDbDRIMkx5VEc&client=3Hzs97fQzhSIWUMUd32jGxQCl4H2LyTG&protocol=oauth2&audience=https://papi.skykick.com&response_type=code&redirect_uri=https://portal.skykick.com/callback&scope=openid%20profile%20offline_access), [Upshotly](../saas-apps/upshotly-tutorial.md), [LeaveBot](https://appsource.microsoft.com/en-us/product/office/WA200001175), [DataCamp](../saas-apps/datacamp-tutorial.md), [TripActions](../saas-apps/tripactions-tutorial.md), [SmartWork](https://www.intumit.com/teams-smartwork/), [Dotcom-Monitor](../saas-apps/dotcom-monitor-tutorial.md), [SSOGEN - Azure AD SSO Gateway for Oracle E-Business Suite - EBS, PeopleSoft, and JDE](../saas-apps/ssogen-tutorial.md), [Hosted MyCirqa SSO](../saas-apps/hosted-mycirqa-sso-tutorial.md), [Yuhu Property Management Platform](../saas-apps/yuhu-property-management-platform-tutorial.md), [LumApps](https://sites.lumapps.com/login), [Upwork Enterprise](../saas-apps/upwork-enterprise-tutorial.md), [Talentsoft](../saas-apps/talentsoft-tutorial.md), [SmartDB for Microsoft Teams](http://teams.smartdb.jp/login/), [PressPage](../saas-apps/presspage-tutorial.md), [ContractSafe Saml2 SSO](../saas-apps/contractsafe-saml2-sso-tutorial.md), [Maxient Conduct Manager Software](../saas-apps/maxient-conduct-manager-software-tutorial.md), [Helpshift](../saas-apps/helpshift-tutorial.md), [PortalTalk 365](https://www.portaltalk.com/), [CoreView](https://portal.coreview.com/), [Squelch Cloud Office365 Connector](https://laxmi.squelch.io/login), [PingFlow Authentication](https://app-staging.pingview.io/), [ PrinterLogic SaaS](../saas-apps/printerlogic-saas-tutorial.md), [Taskize Connect](../saas-apps/taskize-connect-tutorial.md), [Sandwai](https://app.sandwai.com/), [EZRentOut](../saas-apps/ezrentout-tutorial.md), [AssetSonar](../saas-apps/assetsonar-tutorial.md), [Akari Virtual Assistant](https://akari.io/akari-virtual-assistant/)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
@@ -78,7 +78,7 @@ We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow the guidance provided in [Securing privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access). - Deny use of NTLM authentication with the AADConnect server. Here are some ways to do this: [Restricting NTLM on the AADConnect Server](/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-outgoing-ntlm-traffic-to-remote-servers) and [Restricting NTLM on a domain](/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-ntlm-authentication-in-this-domain) - Ensure every machine has a unique local administrator password. For more information, see [Local Administrator Password Solution (LAPS)](https://support.microsoft.com/help/3062591/microsoft-security-advisory-local-administrator-password-solution-laps) can configure unique random passwords on each workstation and server store them in Active Directory protected by an ACL. Only eligible authorized users can read or request the reset of these local administrator account passwords. You can obtain the LAPS for use on workstations and servers from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=46899). Additional guidance for operating an environment with LAPS and privileged access workstations (PAWs) can be found in [Operational standards based on clean source principle](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#operational-standards-based-on-clean-source-principle). -- Implement dedicated [privileged access workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations) for all personnel with privileged access to your organization's information systems.
+- Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems.
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-linked-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-linked-sign-on.md
@@ -35,7 +35,7 @@ The **Linked** option doesn't provide sign-on functionality through Azure AD. Th
> [!IMPORTANT] > There are some scenarios where the **Single sign-on** option will not be in the navigation for an application in **Enterprise applications**. >
-> If the application was registered using **App registrations** then the single sign-on capability is setup to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-microsoft-identity-platform).
+> If the application was registered using **App registrations** then the single sign-on capability is setup to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-the-microsoft-identity-platform).
> > Other scenarios where **Single sign-on** will be missing from the navigation include when an application is hosted in another tenant or if your account does not have the required permissions (Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal). Permissions can also cause a scenario where you can open **Single sign-on** but won't be able to save. To learn more about Azure AD administrative roles, see (https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles).
@@ -48,4 +48,4 @@ After you configure an app, assign users and groups to it. When you assign users
## Next steps - [Assign users or groups to the application](./assign-user-or-group-access-portal.md)-- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)\ No newline at end of file
+- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
@@ -39,7 +39,7 @@ Using Azure AD as your Identity Provider (IdP) and configuring single sign-on (S
> [!IMPORTANT] > There are some scenarios where the **Single sign-on** option will not be in the navigation for an application in **Enterprise applications**. >
-> If the application was registered using **App registrations** then the single sign-on capability is configured to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-microsoft-identity-platform).
+> If the application was registered using **App registrations** then the single sign-on capability is configured to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-the-microsoft-identity-platform).
> > Other scenarios where **Single sign-on** will be missing from the navigation include when an application is hosted in another tenant or if your account does not have the required permissions (Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal). Permissions can also cause a scenario where you can open **Single sign-on** but won't be able to save. To learn more about Azure AD administrative roles, see (https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles).
@@ -84,4 +84,4 @@ If Azure AD's parsing attempt fails, you can configure sign-on manually.
## Next steps - [Assign users or groups to the application](./assign-user-or-group-access-portal.md)-- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)\ No newline at end of file
+- [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-saml-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-saml-single-sign-on.md
@@ -27,7 +27,7 @@ In the [quickstart series](add-application-portal-setup-sso.md), there's an arti
> [!IMPORTANT] > There are some scenarios where the **Single sign-on** option will not be present in the navigation for an application in **Enterprise applications**. >
-> If the application was registered using **App registrations** then the single sign-on capability is configured to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-microsoft-identity-platform).
+> If the application was registered using **App registrations** then the single sign-on capability is configured to use OIDC OAuth by default. In this case, the **Single sign-on** option won't show in the navigation under **Enterprise applications**. When you use **App registrations** to add your custom app, you configure options in the manifest file. To learn more about the manifest file, see [Azure Active Directory app manifest](../develop/reference-app-manifest.md). To learn more about SSO standards, see [Authentication and authorization using Microsoft identity platform](../develop/authentication-vs-authorization.md#authentication-and-authorization-using-the-microsoft-identity-platform).
> > Other scenarios where **Single sign-on** will be missing from the navigation include when an application is hosted in another tenant or if your account does not have the required permissions (Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal). Permissions can also cause a scenario where you can open **Single sign-on** but won't be able to save. To learn more about Azure AD administrative roles, see (https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles).
@@ -131,4 +131,4 @@ For more information, see [Debug SAML-based single sign-on to applications in Az
- [Quickstart Series on Application Management](view-applications-portal.md) - [Assign users or groups to the application](./assign-user-or-group-access-portal.md) - [Configure automatic user account provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)-- [Single Sign-On SAML protocol](../develop/single-sign-on-saml-protocol.md)\ No newline at end of file
+- [Single Sign-On SAML protocol](../develop/single-sign-on-saml-protocol.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/developer-guidance-for-integrating-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/developer-guidance-for-integrating-applications.md
@@ -64,8 +64,8 @@ By default, each user goes through a consent experience to sign in. The consent
For applications that you trust, you can simplify the user experience by consenting to the application on behalf of your organization.
-For more information about user consent and the consent experience in Azure, see [Integrating Applications with Azure Active Directory](../develop/quickstart-register-app.md).
+For more information about user consent and the consent experience in Azure, see [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
## Related Articles * [Enable secure remote access to on-premises applications with Azure AD Application Proxy](application-proxy.md)
-* [Managing access to apps with Azure AD](what-is-access-management.md)
\ No newline at end of file
+* [Managing access to apps with Azure AD](what-is-access-management.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/security-planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/security-planning.md
@@ -251,7 +251,7 @@ Attackers might try to target privileged accounts so that they can disrupt the i
* Impersonation attacks * Credential theft attacks such as keystroke logging, Pass-the-Hash, and Pass-The-Ticket
-By deploying privileged access workstations, you can reduce the risk that admins enter their credentials in a desktop environment that hasn't been hardened. For more information, see [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations).
+By deploying privileged access workstations, you can reduce the risk that admins enter their credentials in a desktop environment that hasn't been hardened. For more information, see [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/).
#### Review National Institute of Standards and Technology recommendations for handling incidents
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/blink-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/blink-provisioning-tutorial.md
@@ -112,7 +112,7 @@ This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Blink in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Blink for update operations. Select the **Save** button to commit any changes.
- ![Blink User Attributes](media/blink-provisioning-tutorial/user-attributes.png)
+ ![Blink User Attributes](media/blink-provisioning-tutorial/new-user-attributes.png)
10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
@@ -132,6 +132,10 @@ This operation starts the initial synchronization of all users defined in **Scop
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Change log
+
+* 01/14/2021 - Custom extension attributes **company**, **description**, and **location** have been added.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
advisor https://docs.microsoft.com/en-us/azure/advisor/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/security-baseline.md
@@ -156,7 +156,7 @@ Use highly secured user workstations and/or Azure Bastion for administrative tas
Centrally manage the secured workstations to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
aks https://docs.microsoft.com/en-us/azure/aks/azure-netapp-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-netapp-files.md
@@ -25,14 +25,14 @@ You also need the Azure CLI version 2.0.59 or later installed and configured. Ru
The following limitations apply when you use Azure NetApp Files: * Azure NetApp Files is only available [in selected Azure regions][anf-regions].
-* Before you can use Azure NetApp Files, you must be granted access to the Azure NetApp Files service. To apply for access, you can use the [Azure NetApp Files waitlist submission form][anf-waitlist]. You can't access the Azure NetApp Files service until you receive the official confirmation email from the Azure NetApp Files team.
+* Before you can use Azure NetApp Files, you must be granted access to the Azure NetApp Files service. To apply for access, you can use the [Azure NetApp Files waitlist submission form][anf-waitlist] or go to https://azure.microsoft.com/services/netapp/#getting-started. You can't access the Azure NetApp Files service until you receive the official confirmation email from the Azure NetApp Files team.
* After the initial deployment of an AKS cluster, only static provisioning for Azure NetApp Files is supported. * To use dynamic provisioning with Azure NetApp Files, install and configure [NetApp Trident](https://netapp-trident.readthedocs.io/) version 19.07 or later. ## Configure Azure NetApp Files > [!IMPORTANT]
-> Before you can register the *Microsoft.NetApp* resource provider, you must complete the [Azure NetApp Files waitlist submission form][anf-waitlist] for your subscription. You can't register the resource provide until you receive the official confirmation email from the Azure NetApp Files team.
+> Before you can register the *Microsoft.NetApp* resource provider, you must complete the [Azure NetApp Files waitlist submission form][anf-waitlist] for your subscription, or go to https://azure.microsoft.com/services/netapp/#getting-started. You can't register the resource provider until you receive the official confirmation email from the Azure NetApp Files team.
Register the *Microsoft.NetApp* resource provider:
@@ -155,6 +155,8 @@ spec:
storage: 100Gi accessModes: - ReadWriteMany
+ mountOptions:
+ - vers=3
nfs: server: 10.0.0.4 path: /myfilepath2
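For context, a minimal sketch of the persistent volume manifest with the newly added mount option, reconstructed from the surrounding snippet (the volume name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs                 # placeholder name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - vers=3                   # the newly added option: force NFS protocol version 3
  nfs:
    server: 10.0.0.4
    path: /myfilepath2
```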
aks https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-autoscaler.md
@@ -126,14 +126,15 @@ You can also configure more granular details of the cluster autoscaler by changi
| scale-down-unneeded-time | How long a node should be unneeded before it is eligible for scale down | 10 minutes | | scale-down-unready-time | How long an unready node should be unneeded before it is eligible for scale down | 20 minutes | | scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
-| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. | 600 seconds |
+| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
-| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste` | random |
+| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
| skip-nodes-with-local-storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath | true | | skip-nodes-with-system-pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
-| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time. | 10 nodes |
-| new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age". | 10 seconds |
-| max-total-unready-percentage | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations | 45% |
+| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes |
+| new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the Kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they reach a certain age. | 0 seconds |
+| max-total-unready-percentage | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations | 45% |
+| max-node-provision-time | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
| ok-total-unready-count | Number of allowed unready nodes, irrespective of max-total-unready-percentage | 3 nodes | > [!IMPORTANT]
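These settings are changed through the cluster autoscaler profile. A minimal sketch, assuming the `--cluster-autoscaler-profile` parameter of `az aks update`, placeholder resource names, and keys taken from the table above:

```azurecli
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile balance-similar-node-groups=true expander=priority max-node-provision-time=10m
```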
aks https://docs.microsoft.com/en-us/azure/aks/cluster-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
@@ -3,7 +3,7 @@ title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) services: container-service ms.topic: article
-ms.date: 09/21/2020
+ms.date: 01/13/2021
ms.author: jpalma author: palma21 ---
@@ -16,10 +16,52 @@ As part of creating an AKS cluster, you may need to customize your cluster confi
AKS now supports Ubuntu 18.04 as the node operating system (OS) in general availability for clusters in Kubernetes versions higher than 1.18.8. For versions below 1.18.x, AKS Ubuntu 16.04 is still the default base image. From Kubernetes v1.18.x and onward, the default base is AKS Ubuntu 18.04.
-> [!IMPORTANT]
-> Node pools created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
->
-> It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater. Read about how to [test Ubuntu 18.04 node pools](#use-aks-ubuntu-1804-existing-clusters-preview).
+### Use AKS Ubuntu 18.04 Generally Available on new clusters
+
+Clusters created on Kubernetes v1.18 or greater default to the `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
+
+It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater. Read about how to [test Ubuntu 18.04 node pools](#test-aks-ubuntu-1804-generally-available-on-existing-clusters).
+
+To create a cluster using the `AKS Ubuntu 18.04` node image, create a cluster running Kubernetes v1.18 or greater, as shown below.
+
+```azurecli
+az aks create --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
+```
+
+### Use AKS Ubuntu 18.04 Generally Available on existing clusters
+
+Clusters created on Kubernetes v1.18 or greater default to the `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
+
+It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater. Read about how to [test Ubuntu 18.04 node pools](#test-aks-ubuntu-1804-generally-available-on-existing-clusters).
+
+If your clusters or node pools are ready for the `AKS Ubuntu 18.04` node image, you can upgrade them to v1.18 or higher, as shown below.
+
+```azurecli
+az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
+```
+
+If you want to upgrade just one node pool:
+
+```azurecli
+az aks nodepool upgrade --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
+```
+
+### Test AKS Ubuntu 18.04 Generally Available on existing clusters
+
+Node pools created on Kubernetes v1.18 or greater default to the `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
+
+It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to upgrading your production node pools.
+
+To create a node pool using the `AKS Ubuntu 18.04` node image, create a node pool running Kubernetes v1.18 or greater. Your cluster control plane needs to be on v1.18 or greater as well, but your other node pools can remain on an older Kubernetes version.
+Below, we first upgrade the control plane and then create a new node pool on v1.18 that will receive the new node image OS version.
+
+```azurecli
+az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14 --control-plane-only
+
+az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
+```
+
+### Use AKS Ubuntu 18.04 on new clusters (Preview)
The following section explains how you can use and test AKS Ubuntu 18.04 on clusters that aren't yet on Kubernetes version 1.18.x or higher, or that were created before this feature became generally available, by using the OS configuration preview.
@@ -53,8 +95,6 @@ When the status shows as registered, refresh the registration of the `Microsoft.
az provider register --namespace Microsoft.ContainerService ```
-### Use AKS Ubuntu 18.04 on new clusters (Preview)
Configure the cluster to use Ubuntu 18.04 when the cluster is created. Use the `--aks-custom-headers` flag to set Ubuntu 18.04 as the default OS. ```azurecli
aks https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
@@ -167,8 +167,12 @@ spec:
To create the issuer, use the `kubectl apply` command. ```
-$ kubectl apply -f cluster-issuer.yaml --namespace ingress-basic
+kubectl apply -f cluster-issuer.yaml --namespace ingress-basic
+```
+
+The output should be similar to this example:
+```
clusterissuer.cert-manager.io/letsencrypt-staging created ```
@@ -305,8 +309,12 @@ spec:
Create the ingress resource using the `kubectl apply` command. ```
-$ kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
+kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
+```
+
+The output should be similar to this example:
+```
ingress.extensions/hello-world-ingress created ```
aks https://docs.microsoft.com/en-us/azure/aks/ingress-tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
@@ -260,7 +260,7 @@ kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
Both applications are now running on your Kubernetes cluster. However they're configured with a service of type `ClusterIP` and aren't accessible from the internet. To make them publicly available, create a Kubernetes ingress resource. The ingress resource configures the rules that route traffic to one of the two applications.
-In the following example, traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN* is routed to the *aks-helloworld* service. Traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two* is routed to the *aks-helloworld-two* service. Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN/static* is routed to the service named *aks-helloworld* for static assets.
+In the following example, traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN* is routed to the *aks-helloworld-one* service. Traffic to the address *hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two* is routed to the *aks-helloworld-two* service. Traffic to *hello-world-ingress.MY_CUSTOM_DOMAIN/static* is routed to the service named *aks-helloworld-one* for static assets.
> [!NOTE] > If you configured an FQDN for the ingress controller IP address instead of a custom domain, use the FQDN instead of *hello-world-ingress.MY_CUSTOM_DOMAIN*. For example if your FQDN is *demo-aks-ingress.eastus.cloudapp.azure.com*, replace *hello-world-ingress.MY_CUSTOM_DOMAIN* with *demo-aks-ingress.eastus.cloudapp.azure.com* in `hello-world-ingress.yaml`.
aks https://docs.microsoft.com/en-us/azure/aks/node-upgrade-github-actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-upgrade-github-actions.md
@@ -155,7 +155,7 @@ To create the steps to execute Azure CLI commands.
- name: Upgrade node images uses: Azure/cli@v1.0.0 with:
- inlineScript: az aks upgrade -g {resourceGroupName} -n {aksClusterName} --node-image-only
+ inlineScript: az aks upgrade -g {resourceGroupName} -n {aksClusterName} --node-image-only --yes
``` > [!TIP]
aks https://docs.microsoft.com/en-us/azure/aks/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-baseline.md
@@ -449,7 +449,7 @@ Enable Azure AD Multi-Factor Authentication (MFA) and follow Security Center's I
### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks **Guidance**: Use a Privileged Access Workstation (PAW), with Multi-Factor Authentication (MFA), configured to log into your specified Azure Kubernetes Service (AKS) clusters and related resources.-- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
aks https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
@@ -52,7 +52,7 @@ Create an AKS cluster with a managed identity and pod-managed identity enabled.
```azurecli-interactive az group create --name myResourceGroup --location eastus
-az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity
+az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
``` Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your development computer.
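A minimal sketch of that command, reusing the names from the snippet above:

```azurecli
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```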
api-management https://docs.microsoft.com/en-us/azure/api-management/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
@@ -442,7 +442,7 @@ Alternatively, the sign-in/sign-up process can be further customized through del
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
@@ -5,7 +5,7 @@ author: ccompy
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd ms.topic: article
-ms.date: 06/06/2019
+ms.date: 12/17/2020
ms.author: ccompy ms.custom: seodec18
@@ -14,23 +14,23 @@ ms.custom: seodec18
By setting up access restrictions, you can define a priority-ordered allow/deny list that controls network access to your app. The list can include IP addresses or Azure Virtual Network subnets. When there are one or more entries, an implicit *deny all* exists at the end of the list.
-The access-restriction capability works with all Azure App Service-hosted workloads. The workloads can include web apps, API apps, Linux apps, Linux container apps, and functions.
+The access restriction capability works with all Azure App Service-hosted workloads. The workloads can include web apps, API apps, Linux apps, Linux container apps, and Functions.
-When a request is made to your app, the FROM address is evaluated against the IP address rules in your access-restriction list. If the FROM address is in a subnet that's configured with service endpoints to Microsoft.Web, the source subnet is compared against the virtual network rules in your access-restriction list. If the address isn't allowed access based on the rules in the list, the service replies with an [HTTP 403](https://en.wikipedia.org/wiki/HTTP_403) status code.
+When a request is made to your app, the FROM address is evaluated against the rules in your access restriction list. If the FROM address is in a subnet that's configured with service endpoints to Microsoft.Web, the source subnet is compared against the virtual network rules in your access restriction list. If the address isn't allowed access based on the rules in the list, the service replies with an [HTTP 403](https://en.wikipedia.org/wiki/HTTP_403) status code.
-The access-restriction capability is implemented in the App Service front-end roles, which are upstream of the worker hosts where your code runs. Therefore, access restrictions are effectively network access-control lists (ACLs).
+The access restriction capability is implemented in the App Service front-end roles, which are upstream of the worker hosts where your code runs. Therefore, access restrictions are effectively network access-control lists (ACLs).
-The ability to restrict access to your web app from an Azure virtual network is enabled by [service endpoints][serviceendpoints]. With service endpoints, you can restrict access to a multitenant service from selected subnets. It doesn't work to restrict traffic to apps that are hosted in an App Service Environment. If you're in an App Service Environment, you can control access to your app by applying IP address rules.
+The ability to restrict access to your web app from an Azure virtual network is enabled by [service endpoints][serviceendpoints]. With service endpoints, you can restrict access to a multi-tenant service from selected subnets. It doesn't work to restrict traffic to apps that are hosted in an App Service Environment. If you're in an App Service Environment, you can control access to your app by applying IP address rules.
> [!NOTE] > The service endpoints must be enabled both on the networking side and for the Azure service that they're being enabled with. For a list of Azure services that support service endpoints, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). >
-![Diagram of the flow of access restrictions.](media/app-service-ip-restrictions/access-restrictions-flow.png)
+:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-flow.png" alt-text="Diagram of the flow of access restrictions.":::
-## Add or edit access-restriction rules in the portal
+## Manage access restriction rules in the portal
-To add an access-restriction rule to your app, do the following:
+To add an access restriction rule to your app, do the following:
1. Sign in to the Azure portal.
@@ -38,47 +38,53 @@ To add an access-restriction rule to your app, do the following:
1. On the **Networking** pane, under **Access Restrictions**, select **Configure Access Restrictions**.
- ![Screenshot of the App Service networking options pane in the Azure portal.](media/app-service-ip-restrictions/access-restrictions.png)
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions.png" alt-text="Screenshot of the App Service networking options pane in the Azure portal.":::
-1. On the **Access Restrictions** page, review the list of access-restriction rules that are defined for your app.
+1. On the **Access Restrictions** page, review the list of access restriction rules that are defined for your app.
- ![Screenshot of the Access Restrictions page in the Azure portal, showing the list of access-restriction rules defined for the selected app.](media/app-service-ip-restrictions/access-restrictions-browse.png)
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-browse.png" alt-text="Screenshot of the Access Restrictions page in the Azure portal, showing the list of access restriction rules defined for the selected app.":::
- The list displays all the current restrictions that are applied to the app. If you have a virtual-network restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If no restrictions are defined on your app, the app is accessible from anywhere.
+ The list displays all the current restrictions that are applied to the app. If you have a virtual network restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If no restrictions are defined on your app, the app is accessible from anywhere.
-### Add an access-restriction rule
+### Add an access restriction rule
-To add an access-restriction rule to your app, on the **Access Restrictions** pane, select **Add rule**. After you add a rule, it becomes effective immediately.
+To add an access restriction rule to your app, on the **Access Restrictions** pane, select **Add rule**. After you add a rule, it becomes effective immediately.
Rules are enforced in priority order, starting from the lowest number in the **Priority** column. An implicit *deny all* is in effect after you add even a single rule.
-On the **Add IP Restriction** pane, when you create a rule, do the following:
+On the **Add Access Restriction** pane, when you create a rule, do the following:
1. Under **Action**, select either **Allow** or **Deny**.
- ![Screenshot of the "Add IP Restriction" pane.](media/app-service-ip-restrictions/access-restrictions-ip-add.png)
-
-1. Optionally, enter a name and description of the rule.
-1. In the **Type** drop-down list, select the type of rule.
-1. In the **Priority** box, enter a priority value.
-1. In the **Subscription**, **Virtual Network**, and **Subnet** drop-down lists, select what you want to restrict access to.
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-ip-add.png?v2" alt-text="Screenshot of the 'Add Access Restriction' pane.":::
-### Set an IP address-based rule
+1. Optionally, enter a name and description of the rule.
+1. In the **Priority** box, enter a priority value.
+1. In the **Type** drop-down list, select the type of rule.
-Follow the procedure as outlined in the preceding section, but with the following variation:
-* For step 3, in the **Type** drop-down list, select **IPv4** or **IPv6**.
+The different types of rules are described in the following sections.
-Specify the IP address in Classless Inter-Domain Routing (CIDR) notation for both the IPv4 and IPv6 addresses. To specify an address, you can use something like *1.2.3.4/32*, where the first four octets represent your IP address and */32* is the mask. The IPv4 CIDR notation for all addresses is 0.0.0.0/0. To learn more about CIDR notation, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
+> [!NOTE]
+> - There is a limit of 512 access restriction rules. If you require more than 512 access restriction rules, we suggest that you consider installing a standalone security product, such as Azure Front Door, Azure App Gateway, or an alternative WAF.
+>
+#### Set an IP address-based rule
-## Use service endpoints
+Follow the procedure as outlined in the preceding section, but with the following addition:
+* For step 4, in the **Type** drop-down list, select **IPv4** or **IPv6**.
-By using service endpoints, you can restrict access to selected Azure virtual network subnets. To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
+Specify the **IP Address Block** in Classless Inter-Domain Routing (CIDR) notation for both the IPv4 and IPv6 addresses. To specify an address, you can use something like *1.2.3.4/32*, where the first four octets represent your IP address and */32* is the mask. The IPv4 CIDR notation for all addresses is 0.0.0.0/0. To learn more about CIDR notation, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
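The same rule can be created from the command line; a minimal sketch using the `az webapp config access-restriction add` command shown later in this article (resource names and the address are placeholders):

```azurecli
az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
  --rule-name 'IPv4 example rule' --action Allow --ip-address 1.2.3.4/32 --priority 100
```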
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+#### Set a service endpoint-based rule
-If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
+* For step 4, in the **Type** drop-down list, select **Virtual Network**.
+
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-vnet-add.png?v2" alt-text="Screenshot of the 'Add Restriction' pane with the Virtual Network type selected.":::
+
+In the **Subscription**, **Virtual Network**, and **Subnet** drop-down lists, select what you want to restrict access to.
+
+By using service endpoints, you can restrict access to selected Azure virtual network subnets. If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
-![Screenshot of the "Add IP Restriction" pane with the Virtual Network type selected.](media/app-service-ip-restrictions/access-restrictions-vnet-add.png)
+If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
You can't use service endpoints to restrict access to apps that run in an App Service Environment. When your app is in an App Service Environment, you can control access to it by applying IP access rules.
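For a scripted variant, a sketch of a service endpoint-based rule, assuming the `--vnet-name`, `--subnet`, and `--ignore-missing-vnet-service-endpoint` parameters of `az webapp config access-restriction add` (all names are placeholders):

```azurecli
# Allow traffic only from one subnet. With --ignore-missing-vnet-service-endpoint false
# (the default), the Microsoft.Web service endpoint is enabled on the subnet if it's missing.
az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
  --rule-name 'VNet example rule' --action Allow --priority 200 \
  --vnet-name myVnet --subnet mySubnet --ignore-missing-vnet-service-endpoint false
```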
@@ -86,72 +92,100 @@ With service endpoints, you can configure your app with application gateways or
> [!NOTE] > - Service endpoints aren't currently supported for web apps that use IP Secure Sockets Layer (SSL) virtual IP (VIP).
-> - There is a limit of 512 rows of IP or service-endpoint restrictions. If you require more than 512 rows of restrictions, we suggest that you consider installing a standalone security product, such as Azure Front Door, Azure App Gateway, or a WAF.
>
+#### Set a service tag-based rule (preview)
+
+* For step 4, in the **Type** drop-down list, select **Service Tag (preview)**.
-## Manage access-restriction rules
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-service-tag-add.png" alt-text="Screenshot of the 'Add Restriction' pane with the Service Tag type selected.":::
-You can edit or delete an existing access-restriction rule.
+Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags].
+
+The following list of service tags is supported in access restriction rules during the preview phase (a PowerShell sketch follows the list):
+* ActionGroup
+* AzureCloud
+* AzureCognitiveSearch
+* AzureConnectors
+* AzureEventGrid
+* AzureFrontDoor.Backend
+* AzureMachineLearning
+* AzureSignalR
+* AzureTrafficManager
+* LogicApps
+* ServiceFabric
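A minimal PowerShell sketch of a service tag rule, using the same `Add-AzWebAppAccessRestrictionRule` cmdlet as the Front Door example later in this article (the resource names and the chosen tag are placeholders):

```azurepowershell-interactive
# Allow inbound traffic only from the IP ranges behind the AzureEventGrid service tag.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Service tag example rule" -Priority 150 -Action Allow -ServiceTag AzureEventGrid
```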
### Edit a rule
-1. To begin editing an existing access-restriction rule, on the **Access Restrictions** page, double-click the rule you want to edit.
+1. To begin editing an existing access restriction rule, on the **Access Restrictions** page, select the rule you want to edit.
-1. On the **Edit IP Restriction** pane, make your changes, and then select **Update rule**. Edits are effective immediately, including changes in priority ordering.
+1. On the **Edit Access Restriction** pane, make your changes, and then select **Update rule**. Edits are effective immediately, including changes in priority ordering.
- ![Screenshot of the "Edit IP Restriction" pane in the Azure portal, showing the fields for an existing access-restriction rule.](media/app-service-ip-restrictions/access-restrictions-ip-edit.png)
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-ip-edit.png?v2" alt-text="Screenshot of the 'Edit Access Restriction' pane in the Azure portal, showing the fields for an existing access restriction rule.":::
> [!NOTE]
- > When you edit a rule, you can't switch between an IP address rule and a virtual network rule.
-
- ![Screenshot of the "Edit IP Restriction" pane in Azure portal, showing the settings for a virtual network rule.](media/app-service-ip-restrictions/access-restrictions-vnet-edit.png)
+ > When you edit a rule, you can't switch between rule types.
### Delete a rule To delete a rule, on the **Access Restrictions** page, select the ellipsis (**...**) next to the rule you want to delete, and then select **Remove**.
-![Screenshot of the "Access Restrictions" page, showing the "Remove" ellipsis next to the access-restriction rule to be deleted.](media/app-service-ip-restrictions/access-restrictions-delete.png)
+:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-delete.png" alt-text="Screenshot of the 'Access Restrictions' page, showing the 'Remove' ellipsis next to the access restriction rule to be deleted.":::
-## Block a single IP address
+## Access restriction advanced scenarios
+The following sections describe some advanced scenarios using access restrictions.
+### Block a single IP address
-When you add your first IP restriction rule, the service adds an explicit *Deny all* rule with a priority of 2147483647. In practice, the explicit *Deny all* rule is the final rule to be executed, and it blocks access to any IP address that's not explicitly allowed by an *Allow* rule.
+When you add your first access restriction rule, the service adds an explicit *Deny all* rule with a priority of 2147483647. In practice, the explicit *Deny all* rule is the final rule to be executed, and it blocks access to any IP address that's not explicitly allowed by an *Allow* rule.
For a scenario where you want to explicitly block a single IP address or a block of IP addresses, but allow access to everything else, add an explicit *Allow All* rule.
-![Screenshot of the "Access Restrictions" page in the Azure portal, showing a single blocked IP address.](media/app-service-ip-restrictions/block-single-address.png)
+:::image type="content" source="media/app-service-ip-restrictions/block-single-address.png" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing a single blocked IP address.":::
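A minimal CLI sketch of this pattern, with placeholder names and an example address:

```azurecli
# Deny the single address first (lower priority numbers are evaluated first) ...
az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
  --rule-name 'Block single IP' --action Deny --ip-address 203.0.113.10/32 --priority 100

# ... then explicitly allow everything else.
az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
  --rule-name 'Allow all' --action Allow --ip-address 0.0.0.0/0 --priority 200
```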
-## Restrict access to an SCM site
+### Restrict access to an SCM site
In addition to being able to control access to your app, you can restrict access to the SCM site that's used by your app. The SCM site is both the web deploy endpoint and the Kudu console. You can assign access restrictions to the SCM site from the app separately or use the same set of restrictions for both the app and the SCM site. When you select the **Same restrictions as \<app name>** check box, everything is blanked out. If you clear the check box, your SCM site settings are reapplied.
-![Screenshot of the "Access Restrictions" page in the Azure portal, showing that no access restrictions are set for the SCM site or the app.](media/app-service-ip-restrictions/access-restrictions-scm-browse.png)
+:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-scm-browse.png" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing that no access restrictions are set for the SCM site or the app.":::
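To script this, a sketch assuming the `--scm-site` parameter of `az webapp config access-restriction add` (names and address are placeholders):

```azurecli
# Apply the rule to the SCM (Kudu) site rather than the app itself.
az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
  --rule-name 'SCM example rule' --action Allow --ip-address 1.2.3.4/32 --priority 100 \
  --scm-site true
```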
-## Manage access-restriction rules programatically
+### Restrict access to a specific Azure Front Door instance (preview)
+Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to originate only from Azure Front Door. To ensure traffic originates only from your specific instance, you will need to further filter the incoming requests based on the unique HTTP header that Azure Front Door sends. During preview you can achieve this with PowerShell or REST/ARM.
-You can add access restrictions programatically by doing either of the following:
+* PowerShell example (Front Door ID can be found in the Azure portal):
+
+ ```azurepowershell-interactive
+ $frontdoorId = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
+ -Name "Front Door example rule" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend `
+ -HttpHeader @{'x-azure-fdid' = $frontdoorId}
+ ```
+## Manage access restriction rules programmatically
+
+You can add access restrictions programmatically by doing either of the following:
* Use [the Azure CLI](/cli/azure/webapp/config/access-restriction?view=azure-cli-latest&preserve-view=true). For example: ```azurecli-interactive az webapp config access-restriction add --resource-group ResourceGroup --name AppName \
- --rule-name 'IP example rule' --action Allow --ip-address 122.133.144.0/24 --priority 100
+ --rule-name 'IP example rule' --action Allow --ip-address 122.133.144.0/24 --priority 100
```
-* Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule?view=azps-3.1.0&preserve-view=true). For example:
+* Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule?view=azps-5.2.0&preserve-view=true). For example:
```azurepowershell-interactive Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" -Name "Ip example rule" -Priority 100 -Action Allow -IpAddress 122.133.144.0/24 ```
+ > [!NOTE]
+ > Working with service tags, HTTP headers, or multi-source rules requires at least version 5.1.0. You can verify the version of the installed module with **Get-InstalledModule -Name Az**
You can also set values manually by doing either of the following: * Use an [Azure REST API](/rest/api/azure/) PUT operation on the app configuration in Azure Resource Manager. The location for this information in Azure Resource Manager is:
- management.azure.com/subscriptions/**subscription ID**/resourceGroups/**resource groups**/providers/Microsoft.Web/sites/**web app name**/config/web?api-version=2018-02-01
+ management.azure.com/subscriptions/**subscription ID**/resourceGroups/**resource groups**/providers/Microsoft.Web/sites/**web app name**/config/web?api-version=2020-06-01
-* Use an ARM template. As an example, you can use resources.azure.com and edit the ipSecurityRestrictions block to add the required JSON.
+* Use a Resource Manager template. As an example, you can use resources.azure.com and edit the ipSecurityRestrictions block to add the required JSON.
The JSON syntax for the earlier example is:
@@ -169,7 +203,27 @@ You can also set values manually by doing either of the following:
} } ```-
+ The JSON syntax for an advanced example using service tag and http header restriction is:
+ ```json
+ {
+ "properties": {
+ "ipSecurityRestrictions": [
+ {
+ "ipAddress": "AzureFrontDoor.Backend",
+ "tag": "ServiceTag",
+ "action": "Allow",
+ "priority": 100,
+ "name": "Azure Front Door example",
+ "headers": {
+ "x-azure-fdid": [
+ "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ]
+ }
+ }
+ ]
+ }
+ }
+ ```
## Set up Azure Functions access restrictions Access restrictions are also available for function apps with the same functionality as App Service plans. When you enable access restrictions, you also disable the Azure portal code editor for any disallowed IPs.
@@ -180,3 +234,4 @@ Access restrictions are also available for function apps with the same functiona
<!--Links--> [serviceendpoints]: ../virtual-network/virtual-network-service-endpoints-overview.md
+[servicetags]: ../virtual-network/service-tags-overview.md
\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/deploy-ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-ftp.md
@@ -111,7 +111,7 @@ To determine a deployment or runtime issue, see [Deployment vs. runtime issues](
### I'm not able to FTP and publish my code. How can I resolve the issue? Check that you've entered the correct hostname and [credentials](#open-ftp-dashboard). Check also that the following FTP ports on your machine are not blocked by a firewall: -- FTP control connection port: 21
+- FTP control connection port: 21, 990
- FTP data connection port: 989, 10001-10300 ### How can I connect to FTP in Azure App Service via passive mode?
app-service https://docs.microsoft.com/en-us/azure/app-service/environment/using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using.md
@@ -46,7 +46,7 @@ To create an app in an ASE:
> Linux apps and Windows apps can't be in the same App Service plan, but they can be in the same App Service Environment. >
-1. Select ** Next: Monitoring** If you want to enable App Insights with your app, you can do it here during the creation flow.
+1. Select **Next: Monitoring**. If you want to enable App Insights with your app, you can do it here during the creation flow.
1. Select **Next: Tags**. Add any tags you want to the app.
app-service https://docs.microsoft.com/en-us/azure/app-service/networking-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking-features.md
@@ -106,7 +106,7 @@ This feature allows you to build a list of allow and deny rules that are evaluat
The IP-based access restrictions feature helps when you want to restrict the IP addresses that can be used to reach your app. Both IPv4 and IPv6 are supported. Some use cases for this feature: * Restrict access to your app from a set of well-defined addresses.
-* Restrict access to traffic coming through a load-balancing service, like Azure Front Door. If you want to lock down your inbound traffic to Azure Front Door, create rules to allow traffic from 147.243.0.0/16 and 2a01:111:2050::/44.
+* Restrict access to traffic coming through an external load-balancing service or other network appliances with known egress IP addresses.
To learn how to enable this feature, see [Configuring access restrictions][iprestrictions].
@@ -122,7 +122,21 @@ Some use cases for this feature:
![Diagram that illustrates the use of service endpoints with Application Gateway.](media/networking-features/service-endpoints-appgw.png) To learn more about configuring service endpoints with your app, see [Azure App Service access restrictions][serviceendpoints].
+#### Access restriction rules based on service tags (preview)
+[Azure service tags][servicetags] are well-defined sets of IP addresses for Azure services. Service tags group the IP ranges used in various Azure services and are often also further scoped to specific regions. This allows you to filter *inbound* traffic from specific Azure services.
+For a full list of tags and more information, visit the service tag link above.
+To learn how to enable this feature, see [Configuring access restrictions][iprestrictions].
+#### HTTP header filtering for access restriction rules (preview)
+For each access restriction rule, you can add additional HTTP header filtering. This allows you to further inspect the incoming request and filter based on specific HTTP header values. Each header can have up to 8 values per rule. The following list of HTTP headers is currently supported:
+* X-Forwarded-For
+* X-Forwarded-Host
+* X-Azure-FDID
+* X-FD-HealthProbe
+
+Some use cases for HTTP header filtering are (a configuration sketch follows this list):
+* Restrict access to traffic from proxy servers forwarding the host name
+* Restrict access to a specific Azure Front Door instance with a service tag rule and X-Azure-FDID header restriction
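As referenced above, a minimal sketch of such a rule, reusing the `Add-AzWebAppAccessRestrictionRule` cmdlet shown in the App Service access restrictions article (the Front Door ID is a placeholder):

```azurepowershell-interactive
# Allow only traffic from one specific Azure Front Door instance:
# a service tag rule scoped further by the X-Azure-FDID header.
$frontdoorId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder: your Front Door ID
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Front Door instance rule" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend `
    -HttpHeader @{'x-azure-fdid' = $frontdoorId}
```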
### Private Endpoint Private Endpoint is a network interface that connects you privately and securely to your Web App by Azure private link. Private Endpoint uses a private IP address from your virtual network, effectively bringing the web app into your virtual network. This feature is only for *inbound* flows to your web app.
@@ -295,4 +309,5 @@ If you scan App Service, you'll find several ports that are exposed for inbound
[vnetintegration]: ./web-sites-integrate-with-vnet.md [networkinfo]: ./environment/network-info.md [appgwserviceendpoints]: ./networking/app-gateway-with-service-endpoints.md
-[privateendpoints]: ./networking/private-endpoint.md
\ No newline at end of file
+[privateendpoints]: ./networking/private-endpoint.md
+[servicetags]: ../virtual-network/service-tags-overview.md
\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/networking/app-gateway-with-service-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/app-gateway-with-service-endpoints.md
@@ -24,20 +24,20 @@ There are three variations of App Service that require slightly different config
## Integration with App Service (multi-tenant) App Service (multi-tenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we'll use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
-![Diagram shows the Internet flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a firewall icon to instances of apps in App Service.](./media/app-gateway-with-service-endpoints/service-endpoints-appgw.png)
+:::image type="content" source="./media/app-gateway-with-service-endpoints/service-endpoints-appgw.png" alt-text="Diagram shows the Internet flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a firewall icon to instances of apps in App Service.":::
There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints will ensure all network traffic leaving the subnet towards the App Service will be tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference. ## Using Azure portal With Azure portal, you follow four steps to provision and configure the setup. If you have existing resources, you can skip the first steps.
-1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.Net Core Quickstart](../quickstart-dotnetcore.md)
+1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.NET Core Quickstart](../quickstart-dotnetcore.md)
2. Create an Application Gateway using the [portal Quickstart](../../application-gateway/quick-create-portal.md), but skip the Add backend targets section. 3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app-portal.md), but skip the Restrict access section.
-4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#use-service-endpoints).
+4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
You can now access the App Service through Application Gateway, but if you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
-![Screenshot shows the text of an Error 403 - This web app is stopped.](./media/app-gateway-with-service-endpoints/web-site-stopped.png)
+![Screenshot shows the text of an Error 403 - Forbidden.](./media/app-gateway-with-service-endpoints/website-403-forbidden.png)
## Using Azure Resource Manager template The [Resource Manager deployment template][template-app-gateway-app-service-complete] will provision a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you'll have to clone the repo or download the template and edit it.
app-service https://docs.microsoft.com/en-us/azure/app-service/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-baseline.md
@@ -436,7 +436,7 @@ Implement multifactor authentication for Azure AD. Administrators need to ensure
**Guidance**: Use Privileged Access Workstations (PAW) with multifactor authentication configured to log into and configure Azure resources. -- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/security-baseline.md
@@ -405,7 +405,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
**Guidance**: Use PAWs (privileged access workstations) with MFA configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
attestation https://docs.microsoft.com/en-us/azure/attestation/quickstart-azure-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-azure-cli.md
@@ -11,7 +11,7 @@ ms.author: mbaldwin
--- # Quickstart: Set up Azure Attestation with Azure CLI
-Get started with Azure Attestation by using Azure CLI to set up attestation.
+Get started with [Azure Attestation by using Azure CLI](/cli/azure/ext/attestation/attestation?view=azure-cli-latest).
## Get started
@@ -60,7 +60,7 @@ Get started with Azure Attestation by using Azure CLI to set up attestation.
Here are commands you can use to create and manage the attestation provider:
-1. Run the [az attestation create](/cli/azure/ext/attestation/attestation?view=azure-cli-latest#ext_attestation_az_attestation_create) command to create an attestation provider:
+1. Run the [az attestation create](/cli/azure/ext/attestation/attestation?view=azure-cli-latest#ext_attestation_az_attestation_create) command to create an attestation provider without a policy signing requirement:
```azurecli az attestation create --name "myattestationprovider" --resource-group "MyResourceGroup" --location westus
@@ -123,7 +123,7 @@ To set policy in JWT format for a given kind of attestation type using file path
```azurecli az attestation policy set --name "myattestationprovider" --resource-group "MyResourceGroup" \
+--attestation-type SGX-IntelSDK -f "{file_path}" --policy-format JWT
``` ## Next steps
attestation https://docs.microsoft.com/en-us/azure/attestation/quickstart-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-portal.md new file mode 100644
@@ -0,0 +1,179 @@
+---
+title: Set up Azure Attestation with Azure portal
+description: How to set up and configure an attestation provider using Azure portal.
+services: attestation
+author: msmbaldwin
+ms.service: attestation
+ms.topic: overview
+ms.date: 08/31/2020
+ms.author: mbaldwin
++
+---
+# Quickstart: Set up Azure Attestation with Azure portal
+
+Follow the steps below to manage an attestation provider using the Azure portal.
+
+## Attestation provider
+
+### Create an attestation provider
+
+#### To configure the provider with unsigned policies
+
+1. From the Azure portal menu, or from the Home page, select **Create a resource**
+2. In the Search box, enter **attestation**
+3. From the results list, choose **Microsoft Azure Attestation**
+4. On the Microsoft Azure Attestation page, choose **Create**
+5. On the Create attestation provider page, provide the following inputs:
+
+ **Subscription**: Choose a subscription
+
+ **Resource Group**: select an existing resource group or choose **Create new** and enter a resource group name
+
+ **Name**: A unique name is required
+
+ **Location**: choose a location
+
+ **Policy signer certificates file**: To configure the provider with unsigned policies, don't upload a policy signer certificates file
+6. After providing the required inputs, click **Review+Create**
+7. Fix any validation issues and click **Create**.
+
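For reference, a sketch of the equivalent Azure CLI command from the Azure Attestation CLI quickstart (the names are examples):

```azurecli
az attestation create --name "myattestationprovider" --resource-group "MyResourceGroup" --location westus
```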
+#### To configure the provider with signed policies
+
+1. From the Azure portal menu, or from the Home page, select **Create a resource**
+2. In the Search box, enter **attestation**
+3. From the results list, choose **Microsoft Azure Attestation**
+4. On the Microsoft Azure Attestation page, choose **Create**
+5. On the Create attestation provider page, provide the following information:
+
+ a. **Subscription**: Choose a subscription
+
+ b. **Resource Group**: select an existing resource group or choose **Create new** and enter a resource group name
+
+ c. **Name**: A unique name is required
+
+ d. **Location**: choose a location
+
+ e. **Policy signer certificates file**: To configure the attestation provider with policy signing certificates, upload the certificates file. See examples [here](/azure/attestation/policy-signer-examples)
+6. After providing the required inputs, click **Review+Create**
+7. Fix any validation issues and click **Create**.
+
+### View attestation provider
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter attestation provider name and select it
+
+### Delete attestation provider
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter attestation provider name
+3. Select the checkbox and click **Delete**
+4. Type **yes** and click **Delete**
+
+Alternatively:
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter attestation provider name
+3. Select the attestation provider and navigate to overview page
+4. Click **Delete** in the top menu and click **Yes**
++
+## Attestation policy signers
+
+### View policy signer certificates
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter attestation provider name
+3. Select the attestation provider and navigate to overview page
+4. Click **Policy signer certificates** in left-side resource menu or in the bottom pane
+5. Click **Download policy signer certificates** (the button is disabled for attestation providers created without a policy signing requirement)
+6. The downloaded text file contains all certificates in JWS format.
+a. Verify the certificate count and the certificates downloaded.
+
+### Add policy signer certificate
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter attestation provider name
+3. Select the attestation provider and navigate to overview page
+4. Click **Policy signer certificates** in left-side resource menu or in the bottom pane
+5. Click **Add** in the top menu (the button is disabled for attestation providers created without a policy signing requirement)
+6. Upload policy signer certificate file and click **Add**. See examples [here](/azure/attestation/policy-signer-examples)
+
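+A hedged CLI sketch of the same add operation; the `--signer` parameter taking the JWS content is an assumption:
+
+```azurecli
+# Sketch: add a policy signer certificate. The signer is passed as a
+# signed JWS, here read from a file built per the linked examples.
+# NOTE: the --signer flag name is an assumption.
+az attestation signer add \
+  --name "myattestationprovider" \
+  --resource-group "MyResourceGroup" \
+  --signer "$(cat ./addSignerCertificate.jws.txt)"
+```
+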
+### Delete policy signer certificate
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter the attestation provider name
+3. Select the attestation provider and navigate to the overview page
+4. Click **Policy signer certificates** in the left-side resource menu or in the bottom pane
+5. Click **Delete** in the top menu (the button is disabled for attestation providers created without a policy signing requirement)
+6. Upload the policy signer certificate file and click **Delete**. For examples, see [policy signer certificate examples](/azure/attestation/policy-signer-examples)
+
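+The corresponding removal, with the same caveats as the add sketch:
+
+```azurecli
+# Sketch: remove a policy signer certificate, again via a signed JWS.
+# NOTE: the command and --signer flag names are assumptions.
+az attestation signer remove \
+  --name "myattestationprovider" \
+  --resource-group "MyResourceGroup" \
+  --signer "$(cat ./removeSignerCertificate.jws.txt)"
+```
+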
+## Attestation policy
+
+### View attestation policy
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter the attestation provider name
+3. Select the attestation provider and navigate to the overview page
+4. Click **Policy** in the left-side resource menu or in the bottom pane
+5. Select the preferred **Attestation Type** and view the **Current policy**
+
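+The current policy for a given attestation type can also be read from the CLI. A sketch; `SGX-OpenEnclave` is one example of the attestation-type values:
+
+```azurecli
+# Sketch: view the current policy for one attestation type.
+az attestation policy show \
+  --name "myattestationprovider" \
+  --resource-group "MyResourceGroup" \
+  --attestation-type SGX-OpenEnclave
+```
+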
+### Configure attestation policy
+
+#### When attestation provider is created without policy signing requirement
+
+##### Upload policy in JWT format
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter the attestation provider name
+3. Select the attestation provider and navigate to the overview page
+4. Click **Policy** in the left-side resource menu or in the bottom pane
+5. Click **Configure** in the top menu
+6. When the attestation provider is created without a policy signing requirement, you can upload a policy in **JWT** or **Text** format
+7. Select **Policy Format** as **JWT**
+8. Upload a policy file with the policy content in **JWT** format (unsigned or signed) and click **Save**. For examples, see [attestation policy examples](/azure/attestation/policy-examples)
+
+   With the file upload option, the policy preview is shown in text format and isn't editable.
+
+9. Click **Refresh** in the top menu to view the configured policy
+
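+A CLI sketch of the same upload; `--policy-format` and `--new-attestation-policy` are parameter names as assumed here, so verify them before relying on this:
+
+```azurecli
+# Sketch: set a policy supplied as a (signed or unsigned) JWT,
+# read from a local file into the command line.
+# NOTE: flag names are assumptions; check az attestation policy set --help.
+az attestation policy set \
+  --name "myattestationprovider" \
+  --resource-group "MyResourceGroup" \
+  --attestation-type SGX-OpenEnclave \
+  --policy-format JWT \
+  --new-attestation-policy "$(cat ./policy.jwt.txt)"
+```
+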
+##### Upload policy in Text format
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter the attestation provider name
+3. Select the attestation provider and navigate to the overview page
+4. Click **Policy** in the left-side resource menu or in the bottom pane
+5. Click **Configure** in the top menu
+6. When the attestation provider is created without a policy signing requirement, you can upload a policy in **JWT** or **Text** format
+7. Select **Policy Format** as **Text**
+8. Upload a policy file with content in **Text** format, or enter the policy content in the text area, and click **Save**. For examples, see [attestation policy examples](/azure/attestation/policy-examples)
+
+   With the file upload option, the policy preview is shown in text format and isn't editable.
+
+9. Click **Refresh** to view the configured policy
+
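+The text-format equivalent, with the same caveats; that omitting `--policy-format` defaults to text is an assumption:
+
+```azurecli
+# Sketch: set a policy supplied as plain policy text.
+az attestation policy set \
+  --name "myattestationprovider" \
+  --resource-group "MyResourceGroup" \
+  --attestation-type SGX-OpenEnclave \
+  --new-attestation-policy "$(cat ./policy.txt)"
+```
+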
+#### When attestation provider is created with policy signing requirement
+
+##### Upload policy in JWT format
+
+1. From the Azure portal menu, or from the Home page, select **All resources**
+2. In the filter box, enter the attestation provider name
+3. Select the attestation provider and navigate to the overview page
+4. Click **Policy** in the left-side resource menu or in the bottom pane
+5. Click **Configure** in the top menu
+6. When the attestation provider is created with a policy signing requirement, you can upload a policy only in **signed JWT** format
+7. Upload a policy file in **signed JWT** format and click **Save**. For examples, see [attestation policy examples](/azure/attestation/policy-examples)
+
+   With the file upload option, the policy preview is shown in text format and isn't editable.
+
+8. Click **Refresh** to view the configured policy
+
+
automation https://docs.microsoft.com/en-us/azure/automation/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-baseline.md
@@ -400,7 +400,7 @@ You can also enable a Just-In-Time / Just-Enough-Access by using Azure AD Privil
**Guidance**: Use PAWs with multi-factor authentication configured to log into and configure Azure Automation Account resources in production environments.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
@@ -554,7 +554,7 @@ Follow Azure Security Center recommendations for encryption at rest and encrypti
* [Understand encryption in transit with Azure](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit)
-* [Azure Automation TLS 1.2 enforcement](https://azure.microsoft.com/updates/azure-automation-tls12-enforcement/)
+* [Azure Automation TLS 1.2 enforcement](/azure/active-directory/hybrid/reference-connect-tls-enforcement)
**Azure Security Center monitoring**: Yes
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-best-practices.md
@@ -86,6 +86,10 @@ App Configuration offers the option to bulk [import](./howto-import-export-data.
App Configuration is a regional service. For applications with different configurations per region, storing these configurations in one instance can create a single point of failure. Deploying one App Configuration instance per region across multiple regions may be a better option. It can help with regional disaster recovery, performance, and security siloing. Configuring by region also improves latency and uses separate throttling quotas, since throttling is per instance. To apply disaster recovery mitigation, you can use [multiple configuration stores](./concept-disaster-recovery.md).
+## Client Applications in App Configuration
+
+Excessive requests to App Configuration can result in throttling or overage charges. Applications should take advantage of the caching and intelligent refreshing currently available to optimize the number of requests they send. This approach can be mirrored in high-volume client applications by avoiding direct connections to the configuration store. Instead, client applications connect to a custom service, and this service communicates with the configuration store. This proxy solution can ensure that the client applications don't approach the throttling limit on the configuration store. For more information on throttling, see [the FAQ](https://docs.microsoft.com/azure/azure-app-configuration/faq#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration).
+ ## Next steps * [Keys and values](./concept-key-value.md)\ No newline at end of file
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-feature-filters-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
@@ -50,19 +50,19 @@ You can configure these settings for feature flags defined in Azure App Configur
> [!div class="mx-imgBorder"] > ![Edit Beta feature flag](./media/edit-beta-feature-flag.png)
-1. In the **Edit** screen, select the **On** radio button if it isn't already selected. Then click the **Add Filter** button. (The **On** radio button's label will change to read **Conditional**.)
+1. In the **Edit** screen, select the **Enable feature flag** button if it isn't already selected. Then click the **Use feature filter** button and select **Custom**.
1. In the **Key** field, enter *Microsoft.Percentage*. > [!div class="mx-imgBorder"] > ![Add feature filter](./media/feature-flag-add-filter.png)
-1. Click the context menu next to the feature filter key. Click **Edit Parameters**.
+1. Click the context menu next to the feature filter key. Click **Edit filter parameters**.
> [!div class="mx-imgBorder"]
- > ![Edit feature filter parameters](./media/feature-flag-edit-filter-parameters.png)
+ > ![Edit feature filter parameters](./media/feature-flags-edit-filter-parameters.png)
-1. Hover under the **Name** header so that text boxes appear in the grid. Enter a **Name** of *Value* and a **Value** of 50. The **Value** field indicates the percentage of requests for which to enable the feature filter.
+1. Enter a **Name** of *Value* and a **Value** of 50. The **Value** field indicates the percentage of requests for which to enable the feature filter.
> [!div class="mx-imgBorder"] > ![Set feature filter parameters](./media/feature-flag-set-filter-parameters.png)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-baseline.md
@@ -362,7 +362,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks related the App Configuration. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
azure-australia https://docs.microsoft.com/en-us/azure/azure-australia/gateway-secure-remote-administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/gateway-secure-remote-administration.md
@@ -123,7 +123,7 @@ The privileged workstation is a hardened machine that can be used to perform adm
|Resources|Link| |---|---|
-|Privileged Access Workstations Architecture Overview|[https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)|
+|Privileged Access Workstations Architecture Overview|[https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/](/windows-server/identity/securing-privileged-access/privileged-access-workstations)|
|Securing Privileged Access Reference Material|[https://docs.microsoft.com/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)| ### Mobile device
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-rust-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-rust-get-started.md new file mode 100644
@@ -0,0 +1,341 @@
+---
+title: Use Azure Cache for Redis with Rust
+description: In this quickstart, you learn how to interact with Azure Cache for Redis using Rust.
+author: abhirockzz
+ms.author: abhishgu
+ms.service: cache
+ms.devlang: rust
+ms.topic: quickstart
+ms.date: 01/08/2021
+#Customer intent: As a Rust developer new to Azure Cache for Redis, I want to learn how to use it with Azure Cache for Redis.
+---
+# Quickstart: Use Azure Cache for Redis with Rust
+
+In this article, you learn how to use the [Rust programming language](https://www.rust-lang.org/) to interact with [Azure Cache for Redis](./cache-overview.md). It demonstrates commonly used Redis data structures, such as [String](https://redis.io/topics/data-types-intro#redis-strings), [Hash](https://redis.io/topics/data-types-intro#redis-hashes), and [List](https://redis.io/topics/data-types-intro#redis-lists), using the [redis-rs](https://github.com/mitsuhiko/redis-rs) library for Redis. This client exposes both high-level and low-level APIs, and you see both styles in action in the sample code presented in this article.
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [Rust](https://www.rust-lang.org/tools/install) (version 1.39 or above)
+- [Git](https://git-scm.com/downloads)
+
+## Create an Azure Cache for Redis instance
+[!INCLUDE [redis-cache-create](../../includes/redis-cache-create.md)]
+
+[!INCLUDE [redis-cache-create](../../includes/redis-cache-access-keys.md)]
+
+## Review the code (optional)
+
+If you're interested in learning how the code works, you can review the following snippets. Otherwise, feel free to skip ahead to [Run the application](#run-the-application).
+
+The `connect` function is used to establish a connection to Azure Cache for Redis. It expects the host name and the password (access key) to be passed in via the `REDIS_HOSTNAME` and `REDIS_PASSWORD` environment variables, respectively. The format of the connection URL is `rediss://<username>:<password>@<hostname>` - Azure Cache for Redis only accepts secure connections with [TLS 1.2 as the minimum required version](cache-remove-tls-10-11.md).
+
+The call to [redis::Client::open](https://docs.rs/redis/0.19.0/redis/struct.Client.html#method.open) performs basic validation, while [get_connection()](https://docs.rs/redis/0.19.0/redis/struct.Client.html#method.get_connection) actually initiates the connection - the program stops if connectivity fails for any reason, such as an incorrect password.
+
+```rust
+fn connect() -> redis::Connection {
+ let redis_host_name =
+ env::var("REDIS_HOSTNAME").expect("missing environment variable REDIS_HOSTNAME");
+ let redis_password =
+ env::var("REDIS_PASSWORD").expect("missing environment variable REDIS_PASSWORD");
+ let redis_conn_url = format!("rediss://:{}@{}", redis_password, redis_host_name);
+
+ redis::Client::open(redis_conn_url)
+ .expect("invalid connection URL")
+ .get_connection()
+ .expect("failed to connect to redis")
+}
+```
+
+The `basics` function covers the [SET](https://redis.io/commands/set), [GET](https://redis.io/commands/get), and [INCR](https://redis.io/commands/incr) commands. The low-level API is used for `SET` and `GET`, to set and retrieve the value of a key named `foo`. The `INCRBY` command is executed using the high-level API: [incr](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.incr) increments the value of a key (named `counter`) by `2`, followed by a call to [get](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.get) to retrieve it.
+
+```rust
+fn basics() {
+ let mut conn = connect();
+ let _: () = redis::cmd("SET")
+ .arg("foo")
+ .arg("bar")
+ .query(&mut conn)
+ .expect("failed to execute SET for 'foo'");
+
+ let bar: String = redis::cmd("GET")
+ .arg("foo")
+ .query(&mut conn)
+ .expect("failed to execute GET for 'foo'");
+ println!("value for 'foo' = {}", bar);
+
+ let _: () = conn
+ .incr("counter", 2)
+ .expect("failed to execute INCR for 'counter'");
+ let val: i32 = conn
+ .get("counter")
+ .expect("failed to execute GET for 'counter'");
+ println!("counter = {}", val);
+}
+```
+
+The below code snippet demonstrates the functionality of the Redis `HASH` data structure. [HSET](https://redis.io/commands/hset) is invoked using the low-level API to store information (`name`, `version`, `repo`) about Redis drivers (clients). For example, details for the Rust driver (the one used in this sample code!) are captured in the form of a [BTreeMap](https://doc.rust-lang.org/std/collections/struct.BTreeMap.html) and then passed to the low-level API. The hash is then retrieved using [HGETALL](https://redis.io/commands/hgetall).
+
+`HSET` can also be executed through the high-level API by using [hset_multiple](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.hset_multiple), which accepts an array of tuples. [hget](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.hget) is then executed to fetch the value of a single attribute (`repo`, in this case).
+
+```rust
+fn hash() {
+ let mut conn = connect();
+
+ let mut driver: BTreeMap<String, String> = BTreeMap::new();
+ let prefix = "redis-driver";
+ driver.insert(String::from("name"), String::from("redis-rs"));
+ driver.insert(String::from("version"), String::from("0.19.0"));
+ driver.insert(
+ String::from("repo"),
+ String::from("https://github.com/mitsuhiko/redis-rs"),
+ );
+
+ let _: () = redis::cmd("HSET")
+ .arg(format!("{}:{}", prefix, "rust"))
+ .arg(driver)
+ .query(&mut conn)
+ .expect("failed to execute HSET");
+
+ let info: BTreeMap<String, String> = redis::cmd("HGETALL")
+ .arg(format!("{}:{}", prefix, "rust"))
+ .query(&mut conn)
+ .expect("failed to execute HGETALL");
+ println!("info for rust redis driver: {:?}", info);
+
+ let _: () = conn
+ .hset_multiple(
+ format!("{}:{}", prefix, "go"),
+ &[
+ ("name", "go-redis"),
+ ("version", "8.4.6"),
+ ("repo", "https://github.com/go-redis/redis"),
+ ],
+ )
+ .expect("failed to execute HSET");
+
+ let repo_name: String = conn
+ .hget(format!("{}:{}", prefix, "go"), "repo")
+ .expect("HGET failed");
+ println!("go redis driver repo name: {:?}", repo_name);
+}
+```
+
+In the function below, you can see how to use the `LIST` data structure. [LPUSH](https://redis.io/commands/lpush) is executed (with the low-level API) to add an entry to the list, and the high-level [lpop](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.lpop) method is used to retrieve it from the list. Then, the [rpush](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.rpush) method is used to add a couple of entries to the list, whose length is obtained with [llen](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.llen) and whose contents are then fetched with the [lrange](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.lrange) method.
+
+```rust
+fn list() {
+ let mut conn = connect();
+ let list_name = "items";
+
+ let _: () = redis::cmd("LPUSH")
+ .arg(list_name)
+ .arg("item-1")
+ .query(&mut conn)
+ .expect("failed to execute LPUSH for 'items'");
+
+ let item: String = conn
+ .lpop(list_name)
+ .expect("failed to execute LPOP for 'items'");
+ println!("first item: {}", item);
+
+ let _: () = conn.rpush(list_name, "item-2").expect("RPUSH failed");
+ let _: () = conn.rpush(list_name, "item-3").expect("RPUSH failed");
+
+ let len: isize = conn
+ .llen(list_name)
+ .expect("failed to execute LLEN for 'items'");
+ println!("no. of items in list = {}", len);
+
+ let items: Vec<String> = conn
+ .lrange(list_name, 0, len - 1)
+ .expect("failed to execute LRANGE for 'items'");
+
+ println!("listing items in list");
+ for item in items {
+ println!("item: {}", item)
+ }
+}
+```
+
+Here you can see some of the `SET` operations. The high-level [sadd](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.sadd) method is used to add a couple of entries to a `SET` named `users`. [SISMEMBER](https://redis.io/commands/sismember) is then executed (low-level API) to check whether `user1` exists. Finally, [smembers](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.smembers) is used to fetch and iterate over all the set entries in the form of a vector ([Vec<String>](https://doc.rust-lang.org/std/vec/struct.Vec.html)).
+
+```rust
+fn set() {
+ let mut conn = connect();
+ let set_name = "users";
+
+ let _: () = conn
+ .sadd(set_name, "user1")
+ .expect("failed to execute SADD for 'users'");
+ let _: () = conn
+ .sadd(set_name, "user2")
+ .expect("failed to execute SADD for 'users'");
+
+ let ismember: bool = redis::cmd("SISMEMBER")
+ .arg(set_name)
+ .arg("user1")
+ .query(&mut conn)
+ .expect("failed to execute SISMEMBER for 'users'");
+ println!("does user1 exist in the set? {}", ismember);
+
+ let users: Vec<String> = conn.smembers(set_name).expect("failed to execute SMEMBERS");
+ println!("listing users in set");
+
+ for user in users {
+ println!("user: {}", user)
+ }
+}
+```
+
+The `sorted_set` function below demonstrates the sorted set data structure. [ZADD](https://redis.io/commands/zadd) is invoked (with the low-level API) to add a random integer score for a player (`player-1`). Next, the [zadd](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.zadd) method (high-level API) is used to add more players (`player-2` to `player-5`) and their respective (randomly generated) scores. The number of entries in the sorted set is determined using [ZCARD](https://redis.io/commands/zcard), and that count is used as the limit for the [ZRANGE](https://redis.io/commands/zrange) command (invoked via the high-level [zrange_withscores](https://docs.rs/redis/0.19.0/redis/trait.Commands.html#method.zrange_withscores) method) to list the players with their scores in ascending order.
+
+```rust
+fn sorted_set() {
+ let mut conn = connect();
+ let sorted_set = "leaderboard";
+
+ let _: () = redis::cmd("ZADD")
+ .arg(sorted_set)
+ .arg(rand::thread_rng().gen_range(1..10))
+ .arg("player-1")
+ .query(&mut conn)
+ .expect("failed to execute ZADD for 'leaderboard'");
+
+ for num in 2..=5 {
+ let _: () = conn
+ .zadd(
+ sorted_set,
+ String::from("player-") + &num.to_string(),
+ rand::thread_rng().gen_range(1..10),
+ )
+ .expect("failed to execute ZADD for 'leaderboard'");
+ }
+
+ let count: isize = conn
+ .zcard(sorted_set)
+ .expect("failed to execute ZCARD for 'leaderboard'");
+
+ let leaderboard: Vec<(String, isize)> = conn
+ .zrange_withscores(sorted_set, 0, count - 1)
+ .expect("ZRANGE failed");
+
+ println!("listing players and scores in ascending order");
+
+ for item in leaderboard {
+ println!("{} = {}", item.0, item.1)
+ }
+}
+```
+
+## Clone the sample application
+
+Start by cloning the application from GitHub.
+
+1. Open a command prompt and create a new folder named `git-samples`.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+1. Open a git terminal window, such as git bash. Use the `cd` command to change into the new folder where you will be cloning the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-redis-cache-rust-quickstart.git
+ ```
+
+## Run the application
+
+The application accepts connectivity and credentials in the form of environment variables.
+
+1. Fetch the **Host name** and **Access Keys** (available under **Access keys**) for your Azure Cache for Redis instance in the [Azure portal](https://portal.azure.com/).
+
+1. Set them to the respective environment variables:
+
+ ```shell
+ set REDIS_HOSTNAME=<Host name>:<port> (e.g. <name of cache>.redis.cache.windows.net:6380)
+ set REDIS_PASSWORD=<Primary Access Key>
+ ```
+
+1. In the terminal window, change to the correct folder. For example:
+
+ ```shell
+ cd "C:\git-samples\azure-redis-cache-rust-quickstart"
+ ```
+
+1. In the terminal, run the following command to start the application.
+
+ ```shell
+ cargo run
+ ```
+
+   You should see output similar to the following:
+
+ ```bash
+ ******* Running SET, GET, INCR commands *******
+ value for 'foo' = bar
+ counter = 2
+ ******* Running HASH commands *******
+ info for rust redis driver: {"name": "redis-rs", "repo": "https://github.com/mitsuhiko/redis-rs", "version": "0.19.0"}
+ go redis driver repo name: "https://github.com/go-redis/redis"
+ ******* Running LIST commands *******
+ first item: item-1
+ no. of items in list = 2
+ listing items in list
+ item: item-2
+ item: item-3
+ ******* Running SET commands *******
+ does user1 exist in the set? true
+ listing users in set
+    user: user2
+    user: user1
+    ******* Running SORTED SET commands *******
+    listing players and scores in ascending order
+    player-2 = 2
+    player-4 = 4
+    player-5 = 6
+    player-1 = 7
+    player-3 = 8
+ ```
+
+   If you want to run a specific function, comment out the other calls in the `main` function:
+
+ ```rust
+ fn main() {
+ basics();
+ hash();
+ list();
+ set();
+ sorted_set();
+ }
+ ```
+
+## Clean up resources
+
+If you're finished with the Azure resource group and resources you created in this quickstart, you can delete them to avoid charges.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. If you created your Azure Cache for Redis instance in an existing resource group that you want to keep, you can delete just the cache by selecting **Delete** from the cache **Overview** page.
+
+To delete the resource group and its Azure Cache for Redis instance:
+
+1. From the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+1. In the **Filter by name** text box, enter the name of the resource group that contains your cache instance, and then select it from the search results.
+1. On your resource group page, select **Delete resource group**.
+1. Type the resource group name, and then select **Delete**.
+
+ ![Delete your resource group for Azure Cache for Redis](./media/cache-python-get-started/delete-your-resource-group-for-azure-cache-for-redis.png)
+
+## Next steps
+
+In this quickstart, you learned how to use the Rust driver for Redis to connect and execute operations in Azure Cache for Redis.
+
+> [!div class="nextstepaction"]
+> [Create a simple ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-baseline.md
@@ -390,7 +390,7 @@ https://docs.microsoft.com/azure/security-center/security-center-identity-access
Learn about Privileged Access Workstations:
-https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure:
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-azure-function.md deleted file mode 100644
@@ -1,77 +0,0 @@
-title: Create your first function in the Azure portal
-description: Learn how to create your first Azure Function for serverless execution using the Azure portal.
-ms.assetid: 96cf87b9-8db6-41a8-863a-abb828e3d06d
-ms.topic: how-to
-ms.date: 03/26/2020
-ms.custom: "devx-track-csharp, mvc, devcenter, cc996988-fb4f-47"
-
-# Create your first function in the Azure portal
-
-Azure Functions lets you run your code in a serverless environment without having to first create a virtual machine (VM) or publish a web application. In this article, you learn how to use Azure Functions to create a "hello world" HTTP trigger function in the Azure portal.
-
-We recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
-Use one of the following links to get started with your chosen local development environment and language:
-
-| Visual Studio Code | Terminal/command prompt | Visual Studio |
-| --- | --- | --- |
-| &bull;&nbsp;[Get started with C#](./create-first-function-vs-code-csharp.md)<br/>&bull;&nbsp;[Get started with Java](./create-first-function-vs-code-java.md)<br/>&bull;&nbsp;[Get started with JavaScript](./create-first-function-vs-code-node.md)<br/>&bull;&nbsp;[Get started with PowerShell](./create-first-function-vs-code-powershell.md)<br/>&bull;&nbsp;[Get started with Python](./create-first-function-vs-code-python.md) |&bull;&nbsp;[Get started with C#](./create-first-function-cli-csharp.md)<br/>&bull;&nbsp;[Get started with Java](./create-first-function-cli-java.md)<br/>&bull;&nbsp;[Get started with JavaScript](./create-first-function-cli-node.md)<br/>&bull;&nbsp;[Get started with PowerShell](./create-first-function-cli-powershell.md)<br/>&bull;&nbsp;[Get started with Python](./create-first-function-cli-python.md) | [Get started with C#](functions-create-your-first-function-visual-studio.md) |
-
-[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-
-## Create a function app
-
-You must have a function app to host the execution of your functions. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources.
-
-[!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)]
-
-Next, create a function in the new function app.
-
-## <a name="create-function"></a>Create an HTTP trigger function
-
-1. From the left menu of the **Functions** window, select **Functions**, then select **Add** from the top menu.
-
-1. From the **New Function** window, select **Http trigger**.
-
- ![Choose HTTP trigger function](./media/functions-create-first-azure-function/function-app-select-http-trigger.png)
-
-1. In the **New Function** window, accept the default name for **New Function**, or enter a new name.
-
-1. Choose **Anonymous** from the **Authorization level** drop-down list, and then select **Create Function**.
-
- Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
-
-## Test the function
-
-1. In your new HTTP trigger function, select **Code + Test** from the left menu, then select **Get function URL** from the top menu.
-
- ![Select Get function URL](./media/functions-create-first-azure-function/function-app-select-get-function-url.png)
-
-1. In the **Get function URL** dialog box, select **default** from the drop-down list, and then select the **Copy to clipboard** icon.
-
- ![Copy the function URL from the Azure portal](./media/functions-create-first-azure-function/function-app-develop-tab-testing.png)
-
-1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request.
-
- The following example shows the response in the browser:
-
- ![Function response in the browser.](./media/functions-create-first-azure-function/function-app-browser-testing.png)
-
- If the request URL included an [access key](functions-bindings-http-webhook-trigger.md#authorization-keys) (`?code=...`), it means you choose **Function** instead of **Anonymous** access level when creating the function. In this case, you should instead append `&name=<your_name>`.
-
-1. When your function runs, trace information is written to the logs. To see the trace output, return to the **Code + Test** page in the portal and expand the **Logs** arrow at the bottom of the page.
-
- ![Functions log viewer in the Azure portal.](./media/functions-create-first-azure-function/function-view-logs.png)
-
-## Clean up resources
-
-[!INCLUDE [Clean-up resources](../../includes/functions-quickstart-cleanup.md)]
-
-## Next steps
-
-[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-app-portal.md
@@ -1,43 +1,76 @@
---
-title: Create a function app from the Azure portal
-description: Create a new function app in Azure from the portal.
+title: Create your first function in the Azure portal
+description: Learn how to create your first Azure Function for serverless execution using the Azure portal.
ms.topic: how-to
-ms.date: 08/29/2019
-ms.custom: mvc
-
+ms.date: 03/26/2020
+ms.custom: "devx-track-csharp, mvc, devcenter, cc996988-fb4f-47"
---
-# Create a function app from the Azure portal
-This topic shows you how to use Azure Functions to create a function app in the Azure portal. A function app is the container that hosts the execution of individual functions.
+# Create your first function in the Azure portal
+
+Azure Functions lets you run your code in a serverless environment without having to first create a virtual machine (VM) or publish a web application. In this article, you learn how to use Azure Functions to create a "hello world" HTTP trigger function in the Azure portal.
+
+We recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
+Use one of the following links to get started with your chosen local development environment and language:
+
+| Visual Studio Code | Terminal/command prompt | Visual Studio |
+| --- | --- | --- |
+| &bull;&nbsp;[Get started with C#](./create-first-function-vs-code-csharp.md)<br/>&bull;&nbsp;[Get started with Java](./create-first-function-vs-code-java.md)<br/>&bull;&nbsp;[Get started with JavaScript](./create-first-function-vs-code-node.md)<br/>&bull;&nbsp;[Get started with PowerShell](./create-first-function-vs-code-powershell.md)<br/>&bull;&nbsp;[Get started with Python](./create-first-function-vs-code-python.md) |&bull;&nbsp;[Get started with C#](./create-first-function-cli-csharp.md)<br/>&bull;&nbsp;[Get started with Java](./create-first-function-cli-java.md)<br/>&bull;&nbsp;[Get started with JavaScript](./create-first-function-cli-node.md)<br/>&bull;&nbsp;[Get started with PowerShell](./create-first-function-cli-powershell.md)<br/>&bull;&nbsp;[Get started with Python](./create-first-function-cli-python.md) | [Get started with C#](functions-create-your-first-function-visual-studio.md) |
+
+[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
## Create a function app
-[!INCLUDE [functions-create-function-app-portal](../../includes/functions-create-function-app-portal.md)]
+You must have a function app to host the execution of your functions. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources.
-After the function app is created, you can create individual functions in one or more different languages. Create functions [by using the portal](functions-create-first-azure-function.md#create-function), [continuous deployment](functions-continuous-deployment.md), or by [uploading with FTP](https://github.com/projectkudu/kudu/wiki/Accessing-files-via-ftp).
+[!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)]
-## Service plans
+Next, create a function in the new function app.
-Azure Functions has three different service plans: Consumption plan, Premium plan, and Dedicated (App Service) plan. You must choose your service plan when your function app is created, and it cannot subsequently be changed. For more information, see [Choose an Azure Functions hosting plan](functions-scale.md).
+## <a name="create-function"></a>Create an HTTP trigger function
-If you are planning to run JavaScript functions on a Dedicated (App Service) plan, you should choose a plan with fewer cores. For more information, see the [JavaScript reference for Functions](functions-reference-node.md#choose-single-vcpu-app-service-plans).
+1. From the left menu of the **Functions** window, select **Functions**, then select **Add** from the top menu.
+
+1. From the **New Function** window, select **Http trigger**.
-<a name="storage-account-requirements"></a>
+ ![Choose HTTP trigger function](./media/functions-create-first-azure-function/function-app-select-http-trigger.png)
-## Storage account requirements
+1. In the **New Function** window, accept the default name for **New Function**, or enter a new name.
-When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. Internally, Functions uses Storage for operations such as managing triggers and logging function executions. Some storage accounts do not support queues and tables, such as blob-only storage accounts, Azure Premium Storage, and general-purpose storage accounts with ZRS replication.
+1. Choose **Anonymous** from the **Authorization level** drop-down list, and then select **Create Function**.
-Accounts of an unsupported type are filtered out when you create a function app in the Azure portal. The portal also only allows you use an existing storage account when that account is in the same region as the function app you're creating. If for some reason you want to violate the performance best practice of having the storage account used by your function app in the same region, you must create your function app outside of the portal.
+ Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
->[!NOTE]
->When using the Consumption hosting plan, your function code and binding configuration files are stored in Azure File storage in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
+## Test the function
-To learn more about storage account types, see [Introducing the Azure Storage Services](../storage/common/storage-introduction.md#core-storage-services).
+1. In your new HTTP trigger function, select **Code + Test** from the left menu, then select **Get function URL** from the top menu.
-## Next steps
+ ![Select Get function URL](./media/functions-create-first-azure-function/function-app-select-get-function-url.png)
+
+1. In the **Get function URL** dialog box, select **default** from the drop-down list, and then select the **Copy to clipboard** icon.
+
+ ![Copy the function URL from the Azure portal](./media/functions-create-first-azure-function/function-app-develop-tab-testing.png)
-While the Azure portal makes it easy to create and try out Functions, we recommend [local development](functions-develop-local.md). After creating a function app in the portal, you still need to add a function.
+1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request.
+
+ The following example shows the response in the browser:
+
+ ![Function response in the browser.](./media/functions-create-first-azure-function/function-app-browser-testing.png)
+
+   If the request URL included an [access key](functions-bindings-http-webhook-trigger.md#authorization-keys) (`?code=...`), it means you chose **Function** instead of **Anonymous** access level when creating the function. In this case, you should instead append `&name=<your_name>`.
+
+1. When your function runs, trace information is written to the logs. To see the trace output, return to the **Code + Test** page in the portal and expand the **Logs** arrow at the bottom of the page.
+
+ ![Functions log viewer in the Azure portal.](./media/functions-create-first-azure-function/function-view-logs.png)
+
+## Clean up resources
+
+[!INCLUDE [Clean-up resources](../../includes/functions-quickstart-cleanup.md)]
+
+## Next steps
-> [!div class="nextstepaction"]
-> [Add an HTTP triggered function](functions-create-first-azure-function.md#create-function)
+[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
@@ -15,11 +15,6 @@ Individual functions in a function app are deployed together and are scaled toge
Connection strings, environment variables, and other application settings are defined separately for each function app. Any data that must be shared between function apps should be stored externally in a persisted store.
-This article describes how to configure and manage your function apps.
-
-> [!TIP]
-> Many configuration options can also be managed by using the [Azure CLI].
- ## Get started in the Azure portal 1. To begin, go to the [Azure portal] and sign in to your Azure account. In the search bar at the top of the portal, enter the name of your function app and select it from the list.
@@ -32,15 +27,18 @@ You can navigate to everything you need to manage your function app from the ove
## <a name="settings"></a>Work with application settings
-The **Application settings** tab maintains settings that are used by your function app. These settings are stored encrypted, and you must select **Show values** to see the values in the portal. You can also access application settings by using the Azure CLI.
+Application settings can be managed from the [Azure portal](functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) and by using the [Azure CLI](functions-how-to-use-azure-function-app-settings.md?tabs=azurecli#settings) and [Azure PowerShell](functions-how-to-use-azure-function-app-settings.md?tabs=powershell#settings). You can also manage application settings from [Visual Studio Code](functions-develop-vs-code.md#application-settings-in-azure) and from [Visual Studio](functions-develop-vs.md#function-app-settings).
-### Portal
+These settings are stored encrypted. To learn more, see [Application settings security](security-concepts.md#application-settings).
+# [Portal](#tab/portal)
+
+The **Application settings** tab maintains settings that are used by your function app. You must select **Show values** to see the values in the portal.
To add a setting in the portal, select **New application setting** and add the new key-value pair. ![Function app settings in the Azure portal.](./media/functions-how-to-use-azure-function-app-settings/azure-function-app-settings-tab.png)
-### Azure CLI
+# [Azure CLI](#tab/azurecli)
The [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-list) command returns the existing application settings, as in the following example:
@@ -58,6 +56,22 @@ az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--settings CUSTOM_FUNCTION_APP_SETTING=12345 ```
+# [Azure PowerShell](#tab/powershell)
+
+The [`Get-AzFunctionAppSetting`](/powershell/module/az.functions/get-azfunctionappsetting) cmdlet returns the existing application settings, as in the following example:
+
+```azurepowershell-interactive
+Get-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
+```
+
+The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfunctionappsetting) command adds or updates an application setting. The following example creates a setting with a key named `CUSTOM_FUNCTION_APP_SETTING` and a value of `12345`:
+
+```azurepowershell-interactive
+Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"CUSTOM_FUNCTION_APP_SETTING" = "12345"}
+```
+
+---
+ ### Use application settings [!INCLUDE [functions-environment-variables](../../includes/functions-environment-variables.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-recover-storage-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-recover-storage-account.md
@@ -12,15 +12,15 @@ This article helps you troubleshoot the following error string that appears in t
> "Error: Azure Functions Runtime is unreachable. Click here for details on storage configuration."
-This issue occurs when the Azure Functions Runtime can't start. The most common reason for the issue is that the function app has lost access to its storage account. For more information, see [Storage account requirements](./functions-create-function-app-portal.md#storage-account-requirements).
+This issue occurs when the Functions runtime can't start. The most common reason for this is that the function app has lost access to its storage account. For more information, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
-The rest of this article helps you troubleshoot the following causes of this error, including how to identify and resolve each case.
+The rest of this article helps you troubleshoot specific causes of this error, including how to identify and resolve each case.
## Storage account was deleted
-Every function app requires a storage account to operate. If that account is deleted, your function won't work.
+Every function app requires a storage account to operate. If that account is deleted, your functions won't work.
-Start by looking up your storage account name in your application settings. Either `AzureWebJobsStorage` or `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` contains the name of your storage account wrapped up in a connection string. For more information, see [App settings reference for Azure Functions](./functions-app-settings.md#azurewebjobsstorage).
+Start by looking up your storage account name in your application settings. Either `AzureWebJobsStorage` or `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` contains the name of your storage account as part of a connection string. For more information, see [App settings reference for Azure Functions](./functions-app-settings.md#azurewebjobsstorage).
Search for your storage account in the Azure portal to see whether it still exists. If it has been deleted, re-create the storage account and replace your storage connection strings. Your function code is lost, and you need to redeploy it.
@@ -40,7 +40,7 @@ For more information, see [App settings reference for Azure Functions](./functio
### Guidance
-* Don't check "slot setting" for any of these settings. If you swap deployment slots, the function app breaks.
+* Don't check **slot setting** for any of these settings. If you swap deployment slots, the function app breaks.
* Don't modify these settings as part of automated deployments. * These settings must be provided and valid at creation time. An automated deployment that doesn't contain these settings results in a function app that won't run, even if the settings are added later.
@@ -52,7 +52,7 @@ The previously discussed storage account connection strings must be updated if y
Your function app must be able to access the storage account. Common issues that block a function app's access to a storage account are:
-* The function app is deployed to your App Service Environment without the correct network rules to allow traffic to and from the storage account.
+* The function app is deployed to your App Service Environment (ASE) without the correct network rules to allow traffic to and from the storage account.
* The storage account firewall is enabled and not configured to allow traffic to and from functions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
@@ -68,7 +68,7 @@ To resolve this issue, remove or increase the daily quota, and then restart your
## App is behind a firewall
-Your function runtime might be unreachable for either of the following reasons:
+Your function app might be unreachable for either of the following reasons:
* Your function app is hosted in an [internally load balanced App Service Environment](../app-service/environment/create-ilb-ase.md) and it's configured to block inbound internet traffic.
@@ -76,8 +76,8 @@ Your function runtime might be unreachable for either of the following reasons:
The Azure portal makes calls directly to the running app to fetch the list of functions, and it makes HTTP calls to the Kudu endpoint. Platform-level settings under the **Platform Features** tab are still available.
-To verify your App Service Environment configuration:
-1. Go to the network security group (NSG) of the subnet where the App Service Environment resides.
+To verify your ASE configuration:
+1. Go to the network security group (NSG) of the subnet where the ASE resides.
1. Validate the inbound rules to allow traffic that's coming from the public IP of the computer where you're accessing the application. You can also use the portal from a computer that's connected to the virtual network that's running your app or to a virtual machine that's running in your virtual network.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
@@ -424,7 +424,7 @@ External accounts with owner permissions should be removed from your subscriptio
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources. -- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compliance/azure-services-in-fedramp-auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
@@ -3,7 +3,7 @@ title: Azure Services in FedRAMP and DoD SRG Audit Scope
description: This article contains tables for Azure Public and Azure Government that illustrate what FedRAMP (Moderate vs. High) and DoD SRG (Impact level 2, 4, 5 or 6) audit scope a given service has reached. author: Jain-Garima ms.author: gjain
-ms.date: 11/30/2020
+ms.date: 01/13/2021
ms.topic: article ms.service: azure-government ms.reviewer: rochiou
@@ -183,7 +183,7 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
**&ast;** FedRAMP high certification covers Datacenter Infrastructure Services & Databox Pod and Disk Service which are the online software components supporting Data Box hardware appliance. ## Azure Government services by audit scope
-| _Last Updated: November 2020_ |
+| _Last Updated: January 2021_ |
| Azure Service | DoD CC SRG IL 2 | DoD CC SRG IL 4 | DoD CC SRG IL 5 (Azure Gov)**&ast;** | DoD CC SRG IL 5 (Azure DoD) **&ast;&ast;** | FedRAMP High | DoD CC SRG IL 6 | ------------- |:---------------:|:---------------:|:---------------:|:------------:|:------------:|:------------:
@@ -225,7 +225,7 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
+| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Azure Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-overview-itar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-itar.md
@@ -140,7 +140,7 @@ Microsoft takes strong measures to protect customer data from inappropriate acce
Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there is appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](https://aka.ms/azuresoc2auditreport) produced by an independent third-party auditing firm.
-JIT access works in conjunction with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published [Privileged Access Workstation](/windows-server/identity/securing-privileged-access/privileged-access-workstations) guidance. Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed ΓÇô only select activities are allowed and users cannot accidentally circumvent the SAW design since they do not have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to specific set of users.
+JIT access works in conjunction with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published [Privileged Access Workstation](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) guidance. Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they do not have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to a specific set of users.
### Customer Lockbox for Azure
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/change-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
@@ -204,8 +204,10 @@ Refreshing the page after a few minutes usually fixes this issue. If the error p
This is the general unauthorized error message, explaining the current user does not have sufficient permissions to view the change. At least reader access is required on the resource to view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager. For web app in-guest file changes and configuration changes, at least contributor role is required. ### Failed to register Microsoft.ChangeAnalysis resource provider
+This message means that something failed immediately after the UI sent the request to register the resource provider; it's not related to a permissions issue. It's most likely a temporary internet connectivity issue. Try refreshing the page and checking your internet connection. If the error persists, contact changeanalysishelp@microsoft.com.
-**You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator.** This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This might happen if you are not the owner of a subscription and got shared access permissions through a coworker. i.e. view access to a resource group. To fix this, You can contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. This can be done in Azure portal through **Subscriptions | Resource providers** and search for ```Microsoft.ChangeAnalysis``` and register in the UI, or through Azure PowerShell or Azure CLI.
+### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider. Contact your Azure subscription administrator.
+This error message means your role in the current subscription does not have the **Microsoft.Support/register/action** scope associated with it. This might happen if you are not the owner of the subscription and got shared access permissions through a coworker, for example, view access to a resource group. To fix this, you can contact the owner of your subscription to register the **Microsoft.ChangeAnalysis** resource provider. This can be done in the Azure portal through **Subscriptions | Resource providers**: search for `Microsoft.ChangeAnalysis` and register it in the UI, or use Azure PowerShell or the Azure CLI.
Register resource provider through PowerShell:
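As a minimal sketch of that call, assuming the Az PowerShell module is installed and you're signed in to the target subscription:

```powershell
# Register the Microsoft.ChangeAnalysis resource provider in the current subscription
Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"

# Check the registration state afterwards
Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
```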
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
@@ -191,7 +191,7 @@ Most configuration fields are named such that they can be defaulted to false. Al
| correlationHeaderDomains | | Enable correlation headers for specific domains | | disableFlushOnBeforeUnload | false | Default false. If true, flush method will not be called when onBeforeUnload event triggers | | enableSessionStorageBuffer | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
-| isCookieUseDisabled | false | Default false. If true, the SDK will not store or read any data from cookies.|
+| isCookieUseDisabled | false | Default false. If true, the SDK will not store or read any data from cookies. Note that this disables the User and Session cookies and renders the usage blades and experiences useless. |
| cookieDomain | null | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. | | isRetryDisabled | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | | isStorageUseDisabled | false | If true, the SDK will not store or read any data from local and session storage. Default is false. |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/monitor-web-app-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-web-app-availability.md
@@ -23,6 +23,9 @@ There are three types of availability tests:
**You can create up to 100 availability tests per Application Insights resource.**
+> [!IMPORTANT]
+> Both the [URL ping test](#create-a-url-ping-test) and the [multi-step web test](availability-multistep.md) rely on the public internet DNS infrastructure to resolve the domain names of the tested endpoints. This means that if you are using Private DNS, you must either ensure that every domain name in your test is also resolvable by the public domain name servers or, when that is not possible, use [custom track availability tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability?view=azure-dotnet) instead.
+ ## Create an Application Insights resource In order to create an availability test, you first need to create an Application Insights resource. If you have already created a resource, proceed to the next section to [create a URL Ping test](#create-a-url-ping-test).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/profiler-bring-your-own-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-bring-your-own-storage.md
@@ -4,7 +4,7 @@ description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Deb
ms.topic: conceptual author: renatosalas ms.author: regutier
-ms.date: 04/14/2020
+ms.date: 01/14/2021
ms.reviewer: mbullwin ---
@@ -87,7 +87,7 @@ To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
Pattern: ```powershell
- $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{storage_account_name}"
+ $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{application_insights_name}"
Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id ```
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/cosmosdb-insights-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/cosmosdb-insights-overview.md
@@ -1,8 +1,8 @@
--- title: Monitor Azure Cosmos DB with Azure Monitor for Cosmos DB| Microsoft Docs description: This article describes the Azure Monitor for Cosmos DB feature that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their CosmosDB accounts.
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.topic: conceptual ms.date: 05/11/2020
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/key-vault-insights-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/key-vault-insights-overview.md
@@ -3,8 +3,8 @@ title: Monitor Key Vault with Azure Monitor for Key Vault | Microsoft Docs
description: This article describes the Azure Monitor for Key Vaults. services: azure-monitor ms.topic: conceptual
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 09/10/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/redis-cache-insights-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/redis-cache-insights-overview.md
@@ -2,8 +2,8 @@
title: Azure Monitor for Azure Cache for Redis | Microsoft Docs description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems. ms.topic: conceptual
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 09/10/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/storage-insights-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/storage-insights-overview.md
@@ -3,8 +3,8 @@ title: Monitor Azure Storage services with Azure Monitor for Storage | Microsoft
description: This article describes the Azure Monitor for Storage feature that provides storage admins with a quick understanding of performance and utilization issues with their Azure Storage accounts. ms.subservice: ms.topic: conceptual
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 05/11/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/troubleshoot-workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/troubleshoot-workbooks.md
@@ -2,8 +2,8 @@
title: Troubleshooting Azure Monitor workbook-based insights description: Provides troubleshooting guidance for Azure Monitor workbook-based insights for services like Azure Key Vault, Azure CosmosDB, Azure Storage, and Azure Cache for Redis. services: azure-monitor
-ms.author: mbullwin
-author: mrbullwinkle
+ms.author: lagayhar
+author: lgayhardt
ms.topic: conceptual ms.date: 06/17/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/mobile-center-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/mobile-center-quickstart.md
@@ -3,8 +3,8 @@ title: Monitor mobile apps with Azure Monitor Application Insights
description: Provides instructions to quickly set up a mobile app for monitoring with Azure Monitor Application Insights and App Center ms.subservice: application-insights ms.topic: quickstart
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 06/26/2019 ms.reviewer: daviste
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/nodejs-quick-start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/nodejs-quick-start.md
@@ -3,8 +3,8 @@ title: 'Quickstart: Monitor Node.js with Azure Monitor Application Insights'
description: Provides instructions to quickly set up a Node.js Web App for monitoring with Azure Monitor Application Insights ms.subservice: application-insights ms.topic: quickstart
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 07/12/2019 ms.custom: mvc, seo-javascript-september2019, seo-javascript-october2019, devx-track-js
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/tutorial-alert.md
@@ -3,8 +3,8 @@ title: Send alerts from Azure Application Insights | Microsoft Docs
description: Tutorial to send alerts in response to errors in your application using Azure Application Insights. ms.subservice: application-insights ms.topic: tutorial
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 04/10/2019 ms.custom: mvc
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-runtime-exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/tutorial-runtime-exceptions.md
@@ -3,8 +3,8 @@ title: Diagnose run-time exceptions using Azure Application Insights | Microsoft
description: Tutorial to find and diagnose run-time exceptions in your application using Azure Application Insights. ms.subservice: application-insights ms.topic: tutorial
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 09/19/2017 ms.custom: mvc
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/tutorial-users.md
@@ -3,8 +3,8 @@ title: Understand your customers in Azure Application Insights | Microsoft Docs
description: Tutorial on using Azure Application Insights to understand how customers are using your application. ms.subservice: application-insights ms.topic: tutorial
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 09/20/2017 ms.custom: mvc
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agent-linux-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/agent-linux-troubleshoot.md
@@ -93,6 +93,7 @@ We've seen that a clean re-install of the Agent will fix most issues. In fact th
| 5 | The shell bundle must be executed as root OR there was 403 error returned during onboarding. Run your command using `sudo`. | | 6 | Invalid package architecture OR there was error 200 error returned during onboarding; omsagent-*x64.sh packages can only be installed on 64-bit systems, and omsagent-*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). | | 17 | Installation of OMS package failed. Look through the command output for the root failure. |
+| 18 | Installation of OMSConfig package failed. Look through the command output for the root failure. |
| 19 | Installation of OMI package failed. Look through the command output for the root failure. | | 20 | Installation of SCX package failed. Look through the command output for the root failure. | | 21 | Installation of Provider kits failed. Look through the command output for the root failure. |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -49,6 +49,25 @@ In order to view the errors in the dashboard, you should follow the next steps:
5. Using this dashboard you will be able to review the status and the errors in your connector. ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/connector-dashboard.png)
+### Dashboard elements
+
+The dashboard contains information about the alerts that were sent to the ITSM tool through this connector.
+The dashboard is split into four parts:
+
+1. Work Item Created: The graph and the table below it contain the count of work items per type. If you click on the graph or on the table, you can see more details about the work items.
+ ![Screenshot that shows work item created.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
+2. Impacted computers: The tables contain details about the impacted computers (configuration items).
+   By clicking on rows in the tables you can get further details on the configuration items.
+   The table contains a limited number of rows; if you would like to see the full list, click on "See all".
+ ![Screenshot that shows impacted computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
+3. Connector status: The graph and the table below contain messages about the status of the connector. By clicking on the graph or on rows in the table you can get further details on the connector status messages.
+   The table contains a limited number of rows; if you would like to see the full list, click on "See all".
+ ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
+4. Alert rules: The tables contain information about the number of alert rules that were detected.
+   By clicking on rows in the tables you can get further details on the rules that were detected.
+   The table contains a limited number of rows; if you would like to see the full list, click on "See all".
+ ![Screenshot that shows alert rules.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
+ ### Service map You can also visualize the incidents synced against the affected computers in Service Map.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/private-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/private-storage.md
@@ -18,7 +18,7 @@ Log Analytics relies on Azure Storage in various scenarios. This use is typicall
## Ingesting Azure Diagnostics extension logs (WAD/LAD) The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them. ### How to collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](./diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/connectedsources/storage%20insights/createorupdate).
+Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](./diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage%20insights/createorupdate).
Supported data types: * Syslog
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/usage-estimated-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/usage-estimated-costs.md
@@ -6,7 +6,7 @@ services: azure-monitor
ms.topic: conceptual ms.date: 10/28/2019
-ms.author: mbullwin
+ms.author: lagayhar
ms.reviewer: Dale.Koetke ms.subservice: "" ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/resource-manager-app-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/resource-manager-app-resource.md
@@ -3,8 +3,8 @@ title: Resource Manager template samples for Application Insights Resources
description: Sample Azure Resource Manager templates to deploy Application Insights resources in Azure Monitor. ms.subservice: application-insights ms.topic: sample
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 07/08/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/resource-manager-function-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/resource-manager-function-app.md
@@ -3,8 +3,8 @@ title: Resource Manager template samples for Azure Function App + Application In
description: Sample Azure Resource Manager templates to deploy an Azure Function App with an Application Insights resource. ms.subservice: application-insights ms.topic: sample
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 08/06/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/resource-manager-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/resource-manager-web-app.md
@@ -4,8 +4,8 @@ description: Sample Azure Resource Manager templates to deploy an Azure App Serv
ms.subservice: application-insights ms.topic: sample ms.custom: devx-track-dotnet
-author: mrbullwinkle
-ms.author: mbullwin
+author: lgayhardt
+ms.author: lagayhar
ms.date: 08/06/2020 ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-baseline.md
@@ -234,7 +234,7 @@ Enable Azure AD MFA and follow Azure Security Center identity and access recomme
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/network-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/network-security.md
@@ -8,7 +8,7 @@ ms.date: 06/23/2020
# Network security for Azure Relay This article describes how to use the following security features with Azure Relay: -- IP firewall rules (preview)
+- IP firewall rules
- Private endpoints > [!NOTE]
@@ -24,9 +24,6 @@ The IP firewall rules are applied at the Relay namespace level. Therefore, the r
For more information, see [How to configure IP firewall for a Relay namespace](ip-firewall-virtual-networks.md)
-> [!NOTE]
-> This feature is currently in **preview**.
- ## Private endpoints Azure **Private Link Service** enables you to access Azure services (for example, Azure Relay, Azure Service Bus, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a private endpoint in your virtual network. For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/custom-providers/reference-custom-providers-csharp-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/reference-custom-providers-csharp-endpoint.md
@@ -5,7 +5,7 @@ ms.topic: conceptual
ms.custom: devx-track-csharp ms.author: jobreen author: jjbfour
-ms.date: 06/20/2019
+ms.date: 01/14/2021
--- # Custom provider C# RESTful endpoint reference
@@ -19,7 +19,7 @@ The following code works with an Azure function app. To learn how to set up an A
```csharp #r "Newtonsoft.Json" #r "Microsoft.WindowsAzure.Storage"
-#r "../bin/Microsoft.Azure.Management.ResourceManager.Fluent.dll"
+#r "../bin/Microsoft.Azure.Management.ResourceManager.Fluent"
using System; using System.Net;
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
@@ -3,7 +3,7 @@ title: Author a RESTful endpoint
description: This tutorial shows how to author a RESTful endpoint for custom providers. It details how to handle requests and responses for the supported RESTful HTTP methods. author: jjbfour ms.topic: tutorial
-ms.date: 06/19/2019
+ms.date: 01/13/2021
ms.author: jobreen ---
@@ -342,7 +342,7 @@ After you add the methods and classes, you need to update the **using** methods
```csharp #r "Newtonsoft.Json" #r "Microsoft.WindowsAzure.Storage"
-#r "../bin/Microsoft.Azure.Management.ResourceManager.Fluent.dll"
+#r "../bin/Microsoft.Azure.Management.ResourceManager.Fluent"
using System; using System.Net;
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/security-baseline.md
@@ -240,7 +240,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks related to your Managed Applications. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../../active-directory/devices/howto-azure-managed-workstation.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-baseline.md
@@ -149,7 +149,7 @@ You can also enable a Just-In-Time access by using Azure AD Privileged Identity
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../../active-directory/authentication/howto-mfa-getstarted.md)
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-baseline.md
@@ -316,7 +316,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/server-graceful-shutdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/server-graceful-shutdown.md
@@ -39,7 +39,7 @@ In general, there will be four stages in a graceful shutdown process:
  You may have to design your own mechanism, like broadcasting a closing message to all clients, and then let your clients decide when to close or reconnect themselves.
- Read [ChatSample](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample/ChatSample) for sample usage, in which we broadcast an 'exit' message to trigger client close in the shutdown hook.
+ Read [ChatSample](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample) for sample usage, in which we broadcast an 'exit' message to trigger client close in the shutdown hook.
**Mode set to MigrateClients**
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-howto-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-guide.md
@@ -363,7 +363,7 @@ Take ASP.NET Core one for example (ASP.NET one is similar):
* [ASP.NET Core C# Client](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample/ChatSample.CSharpClient/Program.cs#L64)
- * [ASP.NET Core JavaScript Client](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample/ChatSample/wwwroot/https://docsupdatetracker.net/index.html#L164)
+ * [ASP.NET Core JavaScript Client](https://github.com/Azure/azure-signalr/blob/release/1.0.0-preview1/samples/ChatSample/wwwroot/https://docsupdatetracker.net/index.html#L164)
* [ASP.NET C# Client](https://github.com/Azure/azure-signalr/tree/dev/samples/AspNet.ChatSample/AspNet.ChatSample.CSharpClient/Program.cs#L78)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview.md
@@ -45,7 +45,7 @@ SQL Database and SQL Managed Instance also provide several business continuity f
- You can [restore a deleted database](recovery-using-backups.md#deleted-database-restore) to the point at which it was deleted if the **server has not been deleted**. - [Long-term backup retention](long-term-retention-overview.md) enables you to keep the backups up to 10 years. This is in limited public preview for SQL Managed Instance - [Active geo-replication](active-geo-replication-overview.md) enables you to create readable replicas and manually failover to any replica in case of a datacenter outage or application upgrade.-- [Auto-failover group](auto-failover-group-overview.md#terminology-and-capabilities) allows the application to automatically recovery in case of a datacenter outage.
+- [Auto-failover group](auto-failover-group-overview.md#terminology-and-capabilities) allows the application to automatically recover in case of a datacenter outage.
## Recover a database within the same Azure region
@@ -148,4 +148,4 @@ Sometimes an application must be taken offline because of planned maintenance su
## Next steps
-For a discussion of application design considerations for single databases and for elastic pools, see [Design an application for cloud disaster recovery](designing-cloud-solutions-for-disaster-recovery.md) and [Elastic pool disaster recovery strategies](disaster-recovery-strategies-for-applications-with-elastic-pool.md).
\ No newline at end of file
+For a discussion of application design considerations for single databases and for elastic pools, see [Design an application for cloud disaster recovery](designing-cloud-solutions-for-disaster-recovery.md) and [Elastic pool disaster recovery strategies](disaster-recovery-strategies-for-applications-with-elastic-pool.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/intelligent-insights-troubleshoot-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/intelligent-insights-troubleshoot-performance.md
@@ -10,7 +10,7 @@ ms.topic: troubleshooting
author: danimir ms.author: danil ms.reviewer: wiassaf, sstein
-ms.date: 06/12/2020
+ms.date: 1/14/2021
--- # Troubleshoot Azure SQL Database and Azure SQL Managed Instance performance issues with Intelligent Insights [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
@@ -122,7 +122,9 @@ The diagnostics log outputs locking details that you can use as the basis for tr
The simplest and safest way to mitigate the issue is to keep transactions short and to reduce the lock footprint of the most expensive queries. You can break up a large batch of operations into smaller operations. Good practice is to reduce the query lock footprint by making the query as efficient as possible. Reduce large scans because they increase the chances of deadlocks and adversely affect overall database performance. For identified queries that cause locking, you can create new indexes or add columns to an existing index to avoid table scans.
-For more suggestions, see [How to resolve blocking problems that are caused by lock escalation in SQL Server](https://support.microsoft.com/help/323630/how-to-resolve-blocking-problems-that-are-caused-by-lock-escalation-in).
+For more suggestions, see:
+- [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md)
+- [How to resolve blocking problems that are caused by lock escalation in SQL Server](https://support.microsoft.com/help/323630/how-to-resolve-blocking-problems-that-are-caused-by-lock-escalation-in)
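As a hypothetical illustration of the indexing suggestion above (table, column, and index names are invented for the example), a covering index can remove the scan that a frequently blocked query performs:

```sql
-- Hypothetical covering index: the key columns match the query's filter,
-- and INCLUDE carries the selected columns so the base table isn't scanned
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (Status, TotalDue);
```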
## Increased MAXDOP
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/monitoring-with-dmvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitoring-with-dmvs.md
@@ -11,12 +11,12 @@ ms.topic: how-to
author: WilliamDAssafMSFT ms.author: wiassaf ms.reviewer: sstein
-ms.date: 04/19/2020
+ms.date: 1/14/2021
--- # Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-Microsoft Azure SQL Database and Azure SQL Managed Instance enable a subset of dynamic management views to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on. This topic provides information on how to detect common performance problems by using dynamic management views.
+Microsoft Azure SQL Database and Azure SQL Managed Instance enable a subset of dynamic management views to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on. This article provides information on how to detect common performance problems by using dynamic management views.
Microsoft Azure SQL Database and Azure SQL Managed Instance partially support three categories of dynamic management views:
@@ -248,12 +248,12 @@ GO
When identifying IO performance issues, the top wait type associated with `tempdb` issues is `PAGELATCH_*` (not `PAGEIOLATCH_*`). However, `PAGELATCH_*` waits do not always mean you have `tempdb` contention. This wait may also mean that you have user-object data page contention due to concurrent requests targeting the same data page. To further confirm `tempdb` contention, use [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) to confirm that the wait_resource value begins with `2:x:y`, where `2` is the `tempdb` database ID, `x` is the file ID, and `y` is the page ID.
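A minimal sketch of that confirmation step, using only documented `sys.dm_exec_requests` columns:

```sql
-- PAGELATCH_* waits whose wait_resource points into tempdb (database ID 2)
SELECT session_id, wait_type, wait_resource, wait_time
FROM sys.dm_exec_requests
WHERE wait_type LIKE N'PAGELATCH%'
  AND wait_resource LIKE N'2:%';
```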
-For tempdb contention, a common method is to reduce or re-write application code that relies on `tempdb`. Common `tempdb` usage areas include:
+For tempdb contention, a common method is to reduce or rewrite application code that relies on `tempdb`. Common `tempdb` usage areas include:
- Temp tables - Table variables - Table-valued parameters-- Version store usage (specifically associated with long running transactions)
+- Version store usage (associated with long running transactions)
- Queries that have query plans that use sorts, hash joins, and spools ### Top queries that use table variables and temporary tables
@@ -557,14 +557,14 @@ SELECT resource_name, AVG(avg_cpu_percent) AS Average_Compute_Utilization
FROM sys.server_resource_stats WHERE start_time BETWEEN @s AND @e GROUP BY resource_name
-HAVING AVG(avg_cpu_percent) >= 80
+HAVING AVG(avg_cpu_percent) >= 80;
``` ### sys.resource_stats The [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) view in the **master** database has additional information that can help you monitor the performance of your database at its specific service tier and compute size. The data is collected every 5 minutes and is maintained for approximately 14 days. This view is useful for a longer-term historical analysis of how your database uses resources.
-The following graph shows the CPU resource use for a Premium database with the P2 compute size for each hour in a week. This graph starts on a Monday, shows 5 work days, and then shows a weekend, when much less happens on the application.
+The following graph shows the CPU resource use for a Premium database with the P2 compute size for each hour in a week. This graph starts on a Monday, shows five work days, and then shows a weekend, when much less happens on the application.
![Database resource use](./media/monitoring-with-dmvs/sql_db_resource_utilization.png)
@@ -583,7 +583,7 @@ This example shows you how the data in this view is exposed:
SELECT TOP 10 * FROM sys.resource_stats WHERE database_name = 'resource1'
-ORDER BY start_time DESC
+ORDER BY start_time DESC;
``` ![The sys.resource_stats catalog view](./media/monitoring-with-dmvs/sys_resource_stats.png)
@@ -618,7 +618,7 @@ The next example shows you different ways that you can use the **sys.resource_st
WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE()); ```
-3. With this information about the average and maximum values of each resource metric, you can assess how well your workload fits into the compute size you chose. Usually, average values from **sys.resource_stats** give you a good baseline to use against the target size. It should be your primary measurement stick. For an example, you might be using the Standard service tier with S2 compute size. The average use percentages for CPU and IO reads and writes are below 40 percent, the average number of workers is below 50, and the average number of sessions is below 200. Your workload might fit into the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see whether a database fits into a lower compute size with regards to CPU, reads, and writes, divide the DTU number of the lower compute size by the DTU number of your current compute size, and then multiply the result by 100:
+3. With this information about the average and maximum values of each resource metric, you can assess how well your workload fits into the compute size you chose. Usually, average values from **sys.resource_stats** give you a good baseline to use against the target size. It should be your primary measurement stick. For example, you might be using the Standard service tier with S2 compute size. The average use percentages for CPU and IO reads and writes are below 40 percent, the average number of workers is below 50, and the average number of sessions is below 200. Your workload might fit into the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see whether a database fits into a lower compute size with regard to CPU, reads, and writes, divide the DTU number of the lower compute size by the DTU number of your current compute size, and then multiply the result by 100:
`S1 DTU / S2 DTU * 100 = 20 / 50 * 100 = 40`
@@ -693,7 +693,7 @@ To see the number of current active sessions, run this Transact-SQL query on you
```sql SELECT COUNT(*) AS [Sessions]
-FROM sys.dm_exec_connections
+FROM sys.dm_exec_connections;
``` If you're analyzing a SQL Server workload, modify the query to focus on a specific database. This query helps you determine possible session needs for the database if you are considering moving it to Azure.
@@ -703,7 +703,7 @@ SELECT COUNT(*) AS [Sessions]
FROM sys.dm_exec_connections C INNER JOIN sys.dm_exec_sessions S ON (S.session_id = C.session_id) INNER JOIN sys.databases D ON (D.database_id = S.database_id)
-WHERE D.name = 'MyDatabase'
+WHERE D.name = 'MyDatabase';
``` Again, these queries return a point-in-time count. If you collect multiple samples over time, you'll have the best understanding of your session use.
@@ -737,7 +737,7 @@ ORDER BY 2 DESC;
### Monitoring blocked queries
-Slow or long-running queries can contribute to excessive resource consumption and be the consequence of blocked queries. The cause of the blocking can be poor application design, bad query plans, the lack of useful indexes, and so on. You can use the sys.dm_tran_locks view to get information about the current locking activity in the database. For example code, see [sys.dm_tran_locks (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql).
+Slow or long-running queries can contribute to excessive resource consumption and be the consequence of blocked queries. The cause of the blocking can be poor application design, bad query plans, the lack of useful indexes, and so on. You can use the sys.dm_tran_locks view to get information about the current locking activity in the database. For example code, see [sys.dm_tran_locks (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql). For more information on troubleshooting blocking, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
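For instance, a quick sketch that surfaces lock requests currently waiting (all columns are documented in `sys.dm_tran_locks`):

```sql
-- Lock requests that are waiting right now, and which session asked for them
SELECT resource_type, resource_database_id, request_mode,
       request_status, request_session_id
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT';
```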
### Monitoring query plans
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/query-performance-insight-use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/query-performance-insight-use.md
@@ -10,7 +10,7 @@ ms.topic: how-to
author: danimir ms.author: danil ms.reviewer: wiassaf, sstein
-ms.date: 03/10/2020
+ms.date: 1/14/2021
--- # Query Performance Insight for Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -149,7 +149,7 @@ To view query details:
Two metrics in Query Performance Insight can help you find potential bottlenecks: duration and execution count.
-Long-running queries have the greatest potential for locking resources longer, blocking other users, and limiting scalability. They're also the best candidates for optimization.
+Long-running queries have the greatest potential for locking resources longer, blocking other users, and limiting scalability. They're also the best candidates for optimization. For more information, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
To identify long-running queries:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-logical-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
@@ -10,7 +10,7 @@ ms.topic: reference
author: stevestein ms.author: sstein ms.reviewer: sashan,moslake,josack
-ms.date: 09/15/2020
+ms.date: 1/14/2021
--- # Resource limits for Azure SQL Database and Azure Synapse Analytics servers
@@ -75,7 +75,7 @@ When encountering high session or worker utilization, mitigation options include
- Increasing the service tier or compute size of the database or elastic pool. See [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md). - Optimizing queries to reduce the resource utilization of each query if the cause of increased worker utilization is due to contention for compute resources. For more information, see [Query Tuning/Hinting](performance-guidance.md#query-tuning-and-hinting). - Reducing the [MAXDOP](/sql/database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option#Guidelines) (maximum degree of parallelism) setting (see the sketch after this list).-- Optimizing query workload to reduce number of occurrences and duration of query blocking.
+- Optimizing the query workload to reduce the number of occurrences and the duration of query blocking. For more information, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
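For the MAXDOP option above, a one-line sketch at the database level (the value 4 is illustrative; pick one appropriate for your workload):

```sql
-- Cap parallelism for queries in this database; 4 is an example value
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```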
### Memory
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-baseline.md
@@ -398,7 +398,7 @@ https://docs.microsoft.com/azure/security-center/security-center-identity-access
Learn about Privileged Access Workstations:
-https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
@@ -10,7 +10,7 @@ ms.topic: conceptual
author: stevestein ms.author: sstein ms.reviewer:
-ms.date: 10/19/2020
+ms.date: 1/13/2021
--- # Hyperscale service tier
@@ -163,16 +163,15 @@ If you need to restore a Hyperscale database in Azure SQL Database to a region o
2. Follow the instructions in the [geo-restore](./recovery-using-backups.md#geo-restore) topic of the page on restoring a database in Azure SQL Database from automatic backups. > [!NOTE]
-> Because the source and target are in separate regions, the database cannot share snapshot storage with the source database as in non-geo restores, which complete extremely quickly. In the case of a geo-restore of a Hyperscale database, it will be a size-of-data operation, even if the target is in the paired region of the geo-replicated storage. That means that doing a geo-restore will take time proportional to the size of the database being restored. If the target is in the paired region, the copy will be within a region, which will be significantly faster than a cross-region copy, but it will still be a size-of-data operation.
+> Because the source and target are in separate regions, the database cannot share snapshot storage with the source database as in non-geo restores, which complete quickly regardless of database size. In the case of a geo-restore of a Hyperscale database, it will be a size-of-data operation, even if the target is in the paired region of the geo-replicated storage. Therefore, a geo-restore will take time proportional to the size of the database being restored. If the target is in the paired region, data transfer will be within a region, which will be significantly faster than a cross-region data transfer, but it will still be a size-of-data operation.
## <a name=regions></a>Available regions
-The Azure SQL Database Hyperscale tier is available in all regions but enabled by default available in the following regions listed below.
-If you want to create Hyperscale database in a region that isn't listed as supported, you can send an onboarding request via Azure portal. For instructions, see [Request quota increases for Azure SQL Database](quota-increase-request.md) for instructions. When submitting your request, use the following guidelines:
+The Azure SQL Database Hyperscale tier is available in all regions, but is enabled by default only in the regions listed below. If you want to create a Hyperscale database in a region where Hyperscale is not enabled by default, you can send an onboarding request via the Azure portal. For instructions, see [Request quota increases for Azure SQL Database](quota-increase-request.md). When submitting your request, use the following guidelines:
- Use the [Region access](quota-increase-request.md#region) SQL Database quota type.-- In the text details, add the compute SKU/total cores including readable replicas.-- Also specify the estimated TB.
+- In the description, add the compute SKU/total cores including readable replicas, and indicate that you are requesting Hyperscale capacity.
+- Also specify a projection of the total size of all databases over time in TB.
Enabled Regions: - Australia East
@@ -218,12 +217,12 @@ These are the current limitations to the Hyperscale service tier as of GA. We'r
| Issue | Description | | :---- | :--------- | | The Manage Backups pane for a server doesn't show Hyperscale databases. These will be filtered from the view. | Hyperscale has a separate method for managing backups, so the Long-Term Retention and Point-in-Time backup retention settings don't apply. Accordingly, Hyperscale databases don't appear in the Manage Backup pane.<br><br>For databases migrated to Hyperscale from other Azure SQL Database service tiers, pre-migration backups are kept for the duration of [backup retention](automated-backups-overview.md#backup-retention) period of the source database. These backups can be used to [restore](recovery-using-backups.md#programmatic-recovery-using-automated-backups) the source database to a point in time before migration.|
-| Point-in-time restore | A non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database. For a non-Hyperscale database that has been migrated to Hyperscale by changing its service tier, restore to a point in time before migration and within the backup retention period of the database is possible [programmatically](recovery-using-backups.md#programmatic-recovery-using-automated-backups). The restored database will be non-Hyperscale. |
-| If a database has one or more data files larger than 1 TB, migration fails | In some cases, it may be possible to work around this issue by shrinking the large files to be less than 1 TB. If migrating a database being used during the migration process, make sure that no file gets larger than 1 TB. Use the following query to determine the size of database files. `SELECT *, name AS file_name, size * 8. / 1024 / 1024 AS file_size_GB FROM sys.database_files WHERE type_desc = 'ROWS'`;|
+| Point-in-time restore | A non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database. For a non-Hyperscale database that has been migrated to Hyperscale by changing its service tier, restore to a point in time before migration and within the backup retention period of the database is supported [programmatically](recovery-using-backups.md#programmatic-recovery-using-automated-backups). The restored database will be non-Hyperscale. |
+| When changing Azure SQL Database service tier to Hyperscale, the operation fails if the database has any data files larger than 1 TB | In some cases, it may be possible to work around this issue by [shrinking](file-space-manage.md#shrinking-data-files) the large files to be less than 1 TB before attempting to change the service tier to Hyperscale. Use the following query to determine the current size of database files. `SELECT file_id, name AS file_name, size * 8. / 1024 / 1024 AS file_size_GB FROM sys.database_files WHERE type_desc = 'ROWS';`|
| SQL Managed Instance | Azure SQL Managed Instance isn't currently supported with Hyperscale databases. | | Elastic Pools | Elastic Pools aren't currently supported with Hyperscale.|
-| Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az-sql-db-export) and [az sql db import](/cli/azure/sql/db#az-sql-db-import), and from [REST API](/rest/api/sql/databases%20-%20import%20export) isn't supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.|
-| Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be recreated as disk tables.|
+| Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az-sql-db-export) and [az sql db import](/cli/azure/sql/db#az-sql-db-import), and from [REST API](/rest/api/sql/databases%20-%20import%20export) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.|
+| Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be changed to disk tables.|
| Geo Replication | You can't yet configure geo-replication for Azure SQL Database Hyperscale. | | Database Copy | Database copy on Hyperscale is now in public preview. | | Intelligent Database Features | With the exception of the "Force Plan" option, all other Automatic Tuning options aren't yet supported on Hyperscale: options may appear to be enabled, but there won't be any recommendations or actions made. |
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/troubleshoot-common-errors-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
@@ -9,7 +9,7 @@ ms.custom: seo-lt-2019, OKR 11/2019, sqldbrb=1
author: ramakoni1 ms.author: ramakoni ms.reviewer: sstein,vanto
-ms.date: 01/14/2020
+ms.date: 01/14/2021
--- # Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance
@@ -119,7 +119,7 @@ Typically, the service administrator can use the following steps to add the logi
```sql CREATE LOGIN <SQL_login_name, sysname, login_name>
- WITH PASSWORD = ΓÇÿ<password, sysname, Change_Password>ΓÇÖ
+ WITH PASSWORD = '<password, sysname, Change_Password>'
GO ```
@@ -136,7 +136,7 @@ Typically, the service administrator can use the following steps to add the logi
GO -- Add user to the database owner role
- EXEC sp_addrolemember NΓÇÖdb_ownerΓÇÖ, NΓÇÖ<user_name, sysname, user_name>ΓÇÖ
+ EXEC sp_addrolemember N'db_owner', N'<user_name, sysname, user_name>'
GO ```
@@ -178,23 +178,21 @@ To work around this issue, try one of the following methods:
- Verify whether there are long-running queries. > [!NOTE]
- > This is a minimalist approach that might not resolve the issue.
+ > This is a minimalist approach that might not resolve the issue. For detailed information on troubleshooting query blocking, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
1. Run the following SQL query to check the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) view to see any blocking requests: ```sql
- SELECT * FROM dm_exec_requests
+ SELECT * FROM sys.dm_exec_requests;
``` 2. Determine the **input buffer** for the head blocker. 3. Tune the head blocker query.
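For step 2, `sys.dm_exec_input_buffer` returns the last statement submitted on a session; a sketch, assuming session 52 turned up as the head blocker in the previous query:

```sql
-- Inspect the input buffer of the suspected head blocker (session_id 52 is an example)
SELECT * FROM sys.dm_exec_input_buffer(52, NULL);
```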
- For an in-depth troubleshooting procedure, see [Is my query running fine in the cloud?](/archive/blogs/sqlblog/is-my-query-running-fine-in-the-cloud).
+ For an in-depth troubleshooting procedure, see [Is my query running fine in the cloud?](/archive/blogs/sqlblog/is-my-query-running-fine-in-the-cloud).
If the database consistently reaches its limit despite addressing blocking and long-running queries, consider upgrading to an edition with more resources (see [Editions](https://azure.microsoft.com/pricing/details/sql-database/)).
-For more information about dynamic management views, see [System dynamic management views](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views).
- For more information about database limits, see [SQL Database resource limits for servers](./resource-limits-logical-server.md). ### Error 10929: Resource ID: 1
@@ -229,7 +227,7 @@ The following steps can either help you work around the problem or provide you w
FROM sys.objects o JOIN sys.dm_db_partition_stats p on p.object_id = o.object_id GROUP BY o.name
- ORDER BY [Table Size (MB)] DESC
+ ORDER BY [Table Size (MB)] DESC;
``` 2. If the current size does not exceed the maximum size supported for your edition, you can use ALTER DATABASE to increase the MAXSIZE setting.
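A sketch of that ALTER DATABASE step (the database name and size are examples; use a size your edition supports):

```sql
-- Example: raise the size cap for the database to 250 GB
ALTER DATABASE [MyDatabase] MODIFY (MAXSIZE = 250 GB);
```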
@@ -248,7 +246,7 @@ If you repeatedly encounter this error, try to resolve the issue by following th
1. Check the sys.dm_exec_requests view to see any open sessions that have a high value for the total_elapsed_time column. Perform this check by running the following SQL script: ```sql
- SELECT * FROM dm_exec_requests
+ SELECT * FROM sys.dm_exec_requests;
``` 2. Determine the input buffer for the long-running query.
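A narrower variant of the query in step 1, as a sketch that sorts by `total_elapsed_time` (reported in milliseconds):

```sql
-- Requests that have been running for more than five minutes
SELECT session_id, status, command, total_elapsed_time
FROM sys.dm_exec_requests
WHERE total_elapsed_time > 300000
ORDER BY total_elapsed_time DESC;
```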
@@ -336,7 +334,7 @@ This issue occurs because the account doesn't have permission to access the mast
To resolve this issue, follow these steps: 1. On the login screen of SSMS, select **Options**, and then select **Connection Properties**.
-2. In the **Connect to database** field, enter the userΓÇÖs default database name as the default login database, and then select **Connect**.
+2. In the **Connect to database** field, enter the user's default database name as the default login database, and then select **Connect**.
![Connection properties](./media/troubleshoot-common-errors-issues/cannot-open-database-master.png)
@@ -369,7 +367,7 @@ For additional guidance on fine-tuning performance, see the following resources:
## Steps to fix common connection issues 1. Make sure that TCP/IP is enabled as a client protocol on the application server. For more information, see [Configure client protocols](/sql/database-engine/configure-windows/configure-client-protocols). On application servers where you don't have SQL tools installed, verify that TCP/IP is enabled by running **cliconfg.exe** (SQL Server Client Network utility).
-2. Check the applicationΓÇÖs connection string to make sure it's configured correctly. For example, make sure that the connection string specifies the correct port (1433) and fully qualified server name.
+2. Check the application's connection string to make sure it's configured correctly. For example, make sure that the connection string specifies the correct port (1433) and fully qualified server name.
See [Get connection information](./connect-query-ssms.md#get-server-connection-information). 3. Try increasing the connection timeout value. We recommend using a connection timeout of at least 30 seconds. 4. Test the connectivity between the application server and the Azure SQL Database by using [SQL Server Management Studio (SSMS)](./connect-query-ssms.md), a UDL file, ping, or telnet. For more information, see [Troubleshooting connectivity issues](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server) and [Diagnostics for connectivity issues](./troubleshoot-common-connectivity-issues.md#diagnostics).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/understand-resolve-blocking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/understand-resolve-blocking.md new file mode 100644
@@ -0,0 +1,386 @@
+---
+title: Understand and resolve Azure SQL blocking problems
+titleSuffix: Azure SQL Database
+description: "An overview of Azure SQL database-specific topics on blocking and troubleshooting."
+services: sql-database
+dev_langs:
+ - "TSQL"
+ms.service: sql-database
+ms.subservice: performance
+ms.custom:
+ms.devlang:
+ms.topic: conceptual
+author: WilliamDAssafMSFT
+ms.author: wiassaf
+ms.reviewer:
+ms.date: 1/14/2021
+---
+# Understand and resolve Azure SQL Database blocking problems
+[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+
+## Objective
+
+This article describes blocking in Azure SQL databases and demonstrates how to troubleshoot and resolve it.
+
+In this article, the term connection refers to a single logged-on session of the database. Each connection appears as a session ID (SPID) or session_id in many DMVs. Each of these SPIDs is often referred to as a process, although it is not a separate process context in the usual sense. Rather, each SPID consists of the server resources and data structures necessary to service the requests of a single connection from a given client. A single client application may have one or more connections. From the perspective of Azure SQL Database, there is no difference between multiple connections from a single client application on a single client computer and multiple connections from multiple client applications or multiple client computers; one connection can block another connection, regardless of the source client.
+
+> [!NOTE]
+> **This content is specific to Azure SQL Database.** Azure SQL Database is based on the latest stable version of the Microsoft SQL Server database engine, so much of the content is similar, though troubleshooting options and tools may differ. For more on blocking in SQL Server, see [Understand and resolve SQL Server blocking problems](/troubleshoot/sql/performance/understand-resolve-blocking).
+
+## Understand blocking
+
+Blocking is an unavoidable and by-design characteristic of any relational database management system (RDBMS) with lock-based concurrency. Blocking occurs when one session holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same resource. Typically, the time frame for which the first SPID locks the resource is small. When the owning session releases the lock, the second connection is then free to acquire its own lock on the resource and continue processing. This is normal behavior and may happen many times throughout the course of a day with no noticeable effect on system performance.
+
+The duration and transaction context of a query determine how long its locks are held and, thereby, their effect on other queries. If the query is not executed within a transaction (and no lock hints are used), the locks for SELECT statements will only be held on a resource at the time it is actually being read, not during the query. For INSERT, UPDATE, and DELETE statements, the locks are held during the query, both for data consistency and to allow the query to be rolled back if necessary.
+
+For queries executed within a transaction, the duration for which the locks are held is determined by the type of query, the transaction isolation level, and whether lock hints are used in the query. For a description of locking, lock hints, and transaction isolation levels, see the following articles (a short illustration follows the list):
+
+* [Locking in the Database Engine](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide)
+* [Customizing Locking and Row Versioning](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#customizing-locking-and-row-versioning)
+* [Lock Modes](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#lock_modes)
+* [Lock Compatibility](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#lock_compatibility)
+* [Transactions](/sql/t-sql/language-elements/transactions-transact-sql)
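+
+For example, a minimal sketch (the table name is illustrative) of how the isolation level changes lock duration: under the default READ COMMITTED level, shared locks are released as rows are read, whereas under REPEATABLE READ they are held until the transaction ends:
+
+```sql
+-- Shared locks taken by this SELECT are held until COMMIT under REPEATABLE READ,
+-- blocking conflicting writers for the life of the transaction.
+SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
+BEGIN TRAN;
+    SELECT * FROM dbo.titles WHERE title_id = 'BU1032';
+    -- ... other work while the shared locks are still held ...
+COMMIT TRAN;
+```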
+
+When locking and blocking persist to the point where there is a detrimental effect on system performance, it is due to one of the following reasons:
+
+* A SPID holds locks on a set of resources for an extended period of time before releasing them. This type of blocking resolves itself over time but can cause performance degradation.
+
+* A SPID holds locks on a set of resources and never releases them. This type of blocking does not resolve itself and prevents access to the affected resources indefinitely.
+
+In the first scenario, the situation can be very fluid as different SPIDs cause blocking on different resources over time, creating a moving target. These situations are difficult to troubleshoot with [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) because the issue must be narrowed down to individual queries. In contrast, the second situation results in a consistent state that can be easier to diagnose.
+
+## Applications and blocking
+
+There may be a tendency to focus on server-side tuning and platform issues when facing a blocking problem. However, attention paid only to the database may not lead to a resolution, and can absorb time and energy better directed at examining the client application and the queries it submits. No matter what level of visibility the application exposes regarding the database calls being made, a blocking problem nonetheless frequently requires both the inspection of the exact SQL statements submitted by the application and the application's exact behavior regarding query cancellation, connection management, fetching of all result rows, and so on. If the development tool does not allow explicit control over connection management, query cancellation, query time-out, result fetching, and so on, blocking problems may not be resolvable. This potential should be closely examined before selecting an application development tool for Azure SQL Database, especially for performance-sensitive OLTP environments.
+
+Pay attention to database performance during the design and construction phase of the database and application. In particular, the resource consumption, isolation level, and transaction path length should be evaluated for each query. Each query and transaction should be as lightweight as possible. Good connection management discipline must be exercised; without it, the application may appear to have acceptable performance at low numbers of users, but performance may degrade significantly as the number of users scales upward.
+
+With proper application and query design, Azure SQL Database is capable of supporting many thousands of simultaneous users on a single server, with little blocking.
+
+> [!Note]
+> For more application development guidance, see [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md) and [Transient Fault Handling](/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling).
+
+## Troubleshoot blocking
+
+Regardless of the blocking situation, the methodology for troubleshooting locking is the same, and it dictates the organization of the rest of this article. The concept is to find the head blocker and identify what that query is doing and why it is blocking. Once the problematic query is identified (that is, what is holding locks for the prolonged period), the next step is to analyze and determine why the blocking is happening. After we understand the why, we can then make changes by redesigning the query and the transaction.
+
+Steps in troubleshooting:
+
+1. Identify the main blocking session (head blocker)
+
+2. Find the query and transaction that is causing the blocking (what is holding locks for a prolonged period)
+
+3. Analyze/understand why the prolonged blocking occurs
+
+4. Resolve blocking issue by redesigning query and transaction
+
+Now let's dive in to discuss how to pinpoint the main blocking session with an appropriate data capture.
+
+## Gather blocking information
+
+To counteract the difficulty of troubleshooting blocking problems, a database administrator can use SQL scripts that constantly monitor the state of locking and blocking in the Azure SQL database. To gather this data, there are essentially two methods.
+
+The first is to query dynamic management objects (DMOs) and store the results for comparison over time. Some objects referenced in this article are dynamic management views (DMVs) and some are dynamic management functions (DMFs). The second method is to use XEvents to capture what is executing.
+
+## Gather information from DMVs
+
+Referencing DMVs to troubleshoot blocking has the goal of identifying the SPID (session ID) at the head of the blocking chain and the SQL statement it is running. Look for victim SPIDs that are being blocked. If any SPID is being blocked by another SPID, investigate the SPID that owns the resource (the blocking SPID). Is that owner SPID being blocked as well? You can walk the chain to find the head blocker, then investigate why it is maintaining its lock.
+
+Remember to run each of these scripts in the target Azure SQL database.
+
+* The sp_who and sp_who2 commands are older commands to show all current sessions. The DMV sys.dm_exec_sessions returns more data in a result set that is easier to query and filter. You will find sys.dm_exec_sessions at the core of other queries.
+
+* If you already have a particular session identified, you can use `DBCC INPUTBUFFER(<session_id>)` to find the last statement that was submitted by a session. Similar results can be returned with the sys.dm_exec_input_buffer dynamic management function (DMF), in a result set that is easier to query and filter, providing the session_id and the request_id. For example, to return the most recent query submitted by session_id 66 and request_id 0:
+
+```sql
+SELECT * FROM sys.dm_exec_input_buffer (66,0);
+```
+
+* Refer to sys.dm_exec_requests and its blocking_session_id column. When blocking_session_id = 0, a session is not being blocked. While sys.dm_exec_requests lists only requests currently executing, any connection (active or not) is listed in sys.dm_exec_sessions, as the example below shows. The larger sample query later in this section builds on this common join between sys.dm_exec_requests and sys.dm_exec_sessions.
+
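+As a first pass, a minimal query such as the following lists only the currently blocked requests and the sessions blocking them:
+
+```sql
+SELECT session_id, blocking_session_id, wait_type, wait_time, status
+FROM sys.dm_exec_requests
+WHERE blocking_session_id <> 0;
+```
+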
+* Run this sample query to find the actively executing queries and their current SQL batch text or input buffer text, using the [sys.dm_exec_sql_text](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sql-text-transact-sql) and [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management objects. If the data returned by the `text` field of sys.dm_exec_sql_text is NULL, the query is not currently executing. In that case, the `event_info` field of sys.dm_exec_input_buffer will contain the last command string passed to the SQL engine.
+
+```sql
+WITH cteBL (session_id, blocking_these) AS
+(SELECT s.session_id, blocking_these = x.blocking_these FROM sys.dm_exec_sessions s
+CROSS APPLY (SELECT isnull(convert(varchar(6), er.session_id),'') + ', '
+ FROM sys.dm_exec_requests as er
+ WHERE er.blocking_session_id = isnull(s.session_id ,0)
+ AND er.blocking_session_id <> 0
+ FOR XML PATH('') ) AS x (blocking_these)
+)
+SELECT s.session_id, blocked_by = r.blocking_session_id, bl.blocking_these
+, batch_text = t.text, input_buffer = ib.event_info, *
+FROM sys.dm_exec_sessions s
+LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id
+INNER JOIN cteBL as bl on s.session_id = bl.session_id
+OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) t
+OUTER APPLY sys.dm_exec_input_buffer(s.session_id, NULL) AS ib
+WHERE blocking_these is not null or r.blocking_session_id > 0
+ORDER BY len(bl.blocking_these) desc, r.blocking_session_id desc, r.session_id;
+```
+
+* To catch long-running or uncommitted transactions, use another set of DMVs for viewing current open transactions, including [sys.dm_tran_database_transactions](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-database-transactions-transact-sql), [sys.dm_tran_session_transactions](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-session-transactions-transact-sql), [sys.dm_exec_connections](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-connections-transact-sql), and sys.dm_exec_sql_text. There are several DMVs associated with tracking transactions; for more, see the [DMVs on transactions](/sql/relational-databases/system-dynamic-management-views/transaction-related-dynamic-management-views-and-functions-transact-sql).
+
+```sql
+SELECT [s_tst].[session_id],
+[database_name] = DB_NAME (s_tdt.database_id),
+[s_tdt].[database_transaction_begin_time],
+[sql_text] = [s_est].[text]
+FROM sys.dm_tran_database_transactions [s_tdt]
+INNER JOIN sys.dm_tran_session_transactions [s_tst] ON [s_tst].[transaction_id] = [s_tdt].[transaction_id]
+INNER JOIN sys.dm_exec_connections [s_ec] ON [s_ec].[session_id] = [s_tst].[session_id]
+CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est];
+```
+
+* Reference [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql), which operates at the thread/task layer of the engine. It returns information about the SQL wait type the request is currently experiencing. Like sys.dm_exec_requests, sys.dm_os_waiting_tasks returns only active requests.
+
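+For example, a query along these lines returns the currently waiting tasks that are blocked, along with the blocking session and the resource being waited on:
+
+```sql
+SELECT session_id, wait_duration_ms, wait_type, blocking_session_id, resource_description
+FROM sys.dm_os_waiting_tasks
+WHERE blocking_session_id IS NOT NULL;
+```
+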
+> [!Note]
+> For much more on wait types including aggregated wait stats over time, see the DMV [sys.dm_db_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-wait-stats-azure-sql-database). This DMV returns aggregate wait stats for the current database only.
+
+* Use the [sys.dm_tran_locks](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql) DMV for more granular information on what locks have been placed by queries. This DMV can return large amounts of data on a busy production database, but it is useful for diagnosing which locks are currently held.
+
+Due to the INNER JOIN on sys.dm_os_waiting_tasks, the following query restricts the output from sys.dm_tran_locks only to currently blocked requests, their wait status, and their locks:
+
+```sql
+SELECT table_name = schema_name(o.schema_id) + '.' + o.name
+, wt.wait_duration_ms, wt.wait_type, wt.blocking_session_id, wt.resource_description
+, tm.resource_type, tm.request_status, tm.request_mode, tm.request_session_id
+FROM sys.dm_tran_locks AS tm
+INNER JOIN sys.dm_os_waiting_tasks as wt ON tm.lock_owner_address = wt.resource_address
+LEFT OUTER JOIN sys.partitions AS p on p.hobt_id = tm.resource_associated_entity_id
+LEFT OUTER JOIN sys.objects o on o.object_id = p.object_id or tm.resource_associated_entity_id = o.object_id
+WHERE resource_database_id = DB_ID()
+AND object_name(p.object_id) = '<table_name>';
+```
+
+* With DMVs, storing the query results over time provides data points that allow you to review blocking over a specified time interval, to identify persistent blocking or trends.
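+
+For example, a scheduled job could append the current blocked requests to a logging table each time it runs. A minimal sketch (the table name is hypothetical):
+
+```sql
+-- One-time setup: a table to hold periodic snapshots of blocked requests
+CREATE TABLE dbo.BlockingSnapshot (
+    capture_time datetime2 NOT NULL DEFAULT SYSDATETIME(),
+    session_id int NOT NULL,
+    blocking_session_id int NOT NULL,
+    wait_type nvarchar(60) NULL,
+    wait_time_ms int NULL,
+    sql_text nvarchar(max) NULL
+);
+
+-- Run on a schedule: capture the requests that are blocked right now
+INSERT INTO dbo.BlockingSnapshot (session_id, blocking_session_id, wait_type, wait_time_ms, sql_text)
+SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
+FROM sys.dm_exec_requests AS r
+OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
+WHERE r.blocking_session_id <> 0;
+```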
+
+## Gather information from Extended events
+
+In addition to the above information, it is often necessary to capture a trace of the activities on the server to thoroughly investigate a blocking problem on Azure SQL Database. For example, if a session executes multiple statements within a transaction, only the last statement that was submitted will be represented. However, one of the earlier statements may be the reason locks are still being held. A trace will enable you to see all the commands executed by a session within the current transaction.
+
+There are two ways to capture traces in SQL Server: Extended Events (XEvents) and Profiler traces. However, [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) is a deprecated trace technology that is not supported for Azure SQL Database. [Extended Events](/sql/relational-databases/extended-events/extended-events) is the newer tracing technology that allows more versatility and less impact on the observed system, and its interface is integrated into SQL Server Management Studio (SSMS).
+
+Refer to the document that explains how to use the [Extended Events New Session Wizard](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server) in SSMS. For Azure SQL databases, however, SSMS provides an Extended Events subfolder under each database in Object Explorer. Use an Extended Events session wizard to capture these useful events (a scripted sketch follows the list):
+
+- Category Errors:
+ - attention
+ - error_reported
+ - execution_warning
+
+- Category Warnings:
+ - missing_join_predicate
+
+- Category Execution:
+ - rpc_completed
+ - rpc_starting
+ - sql_batch_completed
+ - sql_batch_starting
+
+- Category Lock:
+ - lock_deadlock
+
+- Category Session:
+ - existing_connection
+ - login
+ - logout
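+
+As a rough T-SQL alternative to the wizard, an event session in Azure SQL Database is created at the database scope. A minimal sketch covering a subset of the events above (the session name is illustrative):
+
+```sql
+CREATE EVENT SESSION [blocking_capture] ON DATABASE
+    ADD EVENT sqlserver.attention,
+    ADD EVENT sqlserver.rpc_completed,
+    ADD EVENT sqlserver.sql_batch_completed,
+    ADD EVENT sqlserver.lock_deadlock,
+    ADD EVENT sqlserver.existing_connection,
+    ADD EVENT sqlserver.login,
+    ADD EVENT sqlserver.logout
+    ADD TARGET package0.ring_buffer;  -- keep recent events in memory
+
+ALTER EVENT SESSION [blocking_capture] ON DATABASE STATE = START;
+```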
+
+## Identify and resolve common blocking scenarios
+
+By examining the above information, you can determine the cause of most blocking problems. The rest of this article discusses how to use this information to identify and resolve common blocking scenarios. This discussion assumes you have used the blocking scripts (referenced earlier) to capture information on the blocking SPIDs and have captured application activity using an XEvent session.
+
+## Analyze blocking data
+
+* Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions to determine the heads of the blocking chains, using blocking_these and session_id. This most clearly identifies which requests are blocked and which are blocking. Look further into the sessions that are blocked and blocking. Is there a common root to the blocking chain? The sessions likely share a common table, and one or more of the sessions involved in a blocking chain is performing a write operation.
+
+* Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions for information on the SPIDs at the head of the blocking chain. Look for the following fields:
+
+ - `sys.dm_exec_requests.status`
+ This column shows the status of a particular request. Typically, a sleeping status indicates that the SPID has completed execution and is waiting for the application to submit another query or batch. A runnable or running status indicates that the SPID is currently processing a query. The following table gives brief explanations of the various status values.
+
+ | Status | Meaning |
+ |:-|:-|
+ | Background | The SPID is running a background task, such as deadlock detection, log writer, or checkpoint. |
+ | Sleeping | The SPID is not currently executing. This usually indicates that the SPID is awaiting a command from the application. |
+ | Running | The SPID is currently running on a scheduler. |
+ | Runnable | The SPID is in the runnable queue of a scheduler and waiting to get scheduler time. |
+ | Suspended | The SPID is waiting for a resource, such as a lock or a latch. |
+
+ - `sys.dm_exec_sessions.open_transaction_count`
+ This field tells you the number of open transactions in this session. If this value is greater than 0, the SPID is within an open transaction and may be holding locks acquired by any statement within the transaction.
+
+ - `sys.dm_exec_requests.open_transaction_count`
+ Similarly, this field tells you the number of open transactions in this request. If this value is greater than 0, the SPID is within an open transaction and may be holding locks acquired by any statement within the transaction.
+
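+ For example, a query like the following finds the sessions that currently have an open transaction:
+
+ ```sql
+ SELECT session_id, open_transaction_count, last_request_start_time
+ FROM sys.dm_exec_sessions
+ WHERE open_transaction_count > 0;
+ ```
+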
+ - `sys.dm_exec_requests.wait_type`, `wait_time`, and `last_wait_type`
+ If the `sys.dm_exec_requests.wait_type` is NULL, the request is not currently waiting for anything, and the `last_wait_type` value indicates the last `wait_type` that the request encountered. For more information about `sys.dm_os_wait_stats` and a description of the most common wait types, see [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql). The `wait_time` value can be used to determine whether the request is making progress. When a query against sys.dm_exec_requests returns a value in the `wait_time` column that is less than the `wait_time` value from a previous query, this indicates that the prior lock was acquired and released and the request is now waiting on a new lock (assuming a non-zero `wait_time`). Verify this by comparing the `wait_resource` values between successive sys.dm_exec_requests outputs, which display the resource for which the request is waiting.
+
+ - `sys.dm_exec_requests.wait_resource`
+ This field indicates the resource that a blocked request is waiting on. The following table lists common `wait_resource` formats and their meaning:
+
+ | Resource | Format | Example | Explanation |
+ |:-|:-|:-|:-|
+ | Table | DatabaseID:ObjectID:IndexID | TAB: 5:261575970:1 | In this case, database ID 5 is the pubs sample database and object ID 261575970 is the titles table and 1 is the clustered index. |
+ | Page | DatabaseID:FileID:PageID | PAGE: 5:1:104 | In this case, database ID 5 is pubs, file ID 1 is the primary data file, and page 104 is a page belonging to the titles table. To identify the object_id the page belongs to, use the dynamic management function [sys.dm_db_page_info](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-page-info-transact-sql), passing in the DatabaseID, FileId, PageId from the `wait_resource`. |
+ | Key | DatabaseID:Hobt_id (Hash value for index key) | KEY: 5:72057594044284928 (3300a4f361aa) | In this case, database ID 5 is Pubs, Hobt_ID 72057594044284928 corresponds to index_id 2 for object_id 261575970 (titles table). Use the sys.partitions catalog view to associate the hobt_id to a particular index_id and object_id. There is no way to unhash the index key hash to a specific key value. |
+ | Row | DatabaseID:FileID:PageID:Slot(row) | RID: 5:1:104:3 | In this case, database ID 5 is pubs, file ID 1 is the primary data file, page 104 is a page belonging to the titles table, and slot 3 indicates the row's position on the page. |
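+
+ For example, to identify the object that a page resource such as PAGE: 5:1:104 belongs to, a call along the following lines can be used (the IDs are the illustrative values from the table above):
+
+ ```sql
+ -- Returns page metadata, including the owning object_id
+ SELECT page_info.object_id, OBJECT_NAME(page_info.object_id) AS object_name
+ FROM sys.dm_db_page_info(5, 1, 104, 'LIMITED') AS page_info;
+ ```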
+
+ - `sys.dm_tran_active_transactions`
+ The [sys.dm_tran_active_transactions](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-active-transactions-transact-sql) DMV contains data about open transactions that can be joined to other DMVs for a complete picture of transactions awaiting commit or rollback. Use the following query to return information on open transactions, joined to other DMVs including [sys.dm_tran_session_transactions](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-session-transactions-transact-sql). Consider a transaction's current state, `transaction_begin_time`, and other situational data to evaluate whether it could be a source of blocking.
+
+ ```sql
+ SELECT tst.session_id, [database_name] = db_name(s.database_id)
+ , tat.transaction_begin_time
+ , transaction_duration_s = datediff(s, tat.transaction_begin_time, sysdatetime())
+ , transaction_type = CASE tat.transaction_type WHEN 1 THEN 'Read/write transaction'
+ WHEN 2 THEN 'Read-only transaction'
+ WHEN 3 THEN 'System transaction'
+ WHEN 4 THEN 'Distributed transaction' END
+ , input_buffer = ib.event_info, tat.transaction_uow
+ , transaction_state = CASE tat.transaction_state
+ WHEN 0 THEN 'The transaction has not been completely initialized yet.'
+ WHEN 1 THEN 'The transaction has been initialized but has not started.'
+ WHEN 2 THEN 'The transaction is active - has not been committed or rolled back.'
+ WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
+ WHEN 4 THEN 'The commit process has been initiated on the distributed transaction.'
+ WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
+ WHEN 6 THEN 'The transaction has been committed.'
+ WHEN 7 THEN 'The transaction is being rolled back.'
+ WHEN 8 THEN 'The transaction has been rolled back.' END
+ , transaction_name = tat.name, request_status = r.status
+ , azure_dtc_state = CASE tat.dtc_state
+ WHEN 1 THEN 'ACTIVE'
+ WHEN 2 THEN 'PREPARED'
+ WHEN 3 THEN 'COMMITTED'
+ WHEN 4 THEN 'ABORTED'
+ WHEN 5 THEN 'RECOVERED' END
+ , tst.is_user_transaction, tst.is_local
+ , session_open_transaction_count = tst.open_transaction_count
+ , s.host_name, s.program_name, s.client_interface_name, s.login_name, s.is_user_process
+ FROM sys.dm_tran_active_transactions tat
+ INNER JOIN sys.dm_tran_session_transactions tst on tat.transaction_id = tst.transaction_id
+ INNER JOIN Sys.dm_exec_sessions s on s.session_id = tst.session_id
+ LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id
+ CROSS APPLY sys.dm_exec_input_buffer(s.session_id, null) AS ib;
+ ```
+
+ - Other columns
+
+ The remaining columns in [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql) and [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) can provide insight into the root of a problem as well. Their usefulness varies depending on the circumstances of the problem. For example, you can determine whether the problem happens only from certain clients (hostname), on certain network libraries (net_library), when the last batch was submitted by a SPID (`last_request_start_time` in sys.dm_exec_sessions), how long a request has been running (`start_time` in sys.dm_exec_requests), and so on.
+
+## Common blocking scenarios
+
+The table below maps common symptoms to their probable causes.
+
+The `wait_type`, `open_transaction_count`, and `status` columns refer to information returned by [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql); other columns may be returned by [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql). The "Resolves?" column indicates whether the blocking will resolve on its own or whether the session should be killed via the `KILL` command. For more information, see [KILL (Transact-SQL)](/sql/t-sql/language-elements/kill-transact-sql).
+
+| Scenario | Wait type | Open_Tran | Status | Resolves? | Other Symptoms |
+|:-|:-|:-|:-|:-|:-|
+| 1 | NOT NULL | \>= 0 | runnable | Yes, when the query finishes. | In sys.dm_exec_sessions, the **reads**, **cpu_time**, and/or **memory_usage** columns will increase over time. Duration for the query will be high when completed. |
+| 2 | NULL | \>0 | sleeping | No, but the SPID can be killed. | An attention signal may be seen in the Extended Events session for this SPID, indicating a query time-out or cancel has occurred. |
+| 3 | NULL | \>= 0 | runnable | No. Will not resolve until the client fetches all rows or closes the connection. The SPID can be killed, but it may take up to 30 seconds. | If open_transaction_count = 0 and the SPID holds locks while the transaction isolation level is the default (READ COMMITTED), this is a likely cause. |
+| 4 | Varies | \>= 0 | runnable | No. Will not resolve until the client cancels its queries or closes its connections. The SPIDs can be killed, but it may take up to 30 seconds. | The **hostname** column in sys.dm_exec_sessions for the SPID at the head of a blocking chain will be the same as that of one of the SPIDs it is blocking. |
+| 5 | NULL | \>0 | rollback | Yes. | An attention signal may be seen in the Extended Events session for this SPID, indicating a query time-out or cancel has occurred, or simply that a rollback statement has been issued. |
+| 6 | NULL | \>0 | sleeping | Eventually. When the underlying network session is determined to be no longer active, the Azure SQL Database connection will be broken. | The `last_request_start_time` value in sys.dm_exec_sessions is much earlier than the current time. |
+
+The following sections expand on these scenarios.
+
+1. Blocking caused by a normally running query with a long execution time
+
+ **Resolution**: The solution to this type of blocking problem is to look for ways to optimize the query. This class of blocking problem may in fact be a performance problem, and require you to pursue it as such. For information on troubleshooting a specific slow-running query, see [How to troubleshoot slow-running queries on SQL Server](/troubleshoot/sql/performance/troubleshoot-slow-running-queries). For more information, see [Monitor and Tune for Performance](/sql/relational-databases/performance/monitor-and-tune-for-performance).
+
+ Reports from the [Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store) in SSMS are also a highly recommended and valuable tool for identifying the most costly queries and suboptimal execution plans. Also review the [Intelligent Performance](intelligent-insights-overview.md) section of the Azure portal for the Azure SQL database, including [Query Performance Insight](query-performance-insight-use.md).
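+
+ As a sketch of querying the Query Store catalog views directly, the following returns the top 10 queries by average duration (the exact shape of such a query varies by need):
+
+ ```sql
+ SELECT TOP (10) qt.query_sql_text,
+     avg_duration_ms = AVG(rs.avg_duration) / 1000.0,
+     total_executions = SUM(rs.count_executions)
+ FROM sys.query_store_query_text AS qt
+ JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
+ JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
+ JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
+ GROUP BY qt.query_sql_text
+ ORDER BY AVG(rs.avg_duration) DESC;
+ ```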
+
+ If you have a long-running query that is blocking other users and cannot be optimized, consider moving it from an OLTP environment to a dedicated reporting system, such as a [synchronous read-only replica of the database](read-scale-out.md).
+
+1. Blocking caused by a sleeping SPID that has an uncommitted transaction
+
+ This type of blocking can often be identified by a SPID that is sleeping or awaiting a command, yet whose transaction nesting level (`@@TRANCOUNT`, `open_transaction_count` from sys.dm_exec_requests) is greater than zero. This can occur if the application experiences a query time-out, or issues a cancel without also issuing the required number of
+ ROLLBACK and/or COMMIT statements. When a SPID receives a query time-out or a cancel, it will terminate the current query and batch, but does not automatically roll back or commit the transaction. The application is responsible for this, as Azure SQL Database cannot assume that an entire transaction must be rolled back due to a single query being canceled. The query time-out or cancel will appear as an ATTENTION signal event for the SPID in the Extended Event session.
+
+ To demonstrate an uncommitted explicit transaction, run the following batch:
+
+ ```sql
+ CREATE TABLE #test (col1 INT);
+ INSERT INTO #test SELECT 1;
+ BEGIN TRAN
+ UPDATE #test SET col1 = 2 where col1 = 1;
+ ```
+
+ Then, execute this query in the same window:
+ ```sql
+ SELECT @@TRANCOUNT;
+ ROLLBACK TRAN
+ DROP TABLE #test;
+ ```
+
+ The output of the second query indicates that the transaction nesting level is one. All the locks acquired in the transaction are still held until the transaction is committed or rolled back. If applications explicitly open and commit transactions, a communication or other error could leave the session and its transaction in an open state.
+
+ Use the above script based on sys.dm_tran_active_transactions to identify currently uncommitted transactions.
+
+ **Resolutions**:
+
+ - This class of blocking problem may also be a performance problem that requires you to pursue it as such. If the query execution time can be diminished, the query time-out or cancel would not occur. It is important that the application can handle the time-out or cancel scenarios should they arise, but you may also benefit from examining the performance of the query.
+
+ - Applications must properly manage transaction nesting levels, or they may cause a blocking problem following the cancellation of the query in this manner. Consider the following:
+
+ * In the error handler of the client application, execute `IF @@TRANCOUNT > 0 ROLLBACK TRAN` following any error, even if the client application does not believe a transaction is open. Checking for open transactions is required, because a stored procedure called during the batch could have started a transaction without the client application's knowledge. Certain conditions, such as canceling the query, prevent the procedure from executing past the current statement, so even if the procedure has logic to check `IF @@ERROR <> 0` and abort the transaction, this rollback code will not be executed in such cases.
+ * If connection pooling is being used in an application that opens the connection and runs a small number of queries before releasing the connection back to the pool, such as a Web-based application, temporarily disabling connection pooling may help alleviate the problem until the client application is modified to handle the errors appropriately. By disabling connection pooling, releasing the connection will cause a physical disconnect of the Azure SQL Database connection, resulting in the server rolling back any open transactions.
+ * Use `SET XACT_ABORT ON` for the connection, or in any stored procedures that begin transactions and are not cleaning up following an error. In the event of a run-time error, this setting aborts any open transactions and returns control to the client; a short sketch follows the notes below. For more information, review [SET XACT_ABORT (Transact-SQL)](/sql/t-sql/statements/set-xact-abort-transact-sql).
+ > [!NOTE]
+ > The connection is not reset until it is reused from the connection pool, so it is possible that a user could open a transaction and then release the connection to the connection pool, but it might not be reused for several seconds, during which time the transaction would remain open. If the connection is not reused, the transaction will be aborted when the connection times out and is removed from the connection pool. Thus, it is optimal for the client application to abort transactions in their error handler or use `SET XACT_ABORT ON` to avoid this potential delay.
+
+ > [!CAUTION]
+ > Following `SET XACT_ABORT ON`, T-SQL statements following a statement that causes an error will not be executed. This could affect the intended flow of existing code.
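+
+ A minimal sketch of the `SET XACT_ABORT ON` behavior (the table name is illustrative):
+
+ ```sql
+ SET XACT_ABORT ON;
+ BEGIN TRAN;
+     -- A run-time error here (for example, divide by zero) aborts the batch
+     -- and rolls back the open transaction instead of leaving it open.
+     UPDATE dbo.titles SET price = price / 0 WHERE title_id = 'BU1032';
+ COMMIT TRAN;
+ -- After the error, @@TRANCOUNT returns 0.
+ ```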
+
+1. Blocking caused by a SPID whose corresponding client application did not fetch all result rows to completion
+
+ After sending a query to the server, all applications must immediately fetch all result rows to completion. If an application does not fetch all result rows, locks can be left on the tables, blocking other users. If you are using an application that transparently submits SQL statements to the server, the application must fetch all result rows. If it does not (and if it cannot be configured to do so), you may be unable to resolve the blocking problem. To avoid the problem, you can restrict poorly behaved applications to a reporting or a decision-support database.
+
+ > [!NOTE]
+ > See the [guidance for retry logic](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors) for applications connecting to Azure SQL Database.
+
+ **Resolution**: The application must be rewritten to fetch all rows of the result to completion. This does not rule out the use of [OFFSET and FETCH in the ORDER BY clause](/sql/t-sql/queries/select-order-by-clause-transact-sql#using-offset-and-fetch-to-limit-the-rows-returned) of a query to perform server-side paging.
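+
+ For example, server-side paging with OFFSET and FETCH might look like the following sketch (table and column names are illustrative):
+
+ ```sql
+ -- Returns rows 51-100 of the ordered result set
+ SELECT title_id, title
+ FROM dbo.titles
+ ORDER BY title_id
+ OFFSET 50 ROWS FETCH NEXT 50 ROWS ONLY;
+ ```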
+
+1. Blocking caused by a SPID that is in rollback state
+
+ A data modification query that is KILLed, or canceled outside of a user-defined transaction, will be rolled back. Rollback can also occur as a side effect of the client network session disconnecting, or when a request is selected as the deadlock victim. It can often be identified by observing the output of sys.dm_exec_requests, where the **command** column may indicate ROLLBACK and the **percent_complete** column may show progress.
+
+ Thanks to the [accelerated database recovery feature](../accelerated-database-recovery.md), which is enabled by default in Azure SQL Database, lengthy rollbacks should be rare.
+
+ **Resolution**: Wait for the SPID to finish rolling back the changes that were made.
+
+ To avoid this situation, do not perform large batch write operations or index creation or maintenance operations during busy hours on OLTP systems. If possible, perform such operations during periods of low activity.
+
+1. Blocking caused by an orphaned connection
+
+ If the client application traps errors or the client workstation is restarted, the network session to the server may not be immediately canceled under some conditions. From the Azure SQL Database perspective, the client still appears to be present, and any locks acquired may still be retained. For more information, see [How to troubleshoot orphaned connections in SQL Server](/sql/t-sql/language-elements/kill-transact-sql#remarks).
+
+ **Resolution**: If the client application has disconnected without appropriately cleaning up its resources, you can terminate the SPID by using the `KILL` command. The `KILL` command takes the SPID value as input. For example, to kill SPID 99, issue the following command:
+
+ ```sql
+ KILL 99
+ ```
+
+## See also
+
+* [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](monitor-tune-overview.md)
+* [Monitoring performance by using the Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store)
+* [Transaction Locking and Row Versioning Guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide)
+* [SET TRANSACTION ISOLATION LEVEL](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql)
+* [Quickstart: Extended events in SQL Server](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server)
+* [Intelligent Insights using AI to monitor and troubleshoot database performance](intelligent-insights-overview.md)
+
+## Learn more
+
+* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
+* [Improve Azure SQL Database Performance with Automatic Tuning](https://channel9.msdn.com/Shows/Azure-Friday/Improve-Azure-SQL-Database-Performance-with-Automatic-Tuning)
+* [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/)
+* [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md)
+* [Transient Fault Handling](/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/identify-query-performance-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/identify-query-performance-issues.md
@@ -10,7 +10,7 @@ ms.topic: troubleshooting
author: jovanpop-msft ms.author: jovanpop ms.reviewer: wiassaf, sstein
-ms.date: 03/10/2020
+ms.date: 1/14/2021
--- # Detectable types of query performance bottlenecks in Azure SQL Database
@@ -85,7 +85,7 @@ Here's an example of a partially parameterized query:
```sql SELECT * FROM t1 JOIN t2 ON t1.c1 = t2.c1
-WHERE t1.c1 = @p1 AND t2.c2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F'
+WHERE t1.c1 = @p1 AND t2.c2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F';
``` In this example, `t1.c1` takes `@p1`, but `t2.c2` continues to take the GUID as a literal. In this case, if you change the value for `c2`, the query is treated as a different query, and a new compilation will happen. To reduce compilations in this example, you would also parameterize the GUID.
@@ -110,7 +110,7 @@ WHERE
rsi.start_time >= DATEADD(hour, -2, GETUTCDATE()) AND query_parameterization_type_desc IN ('User', 'None') GROUP BY q.query_hash
-ORDER BY count (distinct p.query_id) DESC
+ORDER BY count (distinct p.query_id) DESC;
``` ### Factors that affect query plan changes
@@ -182,7 +182,7 @@ Once you have eliminated a suboptimal plan and *Waiting-related* problems that a
- **Blocking**:
- One query might hold the lock on objects in the database while others try to access the same objects. You can identify blocking queries by using [DMVs](database/monitoring-with-dmvs.md#monitoring-blocked-queries) or [Intelligent Insights](database/intelligent-insights-troubleshoot-performance.md#locking).
+ One query might hold the lock on objects in the database while others try to access the same objects. You can identify blocking queries by using [DMVs](database/monitoring-with-dmvs.md#monitoring-blocked-queries) or [Intelligent Insights](database/intelligent-insights-troubleshoot-performance.md#locking). For more information, see [Understand and resolve Azure SQL blocking problems](database/understand-resolve-blocking.md).
- **IO problems**: Queries might be waiting for the pages to be written to the data or log files. In this case, check the `INSTANCE_LOG_RATE_GOVERNOR`, `WRITE_LOG`, or `PAGEIOLATCH_*` wait statistics in the DMV. See how to use DMVs to [identify IO performance issues](database/monitoring-with-dmvs.md#identify-io-performance-issues).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-prepare-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-prepare-vm.md
@@ -64,7 +64,7 @@ On an Azure VM guest failover cluster, we recommend a single NIC per server (clu
Place both virtual machines: - In the same Azure resource group as your availability set, if you're using availability sets.-- On the same virtual network as your domain controller.
+- On the same virtual network as your domain controller or on a virtual network that has suitable connectivity to your domain controller.
- On a subnet that has enough IP address space for both virtual machines and all FCIs that you might eventually use on the cluster. - In the Azure availability set or availability zone.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -2,7 +2,7 @@
title: Frequently asked questions description: Provides answers to some of the common questions about Azure VMware Solution. ms.topic: conceptual
-ms.date: 1/4/2020
+ms.date: 1/14/2021
--- # Frequently asked questions about Azure VMware Solution
@@ -11,212 +11,212 @@ In this article, we'll answer frequently asked questions about Azure VMware Solu
## General
-#### What is Azure VMware Solution?
+### What is Azure VMware Solution?
As enterprises pursue IT modernization strategies to improve business agility, reduce costs, and accelerate innovation, hybrid cloud platforms have emerged as key enablers of customers' digital transformation. Azure VMware Solution combines VMware's Software-Defined Data Center (SDDC) software with Microsoft's Azure global cloud service ecosystem. Azure VMware Solution is managed to meet performance, availability, security, and compliance requirements. ## Azure VMware Solution Service
-#### Where is Azure VMware Solution available today?
+### Where is Azure VMware Solution available today?
The service is continuously being added to new regions, so view the [latest service availability information](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) for more details.
-#### Can workloads running in an Azure VMware Solution instance consume or integrate with Azure services?
+### Can workloads running in an Azure VMware Solution instance consume or integrate with Azure services?
All Azure services will be available to Azure VMware Solution customers. Performance and availability limitations for specific services will need to be addressed on a case-by-case basis.
-#### What guest operating systems are compatible with Azure VMware Solution?
+### What guest operating systems are compatible with Azure VMware Solution?
You can find information about guest operating system compatibility with vSphere by using the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&details=1&releases=485&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc&testConfig=16). To identify the version of vSphere running in Azure VMware Solution, see [VMware software versions](concepts-private-clouds-clusters.md#vmware-software-versions).
-#### Do I use the same tools that I use now to manage private cloud resources?
+### Do I use the same tools that I use now to manage private cloud resources?
Yes. The Azure portal is used for deployment and several management operations. vCenter and NSX Manager are used to manage vSphere and NSX-T resources.
-#### Can I manage a private cloud with my on-premises vCenter?
+### Can I manage a private cloud with my on-premises vCenter?
At launch, Azure VMware Solution won't support a single management experience across on-premises and private cloud environments. Private cloud clusters will be managed with vCenter and NSX Manager local to a private cloud.
-#### Can I use vRealize Suite running on-premises?
+### Can I use vRealize Suite running on-premises?
Specific integrations and use cases may be evaluated on a case-by-case basis.
-#### Can I migrate vSphere VMs from on-premises environments to Azure VMware Solution private clouds?
+### Can I migrate vSphere VMs from on-premises environments to Azure VMware Solution private clouds?
Yes. VM migration and vMotion can be used to move VMs to a private cloud if standard cross vCenter [vMotion requirements](https://kb.vmware.com/s/article/2106952?lang=en_US&queryTerm=2106952) are met.
-#### Is a specific version of vSphere required in on-premises environments?
+### Is a specific version of vSphere required in on-premises environments?
All cloud environments come with VMware HCX. vSphere 5.5 or later is required in on-premises environments for vMotion.
-#### What does the change control process look like?
+### What does the change control process look like?
Updates made to the service itself follow Microsoft Azure's standard change management process. Customers are responsible for any workload administration tasks and the associated change management processes.
-#### How is this different from Azure VMware Solution by CloudSimple?
+### How is this different from Azure VMware Solution by CloudSimple?
With the new Azure VMware Solution, Microsoft and VMware have a direct cloud provider partnership. The new solution is entirely designed, built, and supported by Microsoft, and endorsed by VMware. Architecturally, the solutions are consistent, with the VMware technology stack running on a dedicated Azure infrastructure.
-#### Can Azure VMware Solution VMs be managed by VMRC?
+### Can Azure VMware Solution VMs be managed by VMRC?
Yes, provided the system it's installed on can access the private cloud vCenter and uses public DNS to resolve ESXi hostnames.
-#### Are there special instructions for installing and using VMRC with Azure VMware Solution VMs?
+### Are there special instructions for installing and using VMRC with Azure VMware Solution VMs?
No. To meet the VM prerequisites, follow the [instructions provided by VMware](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-89E7E8F0-DB2B-437F-8F70-BA34C505053F.html).
-#### Is VMware HCX supported on VPNs?
+### Is VMware HCX supported on VPNs?
No, because of bandwidth and latency requirements.
-#### Can Azure Bastion be used for connecting to Azure VMware Solution VMs?
+### Can Azure Bastion be used for connecting to Azure VMware Solution VMs?
Azure Bastion is the service recommended to connect to the jump box to prevent exposing Azure VMware Solution to the internet. You can't use Azure Bastion to connect to Azure VMware Solution VMs since they aren't Azure IaaS objects.
-#### Can Azure Load Balancer internal be used for Azure VMware Solution VMs?
+### Can Azure Load Balancer internal be used for Azure VMware Solution VMs?
No. An internal Azure Load Balancer supports only Azure IaaS VMs. Azure Load Balancer doesn't support IP-based backend pools, only Azure VMs or virtual machine scale set objects, and Azure VMware Solution VMs aren't Azure objects.
-#### Can an existing ExpressRoute Gateway be used to connect to Azure VMware Solution?
+### Can an existing ExpressRoute Gateway be used to connect to Azure VMware Solution?
Yes. Use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it doesn't exceed the limit of four ExpressRoute circuits per virtual network. To access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute Gateway doesn't provide transitive routing between its connected circuits. ## Compute, network, storage, and backup
-#### Is there more than one type of host available?
+### Is there more than one type of host available?
There's only one type of host available.
-#### What are the CPU specifications in each type of host?
+### What are the CPU specifications in each type of host?
The servers have dual 18 core 2.3 GHz Intel CPUs.
-#### How much memory is in each host?
+### How much memory is in each host?
The servers have 576 GB of RAM.
-#### What is the storage capacity of each host?
+### What is the storage capacity of each host?
Each ESXi host has two vSAN diskgroups with a capacity tier of 15.2 TB and a 3.2-TB NVMe cache tier (1.6 TB in each diskgroup).
-#### How much network bandwidth is available in each ESXi host?
+### How much network bandwidth is available in each ESXi host?
Each ESXi host in Azure VMware Solution is configured with four 25-Gbps NICs, two NICs provisioned for ESXi system traffic, and two NICs provisioned for workload traffic.
-#### Is data stored on the vSAN datastores encrypted at rest?
+### Is data stored on the vSAN datastores encrypted at rest?
Yes, all vSAN data is encrypted by default using keys stored in Azure Key Vault.
-#### What independent software vendors (ISVs) backup solutions work with Azure VMware Solution?
+### What independent software vendor (ISV) backup solutions work with Azure VMware Solution?
Commvault, Veritas, and Veeam have extended their backup solutions to work with Azure VMware Solution. However, any backup solution that uses VMware VADP with the HotAdd transport mode would work right out of the box on Azure VMware Solution.
-#### What about support for ISV backup solutions?
+### What about support for ISV backup solutions?
As these backup solutions are installed and managed by customers, they can reach out to the respective ISV for support.
-#### What is the correct storage policy for the dedupe setup?
+### What is the correct storage policy for the dedupe setup?
Use the *thin_provision* storage policy for your VM template. The default is *thick_provision*.
-#### Are the SNMP infrastructure logs shared?
+### Are the SNMP infrastructure logs shared?
No. ## Hosts, clusters, and private clouds
-#### Is the underlying infrastructure shared?
+### Is the underlying infrastructure shared?
No, private cloud hosts and clusters are dedicated and securely erased before and after use.
-#### What are the minimum and maximum number of hosts per cluster?
+### What are the minimum and maximum number of hosts per cluster?
Clusters can scale between 3 and 16 ESXi hosts. Trial clusters are limited to three hosts.
-#### Can I scale my private cloud clusters?
+### Can I scale my private cloud clusters?
Yes, clusters scale between the minimum and the maximum number of ESXi hosts. Trial clusters are limited to three hosts.
-#### What are trial clusters?
+### What are trial clusters?
Trial clusters are three host clusters used for one-month evaluations of Azure VMware Solution private clouds.
-#### Can I use High-end hosts for trial clusters?
+### Can I use High-end hosts for trial clusters?
No. High-end ESXi hosts are reserved for use in production clusters. ## Azure VMware Solution and VMware software
-#### What versions of VMware software is used in private clouds?
+### What versions of VMware software are used in private clouds?
[!INCLUDE [vmware-software-versions](includes/vmware-software-versions.md)]
-#### Do private clouds use VMware NSX?
+### Do private clouds use VMware NSX?
Yes, NSX-T 2.5 is used for the software-defined networking in Azure VMware Solution private clouds.
-#### Can I use VMware NSX-V in a private cloud?
+### Can I use VMware NSX-V in a private cloud?
No. NSX-T is the only supported version of NSX.
-#### Is NSX required in on-premises environments or networks that connect to a private cloud?
+### Is NSX required in on-premises environments or networks that connect to a private cloud?
No, you aren't required to use NSX on-premises.
-#### What is the upgrade and update schedule for VMware software in a private cloud?
+### What is the upgrade and update schedule for VMware software in a private cloud?
The private cloud software bundle upgrades keep the software within one version of the most recent software bundle release from VMware. The private cloud software versions may differ from the most recent versions of the individual software components (ESXi, NSX-T, vCenter, vSAN).
-#### How often will the private cloud software stack be updated?
+### How often will the private cloud software stack be updated?
The private cloud software is upgraded on a schedule that tracks the software bundle's release from VMware. Your private cloud doesn't require downtime for upgrades. ## Connectivity
-#### What network IP address planning is required to incorporate private clouds with on-premises environments?
+### What network IP address planning is required to incorporate private clouds with on-premises environments?
A private network /22 address space is required to deploy an Azure VMware Solution private cloud. This private address space shouldn't overlap with other virtual networks in a subscription or with on-premises networks.
-#### How do I connect from on-premises environments to an Azure VMware Solution private cloud?
+### How do I connect from on-premises environments to an Azure VMware Solution private cloud?
You can connect to the service in one of two methods: - With a VM or application gateway deployed on an Azure virtual network that is peered through ExpressRoute to the private cloud. - Through ExpressRoute Global Reach from your on-premises data center to an Azure ExpressRoute circuit.
-#### How do I connect a workload VM to the internet or an Azure service endpoint?
+### How do I connect a workload VM to the internet or an Azure service endpoint?
In the Azure portal, enable internet connectivity for a private cloud. With NSX-T manager, create an NSX-T T1 router and a logical switch. You then use vCenter to deploy a VM on the network segment defined by the logical switch. That VM will have network access to the internet and Azure services.
-#### Do I need to restrict access from the internet to VMs on logical networks in a private cloud?
+### Do I need to restrict access from the internet to VMs on logical networks in a private cloud?
No. Network traffic inbound from the internet directly to private clouds isn't allowed by default. However, you're able to expose Azure VMware Solution VMs to the internet through the [Public IP](public-ip-usage.md) option in your Azure portal for your Azure VMware Solution private cloud.
-#### Do I need to restrict internet access from VMs on logical networks to the internet?
+### Do I need to restrict internet access from VMs on logical networks to the internet?
Yes. You'll need to use NSX-T manager to create a firewall to restrict VM access to the internet.
-#### Can Azure VMware Solution use Azure Virtual WAN hosted ExpressRoute Gateways?
+### Can Azure VMware Solution use Azure Virtual WAN hosted ExpressRoute Gateways?
Yes.
-#### Can transit connectivity be established between on-premises and Azure VMware Solution through Azure Virtual WAN over ExpressRoute Global Reach?
+### Can transit connectivity be established between on-premises and Azure VMware Solution through Azure Virtual WAN over ExpressRoute Global Reach?
Azure Virtual WAN doesn't provide transitive routing between two connected ExpressRoute circuits and non-virtual WAN ExpressRoute Gateway. Using ExpressRoute Global Reach allows connectivity between on-premises and Azure VMware Solution, but goes through Microsoft's global network instead of the Virtual WAN Hub.
-#### Could I use HCX through public Internet communications as a workaround for the non-supportability of HCX when using VPN S2S with vWAN for on-premises communications?
+### Could I use HCX through public Internet communications as a workaround for the non-supportability of HCX when using VPN S2S with vWAN for on-premises communications?
Currently, the only supported method for VMware HCX is through ExpressRoute.

## Accounts and privileges
-#### What accounts and privileges will I get with my new Azure VMware Solution private cloud?
+### What accounts and privileges will I get with my new Azure VMware Solution private cloud?
You're provided credentials for a cloudadmin user in vCenter and admin access on NSX-T Manager. There's also a CloudAdmin group that can be used to integrate Azure Active Directory. For more information, see [Access and Identity Concepts](concepts-identity.md).
-#### Can have administrator access to ESXi hosts?
+### Can I have administrator access to ESXi hosts?
No, administrator access to ESXi is restricted to meet the security requirements of the solution.
-#### What privileges and permissions will I have in vCenter?
+### What privileges and permissions will I have in vCenter?
You'll have CloudAdmin group privileges. For more information, see [Access and Identity Concepts](concepts-identity.md).
-#### What privileges and permissions will I have on the NSX-T manager?
+### What privileges and permissions will I have on the NSX-T manager?
You'll have full administrator privileges on NSX-T and can manage vSphere role-based access control as you would with NSX-T Data Center on-premises. For more information, see [Access and Identity Concepts](concepts-identity.md).
@@ -225,33 +225,33 @@ You'll have full administrator privileges on NSX-T and can manage vSphere role-b
## Billing and Support
-#### How will pricing be structured for Azure VMware Solution?
+### How will pricing be structured for Azure VMware Solution?
For general questions on pricing, see the Azure VMware Solution [pricing](https://azure.microsoft.com/pricing/details/azure-vmware) page.
-#### Can Azure VMware Solution be purchased through a Microsoft CSP?
+### Can Azure VMware Solution be purchased through a Microsoft CSP?
Yes, customers can deploy Azure VMware Solution within an Azure subscription managed by a CSP.
-#### Who supports Azure VMware Solution?
+### Who supports Azure VMware Solution?
Microsoft delivers support for Azure VMware Solution. You can submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For CSP-managed subscriptions, the first level of support is provided by the Solution Provider, in the same fashion as CSP support works for other Azure services.
-#### What accounts do I need to create an Azure VMware Solution private cloud?
+### What accounts do I need to create an Azure VMware Solution private cloud?
You'll need an Azure account in an Azure subscription.
-#### Are Red Hat solutions supported on Azure VMware Solution?
+### Are Red Hat solutions supported on Azure VMware Solution?
Microsoft and Red Hat share an integrated, colocated support team that provides a unified contact point for Red Hat ecosystems running on the Azure platform. Like other Azure platform services that work with Red Hat Enterprise Linux, Azure VMware Solution falls under the Cloud Access and integrated support umbrella. Red Hat Enterprise Linux is supported for running on top of Azure VMware Solution within Azure.
-#### Is VMware HCX Enterprise available, and if so, how much does it cost?
+### Is VMware HCX Enterprise available, and if so, how much does it cost?
VMware HCX Enterprise is available with Azure VMware Solution as a *Preview* service. While VMware HCX Enterprise for Azure VMware Solution is in Preview, it's a free service and subject to Preview service terms and conditions. Once the VMware HCX Enterprise service becomes generally available, you'll get a 30-day notice that billing will switch over. You can switch it off or opt out of the service at that point.
-#### How do I request a host quota increase for Azure VMware Solution?
+### How do I request a host quota increase for Azure VMware Solution?
For CSP-managed subscriptions, the customer must submit the request to the partner. The partner team then engages with Microsoft to get the quota increased for the subscription. For details, see the [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md) article.
@@ -296,22 +296,22 @@ Before you can create your Azure VMware Solution resource, you'll submit a suppo
For more ways to register the resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
-#### Are Reserved Instances available for purchasing through the Cloud Solution Provider (CSP) program?
+### Are Reserved Instances available for purchasing through the Cloud Solution Provider (CSP) program?
Yes. CSP can purchase reserved instances for their customers. For more information, see [Save costs with a reserved instance](reserved-instance.md).
-#### Does Azure VMware Solution offer multi-tenancy for hosting CSP partners?
+### Does Azure VMware Solution offer multi-tenancy for hosting CSP partners?
No. Currently, Azure VMware Solution doesn't offer multi-tenancy.
-#### Will traffic between on-premises and Azure VMware Solution over ExpressRoute incur any outbound data transfer charge in the metered data plan?
+### Will traffic between on-premises and Azure VMware Solution over ExpressRoute incur any outbound data transfer charge in the metered data plan?
Traffic in the Azure VMware Solution ExpressRoute circuit isn't metered in any way. Traffic from your ExpressRoute circuit connecting your on-premises environment to Azure is charged according to ExpressRoute pricing plans.

## Customer communication
-#### How can I receive an alert when Azure sends service health notifications to my Azure subscription?
+### How can I receive an alert when Azure sends service health notifications to my Azure subscription?
Notifications for service issues, planned maintenance, health advisories, and security advisories are published through **Service Health** in the Azure portal. You can take timely action when you set up activity log alerts for these notifications. For more information, see [Create service health alerts using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md#create-service-health-alert-using-azure-portal).
backup https://docs.microsoft.com/en-us/azure/backup/encryption-at-rest-with-cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
@@ -149,22 +149,30 @@ To assign the key:
1. Enter the **Key URI** with which you want to encrypt the data in this Recovery Services vault. You also need to specify the subscription in which the Azure Key Vault (that contains this key) is present. This key URI can be obtained from the corresponding key in your Azure Key Vault. Ensure the key URI is copied correctly. It's recommended that you use the **Copy to clipboard** button provided with the key identifier.
+ >[!NOTE]
+ >When you specify the encryption key by using the key URI, the key won't be auto-rotated. Key updates need to be done manually, by specifying the new key when required.
+
   ![Enter key URI](./media/encryption-at-rest-with-cmk/key-uri.png)

1. Browse and select the key from the Key Vault in the key picker pane.
+ >[!NOTE]
+ >When you specify the encryption key by using the key picker pane, the key is auto-rotated whenever a new version of the key is enabled.
+
   ![Select key from key vault](./media/encryption-at-rest-with-cmk/key-vault.png)

1. Select **Save**.
-1. **Tracking progress of encryption key update:** You can track the progress of the key assignment using the **Activity Log** in the Recovery Services vault. The status should soon change to **Succeeded**. Your vault will now encrypt all the data with the specified key as KEK.
+1. **Tracking progress and status of encryption key update**: You can track the progress and status of the encryption key assignment using the **Backup Jobs** view on the left navigation bar. The status should soon change to **Completed**. Your vault will now encrypt all the data with the specified key as KEK.
+
+ ![Status completed](./media/encryption-at-rest-with-cmk/status-succeeded.png)
- ![Track progress with activity log](./media/encryption-at-rest-with-cmk/activity-log.png)
+ The encryption key updates are also logged in the vault’s Activity Log.
- ![Status succeeded](./media/encryption-at-rest-with-cmk/status-succeeded.png)
+ ![Activity log](./media/encryption-at-rest-with-cmk/activity-log.png)
>[!NOTE]
-> This process remains the same when you wish to update/change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), make sure that:
+> This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), make sure that:
>
> - The Key Vault is located in the same region as the Recovery Services vault
>
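To make the two notes above concrete, here is a minimal C# sketch of the difference between a versioned and a versionless key URI; the vault and key names are hypothetical:

```csharp
// Hedged illustration: vault and key names below are hypothetical.
// A key URI copied from Key Vault carries a version segment, so the vault stays
// pinned to that exact version until you assign a new URI manually. The key
// picker stores only the key name, so newly enabled versions are picked up.
const string versionedKeyUri =
    "https://contosovault.vault.azure.net/keys/backup-key/0123456789abcdef0123456789abcdef";
const string versionlessKeyUri =
    "https://contosovault.vault.azure.net/keys/backup-key";
```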
backup https://docs.microsoft.com/en-us/azure/backup/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-baseline.md
@@ -311,7 +311,7 @@ External accounts with owner permissions should be removed from your subscriptio
**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) configured to log into and configure your Azure Backup-enabled resources.

-- [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../active-directory/authentication/howto-mfa-getstarted.md)
bastion https://docs.microsoft.com/en-us/azure/bastion/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/security-baseline.md
@@ -213,7 +213,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Depending on your requirements, you can use highly secured user workstations for performing administrative management tasks with your Azure Bastion resources in production environments. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration, including strong authentication, software and hardware baselines, and restricted logical and network access.

-- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
batch https://docs.microsoft.com/en-us/azure/batch/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-baseline.md
@@ -319,7 +319,7 @@ In addition, you may use Azure Security Center Identity and Access Management re
**Guidance**: Use PAWs (privileged access workstations) with multifactor authentication configured to log into and configure your Azure Batch resources.

-- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
cdn https://docs.microsoft.com/en-us/azure/cdn/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/security-baseline.md
@@ -58,7 +58,7 @@ All types of access controls should be aligned to your enterprise segmentation s
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks. Use Azure Active Directory (Azure AD), Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access.

-- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/security-baseline.md
@@ -351,7 +351,7 @@ Review the differences between classic subscription administrative roles.
**Guidance**: It is recommended to use a secure, Azure-managed workstation (also known as a Privileged Access Workstation) for administrative tasks that require elevated privileges.

-- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
cloud-shell https://docs.microsoft.com/en-us/azure/cloud-shell/private-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/private-vnet.md
@@ -84,9 +84,6 @@ If you already have a desired VNET that you would like to connect to, skip this
In the Azure portal, or by using the Azure CLI, Azure PowerShell, or similar tools, create a resource group and a virtual network in the new resource group. **The resource group and virtual network need to be in the same region.**
-> [!NOTE]
-> While in public preview, the resource group and virtual network must be located in either WestCentralUS or WestUS.
-
### ARM templates

Utilize the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template) for creating Cloud Shell resources in a virtual network, and the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template/storage) for creating necessary storage. Take note of your resource names, primarily your file share name.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
@@ -14,7 +14,7 @@ ms.date: 12/14/2020
# Introduction to Computer Vision spatial analysis
-Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI skills to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI skill can do things like count the number of people entering a space or measure compliance with social distancing guidelines.
+Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
## The basics of spatial analysis
@@ -25,9 +25,10 @@ Today the core operations of spatial analysis are all built on a pipeline that i
| Term | Definition |
|------|------------|
| People Detection | This component answers the question "where are the people in this image"? It finds humans in an image and passes a bounding box indicating the location of each person to the people tracking component. |
-| People Tracking | This component connects the people detections over time as the people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people to do this. It cannot track people across multiple cameras or reidentify someone who has disappeared for more than approximately one minute. People Tracking does not use any biometric markers like face recognition or gait tracking. |
-| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video the system generates an event. For example, for the PersonCrossingLine skill, a line is defined in the video. When a person crosses that line an event is generated. |
-| Event | An event is the primary output of spatial analysis. Each skill emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount skill can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
+| People Tracking | This component connects the people detections over time as the people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people to do this. It does not track people across multiple cameras. If a person exits the camera's field of view for longer than approximately a minute and then re-enters it, the system will perceive this as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
+| Face Mask Detection | This component detects the location of a person’s face in the camera’s field of view and identifies the presence of a face mask. To do so, the AI operation scans images from video; where a face is detected, the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask Detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or performing facial recognition. |
+| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video, the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line, an event is generated. |
+| Event | An event is the primary output of spatial analysis. Each operation emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
## Example use cases for spatial analysis
@@ -39,6 +40,8 @@ The following are example use cases that we had in mind as we designed and teste
**Queue Management** - Cameras pointed at checkout queues provide alerts to managers when wait time gets too long, allowing them to open more lines. Historical data on queue abandonment gives insights into consumer behavior.
+**Face Mask Compliance** – Retail stores can use cameras pointing at the store fronts to check if customers walking into the store are wearing face masks to maintain safety compliance and analyze aggregate statistics to gain insights on mask usage trends.
+
**Building Occupancy & Analysis** - An office building uses cameras focused on entrances to key spaces to measure footfall and how people use the workplace. Insights allow the building manager to adjust service and layout to better serve occupants.

**Minimum Staff Detection** - In a data center, cameras monitor activity around servers. When employees are physically fixing sensitive equipment, two people are always required to be present during the repair for security reasons. Cameras are used to verify that this guideline is followed.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
@@ -116,11 +116,14 @@ Audio files can have silence at the beginning and end of the recording. If possi
To address issues like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, it's recommended to provide word-by-word transcriptions for roughly 10 to 20 hours of audio. The transcriptions for all WAV files should be contained in a single plain-text file. Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t).
- For example:
-```
- speech01.wav speech recognition is awesome
- speech02.wav the quick brown fox jumped all over the place
- speech03.wav the lazy dog was not amused
+For example:
+
+<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
+
+```input
+speech01.wav speech recognition is awesome
+speech02.wav the quick brown fox jumped all over the place
+speech03.wav the lazy dog was not amused
```

> [!IMPORTANT]
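If you generate this file programmatically, the tab separator is the main pitfall. A minimal C# sketch, reusing the sample lines above and assuming a hypothetical output path:

```csharp
// A minimal sketch (not part of the article): writes one "<file>\t<transcript>"
// pair per line. File names and transcripts reuse the sample values above; the
// output path is hypothetical.
using System.IO;

var lines = new[]
{
    "speech01.wav\tspeech recognition is awesome",
    "speech02.wav\tthe quick brown fox jumped all over the place",
    "speech03.wav\tthe lazy dog was not amused",
};
File.WriteAllLines("transcriptions.txt", lines);
```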
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -31,7 +31,7 @@ To get pronunciation bits:
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronunciation Datasets" -> Click on Import -> Locale: the list of locales there correspond to the supported locales -->
-| Language | Locale (BCP-47) | Customizations | [Automatic language detection?](how-to-automatic-language-detection.md) |
+| Language | Locale (BCP-47) | Customizations | [Language detection](how-to-automatic-language-detection.md) |
|------------------------------------|--------|---------------------------------------------------|-------------------------------|
| Arabic (Bahrain), modern standard | `ar-BH` | Language model | Yes |
| Arabic (Egypt) | `ar-EG` | Language model | Yes |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/spx-basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
@@ -171,11 +171,13 @@ In Windows, you can play the audio file by entering `start greetings.wav`.
The easiest way to run batch text-to-speech is to create a new `.tsv` (tab-separated-value) file, and leverage the `--foreach` command in the Speech CLI. Consider the following file `text_synthesis.tsv`:
-```output
-audio.output text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
+
+```input
+audio.output text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
```

Next, you run a command to point to `text_synthesis.tsv`, perform synthesis on each `text` field, and write the result to the corresponding `audio.output` path as a `.wav` file.
@@ -192,11 +194,13 @@ This command is the equivalent of running `spx synthesize --text Sample text to
However, if you have a `.tsv` file like the following example, with column headers that **do not match** command line arguments:
-```output
-wav_path str_text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
+
+```input
+wav_path str_text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
```

You can override these field names to the correct arguments using the following syntax in the `--foreach` call. This is the same call as above.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/concepts-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-features.md
@@ -32,12 +32,12 @@ Personalizer does not prescribe, limit, or fix what features you can send for ac
## Supported feature types
-Personalizer supports features of string, numeric, and boolean types.
+Personalizer supports features of string, numeric, and boolean types. It is very likely that your application will mostly use string features, with a few exceptions.
### How choice of feature type affects Machine Learning in Personalizer
-* **Strings**: For string types, every combination of key and value creates new weights in the Personalizer machine learning model.
-* **Numeric**: You should use numerical values when the number should proportionally affect the personalization result. This is very scenario dependent. In a simplified example e.g. when personalizing a retail experience, NumberOfPetsOwned could be a feature that is numeric as you may want people with 2 or 3 pets to influence the personalization result twice or thrice as much as having 1 pet. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as strings, and the feature quality can typically be improved by using ranges. For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc.
+* **Strings**: For string types, every combination of key and value is treated as a one-hot feature (e.g. genre:"ScienceFiction" and genre:"Documentary" would create two new input features for the machine learning model).
+* **Numeric**: You should use numerical values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. In a simplified example, when personalizing a retail experience, NumberOfPetsOwned could be a numeric feature, as you may want people with 2 or 3 pets to influence the personalization result twice or thrice as much as having 1 pet. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as strings. For example, DayOfMonth would be a string with "1", "2"... "31". If you have many categories, the feature quality can typically be improved by using ranges. For example, Age could be encoded as "Age":"0-5", "Age":"6-10", and so on (see the sketch after this list).
* **Boolean** values sent with a value of "false" act as if they hadn't been sent at all. Features that are not present should be omitted from the request. Avoid sending features with a null value, because a null will be processed as existing, with a value of "null", when training the model.
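As a sketch of the range-encoding advice above (the helper name and its cut-offs are illustrative, not part of the Personalizer API):

```csharp
// Hypothetical helper (not part of the Personalizer API): a non-linear numeric
// attribute such as Age is sent as a categorical string bucket.
static string EncodeAgeRange(int age) =>
    age <= 5 ? "0-5" :
    age <= 10 ? "6-10" :
    age <= 17 ? "11-17" : "18+";

var userFeature = new { Age = EncodeAgeRange(34) }; // serializes as { "Age": "18+" }
```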
@@ -75,12 +75,14 @@ JSON objects can include nested JSON objects and simple property/values. An arra
{ "user": { "profileType":"AnonymousUser",
- "latlong": [47.6, -122.1]
+ "latlong": ["47.6", "-122.1"]
  }
},
{
- "state": {
- "timeOfDay": "noon",
+ "environment": {
+ "dayOfMonth": "28",
+ "monthOfYear": "8",
+ "timeOfDay": "13:00",
"weather": "sunny" } },
@@ -89,6 +91,13 @@ JSON objects can include nested JSON objects and simple property/values. An arra
"mobile":true, "Windows":true }
+ },
+ {
+ "userActivity" : {
+ "itemsInCart": 3,
+ "cartValue": 250,
+ "appliedCoupon": true
+ }
  }
 ]
}
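A hedged C# sketch of building the same context feature list as anonymous objects before serialization; the `device` wrapper name is an assumption, since that object's name is truncated in the excerpt:

```csharp
// Property names mirror the JSON above; only the shape matters when the list is
// serialized for a Rank request. The "device" wrapper around the mobile/Windows
// pair is an assumption, since that object's name is truncated in the excerpt.
var contextFeatures = new object[]
{
    new { user = new { profileType = "AnonymousUser", latlong = new[] { "47.6", "-122.1" } } },
    new { environment = new { dayOfMonth = "28", monthOfYear = "8", timeOfDay = "13:00", weather = "sunny" } },
    new { device = new { mobile = true, Windows = true } },
    new { userActivity = new { itemsInCart = 3, cartValue = 250, appliedCoupon = true } },
};
```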
@@ -108,6 +117,8 @@ A good feature set helps Personalizer learn how to predict the action that will
Consider sending features to the Personalizer Rank API that follow these recommendations:
+* Use categorical and string types for features that are not a magnitude.
+
* There are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.

* There are enough features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
@@ -317,4 +328,4 @@ JSON objects can include nested JSON objects and simple property/values. An arra
## Next steps
-[Reinforcement learning](concepts-reinforcement-learning.md)
\ No newline at end of file
+[Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-baseline.md
@@ -360,7 +360,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
@@ -33,31 +33,31 @@ Welcome to what's new in the Cognitive Services docs from December 1, 2020 throu
### Updated articles

-- [Use an insights token to get insights for an image](/azure/cognitive-services/bing-visual-search/use-insights-token.md)
+- [Use an insights token to get insights for an image](/azure/cognitive-services/bing-visual-search/use-insights-token)
## Containers

### Updated articles

-- [Deploy and run container on Azure Container Instance](/azure/cognitive-services/containers/azure-container-instance-recipe.md)
+- [Deploy and run container on Azure Container Instance](/azure/cognitive-services/containers/azure-container-instance-recipe)
## Form Recognizer

### Updated articles

-- [Form Recognizer landing page](/azure/cognitive-services/form-recognizer/index.yml)
-- [Quickstart: Use the Form Recognizer client library](/azure/cognitive-services/form-recognizer/quickstarts/client-library.md)
+- [Form Recognizer landing page](/azure/cognitive-services/form-recognizer/)
+- [Quickstart: Use the Form Recognizer client library](/azure/cognitive-services/form-recognizer/quickstarts/client-library)
## Text Analytics

### Updated articles

-- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support.md)
-- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md)
-- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md)
-- [Example: How to extract key phrases using Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-keyword-extraction.md)
-- [Text Analytics API Documentation - Tutorials, API Reference - Azure Cognitive Services | Microsoft Docs](/azure/cognitive-services/text-analytics/index.yml)
-- [Quickstart: Use the Text Analytics client library and REST API](/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md)
+- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support)
+- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api)
+- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking)
+- [Example: How to extract key phrases using Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-keyword-extraction)
+- [Text Analytics API Documentation - Tutorials, API Reference - Azure Cognitive Services | Microsoft Docs](/azure/cognitive-services/text-analytics/)
+- [Quickstart: Use the Text Analytics client library and REST API](/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api)
## Community contributors
container-instances https://docs.microsoft.com/en-us/azure/container-instances/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/security-baseline.md
@@ -383,7 +383,7 @@ If you use an Azure container registry with Azure Container Instances, create pr
**Guidance**: Use PAWs (privileged access workstations) with MFA configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
container-registry https://docs.microsoft.com/en-us/azure/container-registry/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-baseline.md
@@ -364,7 +364,7 @@ How to monitor identity and access within Azure Security Center: https://docs.m
**Guidance**: Use PAWs (privileged access workstations) with MFA configured to log into and configure Azure resources.
-Learn about Privileged Access Workstations: https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+Learn about Privileged Access Workstations: https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure: https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
@@ -74,16 +74,15 @@ You can turn on analytical store on an Azure Cosmos container while creating the
The following code creates a container with analytical store by using the .NET SDK. Set the analytical TTL property to the required value. For the list of allowed values, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article:

```csharp
-// Create a container with a partition key, and analytical TTL configured to -1 (infinite retention)
-string containerId = “myContainerName”;
-int analyticalTtlInSec = -1;
-ContainerProperties cpInput = new ContainerProperties()
- {
-Id = containerId,
-PartitionKeyPath = "/id",
-AnalyticalStorageTimeToLiveInSeconds = analyticalTtlInSec,
+// Create a container with a partition key, and analytical TTL configured to -1 (infinite retention)
+ContainerProperties properties = new ContainerProperties()
+{
+ Id = "myContainerId",
+ PartitionKeyPath = "/id",
+ AnalyticalStoreTimeToLiveInSeconds = -1,
};
- await this. cosmosClient.GetDatabase("myDatabase").CreateContainerAsync(cpInput);
+CosmosClient cosmosClient = new CosmosClient("myConnectionString");
+await cosmosClient.GetDatabase("myDatabase").CreateContainerAsync(properties);
```

### Java V4 SDK
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/postgres-migrate-cosmos-db-kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/postgres-migrate-cosmos-db-kafka.md
@@ -102,7 +102,7 @@ bin/kafka-server-start.sh config/server.properties
### Setup connectors
-Install the Debezium PostgreSQL and DataStax Apache Kafka connector. Download the Debezium PostgreSQL connector plug-in archive. For example, to download version 1.3.0 of the connector (latest at the time of writing), use [this link](https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.0.Final/debezium-connector-postgres-1.2.0.Final-plugin.tar.gz). Download the DataStax Apache Kafka connector from [this link](https://downloads.datastax.com/#akc).
+Install the Debezium PostgreSQL and DataStax Apache Kafka connector. Download the Debezium PostgreSQL connector plug-in archive. For example, to download version 1.3.0 of the connector (latest at the time of writing), use [this link](https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.0.Final/debezium-connector-postgres-1.3.0.Final-plugin.tar.gz). Download the DataStax Apache Kafka connector from [this link](https://downloads.datastax.com/#akc).
Unzip both the connector archives and copy the JAR files to the [Kafka Connect plugin.path](https://kafka.apache.org/documentation/#connectconfigs).
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-baseline.md
@@ -357,7 +357,7 @@ How to monitor identity and access within Azure Security Center: https://docs.mi
**Guidance**: Use Privileged Access Workstations (PAW) with Multi-Factor Authentication configured to log into and configure Azure resources.
-Learn about Privileged Access Workstations: https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+Learn about Privileged Access Workstations: https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure: https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-sdk-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-python.md
@@ -317,34 +317,34 @@ Version 4.0.0b1 is the first preview of our efforts to create a user-friendly cl
Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are only added to the current SDK; as such, it's recommended that you always upgrade to the latest SDK version as early as possible.

> [!WARNING]
-> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes, add new features, and provide support to versions 1.x or 2.x of the Azure Cosmos DB Python SDK for SQL API. If you prefer not to upgrade, requests sent from version 1.x and 2.x of the SDK will continue to be served by the Azure Cosmos DB service.
+> After 31 August 2022, Azure Cosmos DB will no longer make bug fixes or provide support to versions 1.x and 2.x of the Azure Cosmos DB Python SDK for SQL API. If you prefer not to upgrade, requests sent from version 1.x and 2.x of the SDK will continue to be served by the Azure Cosmos DB service.
| Version | Release Date | Retirement Date |
| --- | --- | --- |
| [4.0.0](#400) |May 20, 2020 |--- |
| [3.0.2](#302) |Nov 15, 2018 |--- |
| [3.0.1](#301) |Oct 04, 2018 |--- |
-| [2.3.3](#233) |Sept 08, 2018 |August 30, 2020 |
-| [2.3.2](#232) |May 08, 2018 |August 30, 2020 |
-| [2.3.1](#231) |December 21, 2017 |August 30, 2020 |
-| [2.3.0](#230) |November 10, 2017 |August 30, 2020 |
-| [2.2.1](#221) |Sep 29, 2017 |August 30, 2020 |
-| [2.2.0](#220) |May 10, 2017 |August 30, 2020 |
-| [2.1.0](#210) |May 01, 2017 |August 30, 2020 |
-| [2.0.1](#201) |October 30, 2016 |August 30, 2020 |
-| [2.0.0](#200) |September 29, 2016 |August 30, 2020 |
-| [1.9.0](#190) |July 07, 2016 |August 30, 2020 |
-| [1.8.0](#180) |June 14, 2016 |August 30, 2020 |
-| [1.7.0](#170) |April 26, 2016 |August 30, 2020 |
-| [1.6.1](#161) |April 08, 2016 |August 30, 2020 |
-| [1.6.0](#160) |March 29, 2016 |August 30, 2020 |
-| [1.5.0](#150) |January 03, 2016 |August 30, 2020 |
-| [1.4.2](#142) |October 06, 2015 |August 30, 2020 |
-| 1.4.1 |October 06, 2015 |August 30, 2020 |
-| [1.2.0](#120) |August 06, 2015 |August 30, 2020 |
-| [1.1.0](#110) |July 09, 2015 |August 30, 2020 |
-| [1.0.1](#101) |May 25, 2015 |August 30, 2020 |
-| 1.0.0 |April 07, 2015 |August 30, 2020 |
+| [2.3.3](#233) |Sept 08, 2018 |August 31, 2022 |
+| [2.3.2](#232) |May 08, 2018 |August 31, 2022 |
+| [2.3.1](#231) |December 21, 2017 |August 31, 2022 |
+| [2.3.0](#230) |November 10, 2017 |August 31, 2022 |
+| [2.2.1](#221) |Sep 29, 2017 |August 31, 2022 |
+| [2.2.0](#220) |May 10, 2017 |August 31, 2022 |
+| [2.1.0](#210) |May 01, 2017 |August 31, 2022 |
+| [2.0.1](#201) |October 30, 2016 |August 31, 2022 |
+| [2.0.0](#200) |September 29, 2016 |August 31, 2022 |
+| [1.9.0](#190) |July 07, 2016 |August 31, 2022 |
+| [1.8.0](#180) |June 14, 2016 |August 31, 2022 |
+| [1.7.0](#170) |April 26, 2016 |August 31, 2022 |
+| [1.6.1](#161) |April 08, 2016 |August 31, 2022 |
+| [1.6.0](#160) |March 29, 2016 |August 31, 2022 |
+| [1.5.0](#150) |January 03, 2016 |August 31, 2022 |
+| [1.4.2](#142) |October 06, 2015 |August 31, 2022 |
+| 1.4.1 |October 06, 2015 |August 31, 2022 |
+| [1.2.0](#120) |August 06, 2015 |August 31, 2022 |
+| [1.1.0](#110) |July 09, 2015 |August 31, 2022 |
+| [1.0.1](#101) |May 25, 2015 |August 31, 2022 |
+| 1.0.0 |April 07, 2015 |August 31, 2022 |
| 0.9.4-prelease |January 14, 2015 |February 29, 2016 |
| 0.9.3-prelease |December 09, 2014 |February 29, 2016 |
| 0.9.2-prelease |November 25, 2014 |February 29, 2016 |
@@ -357,4 +357,4 @@ Microsoft provides notification at least **12 months** in advance of retiring an
## Next steps
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
\ No newline at end of file
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link-power-bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-power-bi.md
@@ -36,7 +36,7 @@ Make sure to create the following resources before you start:
## Create a database and views
-From the Synapse workspace go the **Develop** tab, select the **+** icon, and select **SQL Script**.
+Creating views in the master or default databases is not recommended or supported, so you need to start this step by creating a database. From the Synapse workspace, go to the **Develop** tab, select the **+** icon, and select **SQL Script**.
:::image type="content" source="./media/synapse-link-power-bi/add-sql-script.png" alt-text="Add a SQL script to the Synapse Analytics workspace":::
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-simple-storage-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.custom: seo-lt-2019
-ms.date: 12/08/2020
+ms.date: 01/14/2021
---

# Copy data from Amazon Simple Storage Service by using Azure Data Factory
@@ -66,6 +66,7 @@ The following properties are supported for an Amazon S3 linked service:
| secretAccessKey | The secret access key itself. Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| sessionToken | Applicable when using [temporary security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) authentication. Learn how to [request temporary security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_getsessiontoken) from AWS.<br>Note that AWS temporary credentials expire within 15 minutes to 36 hours, based on settings. Make sure your credential is valid when the activity executes, especially for operationalized workloads - for example, you can refresh it periodically and store it in Azure Key Vault.<br>Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |No |
| serviceUrl | Specify the custom S3 endpoint if you're copying data from an S3-compatible storage provider other than the official Amazon S3 service. For example, to copy data from Google Cloud Storage, specify `https://storage.googleapis.com`. | No |
+| forcePathStyle | Indicates whether to use S3 [path-style access](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#path-style-access) instead of virtual hosted-style access. Allowed values are: **false** (default), **true**.<br>If you're connecting to an S3-compatible storage provider other than the official Amazon S3 service, and that data store requires path-style access (for example, [Oracle Cloud Storage](https://docs.oracle.com/iaas/Content/Object/Tasks/s3compatibleapi.htm)), set this property to true. Check each data store’s documentation to see whether path-style access is needed. |No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. |No |

>[!TIP]
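For reference, a hedged sketch of the two request styles that `forcePathStyle` selects between; bucket and object names are hypothetical:

```csharp
// Illustrative only: the two S3 addressing styles that forcePathStyle chooses
// between. Bucket and object names are hypothetical.
const string virtualHostedStyle = "https://mybucket.s3.amazonaws.com/myobject"; // forcePathStyle: false (default)
const string pathStyle = "https://s3.amazonaws.com/mybucket/myobject";          // forcePathStyle: true
```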
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
@@ -152,21 +152,21 @@ Here are details of the application's actions and arguments:
|ACTION|args|Description|
|------|----|-----------|
-|-rn,<br/>-RegisterNewNode|"`<AuthenticationKey>`" ["`<NodeName>`"]|Register a self-hosted integration runtime node with the specified authentication key and node name.|
-|-era,<br/>-EnableRemoteAccess|"`<port>`" ["`<thumbprint>`"]|Enable remote access on the current node to set up a high-availability cluster. Or enable setting credentials directly against the self-hosted IR without going through Azure Data Factory. You do the latter by using the **New-AzDataFactoryV2LinkedServiceEncryptedCredential** cmdlet from a remote machine in the same network.|
-|-erac,<br/>-EnableRemoteAccessInContainer|"`<port>`" ["`<thumbprint>`"]|Enable remote access to the current node when the node runs in a container.|
-|-dra,<br/>-DisableRemoteAccess||Disable remote access to the current node. Remote access is needed for multinode setup. The **New-AzDataFactoryV2LinkedServiceEncryptedCredential** PowerShell cmdlet still works even when remote access is disabled. This behavior is true as long as the cmdlet is executed on the same machine as the self-hosted IR node.|
-|-k,<br/>-Key|"`<AuthenticationKey>`"|Overwrite or update the previous authentication key. Be careful with this action. Your previous self-hosted IR node can go offline if the key is of a new integration runtime.|
-|-gbf,<br/>-GenerateBackupFile|"`<filePath>`" "`<password>`"|Generate a backup file for the current node. The backup file includes the node key and data-store credentials.|
-|-ibf,<br/>-ImportBackupFile|"`<filePath>`" "`<password>`"|Restore the node from a backup file.|
-|-r,<br/>-Restart||Restart the self-hosted integration runtime host service.|
-|-s,<br/>-Start||Start the self-hosted integration runtime host service.|
-|-t,<br/>-Stop||Stop the self-hosted integration runtime host service.|
-|-sus,<br/>-StartUpgradeService||Start the self-hosted integration runtime upgrade service.|
-|-tus,<br/>-StopUpgradeService||Stop the self-hosted integration runtime upgrade service.|
-|-tonau,<br/>-TurnOnAutoUpdate||Turn on the self-hosted integration runtime auto-update.|
-|-toffau,<br/>-TurnOffAutoUpdate||Turn off the self-hosted integration runtime auto-update.|
-|-ssa,<br/>-SwitchServiceAccount|"`<domain\user>`" ["`<password>`"]|Set DIAHostService to run as a new account. Use the empty password "" for system accounts and virtual accounts.|
+|`-rn`,<br/>`-RegisterNewNode`|"`<AuthenticationKey>`" ["`<NodeName>`"]|Register a self-hosted integration runtime node with the specified authentication key and node name.|
+|`-era`,<br/>`-EnableRemoteAccess`|"`<port>`" ["`<thumbprint>`"]|Enable remote access on the current node to set up a high-availability cluster. Or enable setting credentials directly against the self-hosted IR without going through Azure Data Factory. You do the latter by using the **New-AzDataFactoryV2LinkedServiceEncryptedCredential** cmdlet from a remote machine in the same network.|
+|`-erac`,<br/>`-EnableRemoteAccessInContainer`|"`<port>`" ["`<thumbprint>`"]|Enable remote access to the current node when the node runs in a container.|
+|`-dra`,<br/>`-DisableRemoteAccess`||Disable remote access to the current node. Remote access is needed for multinode setup. The **New-AzDataFactoryV2LinkedServiceEncryptedCredential** PowerShell cmdlet still works even when remote access is disabled. This behavior is true as long as the cmdlet is executed on the same machine as the self-hosted IR node.|
+|`-k`,<br/>`-Key`|"`<AuthenticationKey>`"|Overwrite or update the previous authentication key. Be careful with this action. Your previous self-hosted IR node can go offline if the key is of a new integration runtime.|
+|`-gbf`,<br/>`-GenerateBackupFile`|"`<filePath>`" "`<password>`"|Generate a backup file for the current node. The backup file includes the node key and data-store credentials.|
+|`-ibf`,<br/>`-ImportBackupFile`|"`<filePath>`" "`<password>`"|Restore the node from a backup file.|
+|`-r`,<br/>`-Restart`||Restart the self-hosted integration runtime host service.|
+|`-s`,<br/>`-Start`||Start the self-hosted integration runtime host service.|
+|`-t`,<br/>`-Stop`||Stop the self-hosted integration runtime host service.|
+|`-sus`,<br/>`-StartUpgradeService`||Start the self-hosted integration runtime upgrade service.|
+|`-tus`,<br/>`-StopUpgradeService`||Stop the self-hosted integration runtime upgrade service.|
+|`-tonau`,<br/>`-TurnOnAutoUpdate`||Turn on the self-hosted integration runtime auto-update.|
+|`-toffau`,<br/>`-TurnOffAutoUpdate`||Turn off the self-hosted integration runtime auto-update.|
+|`-ssa`,<br/>`-SwitchServiceAccount`|"`<domain\user>`" ["`<password>`"]|Set DIAHostService to run as a new account. Use the empty password "" for system accounts and virtual accounts.|
## Install and register a self-hosted IR from Microsoft Download Center
@@ -200,9 +200,9 @@ The default log on service account of Self-hosted integration runtime is **NT SE
Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**
-![Service account permission](media/create-self-hosted-integration-runtime/shir-service-account-permission.png)
+![Screenshot of Local Security Policy - User Rights Assignment](media/create-self-hosted-integration-runtime/shir-service-account-permission.png)
-![Service account permission](media/create-self-hosted-integration-runtime/shir-service-account-permission-2.png)
+![Screenshot of Log on as a service user rights assignment](media/create-self-hosted-integration-runtime/shir-service-account-permission-2.png)
## Notification area icons and notifications
data-factory https://docs.microsoft.com/en-us/azure/data-factory/pipeline-trigger-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
@@ -13,35 +13,29 @@ ms.reviewer: susabat
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-A pipeline run in Azure Data Factory defines an instance of a pipeline execution. For example, you have a pipeline that executes at 8:00 AM, 9:00 AM, and 10:00 AM. In this case, there are three separate runs of the pipeline or pipeline runs. Each pipeline run has a unique pipeline run ID. A run ID is a GUID (Globally Unique Identifier) that defines that particular pipeline run.
+A pipeline run in Azure Data Factory defines an instance of a pipeline execution. For example, let's say you have a pipeline that runs at 8:00 AM, 9:00 AM, and 10:00 AM. In this case, there are three separate pipeline runs. Each pipeline run has a unique pipeline run ID. A run ID is a globally unique identifier (GUID) that defines that particular pipeline run.
-Pipeline runs are typically instantiated by passing arguments to parameters that you define in the pipeline. You can execute a pipeline either manually or by using a trigger. Refer to [Pipeline execution and triggers in Azure Data Factory](concepts-pipeline-execution-triggers.md) for details.
+Pipeline runs are typically instantiated by passing arguments to parameters that you define in the pipeline. You can run a pipeline either manually or by using a trigger. See [Pipeline execution and triggers in Azure Data Factory](concepts-pipeline-execution-triggers.md) for details.
## Common issues, causes, and solutions
-### Pipeline with Azure Function throws error with private end-point connectivity
+### An Azure Functions app pipeline throws an error with private endpoint connectivity
-#### Issue
-For some context, you have Data Factory and Azure Function App running on a private endpoint. You are trying to get a pipeline that interacts with the Azure Function App to work. You have tried three different methods, but one returns error `Bad Request`, the other two methods return `103 Error Forbidden`.
+You have Data Factory and an Azure function app running on a private endpoint. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "403 Forbidden."
-#### Cause
-Data Factory currently does not support a private endpoint connector for Azure Function App. And this should be the reason why Azure Function App is rejecting the calls since it would be configured to allow only connections from a Private Link.
+**Cause**: Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.
-#### Resolution
-You can create a Private Endpoint of type **PrivateLinkService** and provide your function app's DNS, and the connection should work.
+**Resolution**: Create a **PrivateLinkService** endpoint and provide your function app's DNS.
-### Pipeline run is killed but the monitor still shows progress status
+### A pipeline run is canceled but the monitor still shows progress status
-#### Issue
-Often when you kill a pipeline run, pipeline monitoring still shows the progress status. This happens because of the cache issue in browser and you are not having right filters for monitoring.
+When you cancel a pipeline run, pipeline monitoring often still shows the progress status. This happens because of a browser cache issue. You also might not have the correct monitoring filters.
-#### Resolution
-Refresh the browser and apply right filters for monitoring.
+**Resolution**: Refresh the browser and apply the correct monitoring filters.
-### Copy Pipeline failure ΓÇô found more columns than expected column count (DelimitedTextMoreColumnsThanDefined)
-
-#### Issue
-If the files under a particular folder you are copying contains files with different schemas like variable number of columns, different delimiters, quote char settings, or some data issue, the Data Factory pipeline will end up running in this error:
+### You see a "DelimitedTextMoreColumnsThanDefined" error when copying data in a pipeline
+
+If a folder you're copying contains files with different schemas, such as a variable number of columns, different delimiters, quote character settings, or a data issue, the Data Factory pipeline might throw this error:
` Operation on target Copy_sks failed: Failure happened on 'Sink' side.
@@ -51,51 +45,41 @@ Message=Error found when processing 'Csv/Tsv Format Text' source '0_2020_11_09_1
Source=Microsoft.DataTransfer.Common,' `
-#### Resolution
-Select "Binary Copy" option while creating the Copy Data activity. In this way, for bulk copy or migrating your data from one Data Lake to another, with **binary** option, Data Factory won't open the files to read schema, but just treat every file as binary and copy them to the other location.
+**Resolution**: Select the **Binary Copy** option while creating the Copy activity. This way, for bulk copies or migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, Data Factory will treat each file as binary and copy it to the other location.
-### Pipeline run fails when capacity limit of integration runtime is reached
+### A pipeline run fails when you reach the capacity limit of the integration runtime
-#### Issue
Error message: ` Type=Microsoft.DataTransfer.Execution.Core.ExecutionException,Message=There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under Integration Runtime 'AutoResolveIntegrationRuntime'. `
-The error indicates the limitation of per integration runtime, which is currently 50. Refer to [Limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#version-2) for details.
+**Cause**: You've reached the integration runtime's capacity limit, which is currently 50 concurrent data flow executions per integration runtime. You might be running a large number of data flows on the same integration runtime at the same time. See [Azure subscription and service limits, quotas, and constraints](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#version-2) for details.
-If you execute large amount of data flow using the same integration runtime at the same time, it might cause this kind of error.
-
-#### Resolution
-- Separate these pipelines for different trigger time to execute.-- Create a new integration runtime, and split these pipelines across multiple integration runtimes.
+**Resolution**:
+
+- Run your pipelines at different trigger times.
+- Create a new integration runtime, and split your pipelines across multiple integration runtimes.
-### How to monitor pipeline failures on regular interval
+### You have activity-level errors and failures in pipelines
-#### Issue
-There is often a need to monitor Data Factory pipelines in intervals, say 5 minutes. You can query and filter the pipeline runs from a data factory using the endpoint.
+Azure Data Factory orchestration allows conditional logic and enables users to take different paths based upon the outcome of a previous activity. It allows four conditional paths: **Upon Success** (default pass), **Upon Failure**, **Upon Completion**, and **Upon Skip**.
-#### Recommendation
-1. Set up an Azure Logic App to query all of the failed pipelines every 5 minutes.
-2. Then, you can report incidents to our ticketing system as per [QueryByFactory](https://docs.microsoft.com/rest/api/datafactory/pipelineruns/querybyfactory).
+Azure Data Factory evaluates the outcome of all leaf-level activities. A pipeline run is successful only if all leaf activities succeed. If a leaf activity was skipped, its parent activity is evaluated instead.
-#### Reference
-- [External-Send Notifications from Data Factory](https://www.mssqltips.com/sqlservertip/5962/send-notifications-from-an-azure-data-factory-pipeline--part-2/)
+**Resolution**
-### How to handle activity-level errors and failures in pipelines
+1. Implement activity-level checks by following [How to handle pipeline failures and errors](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).
+1. Use Azure Logic Apps to monitor pipelines at regular intervals by following [Query By Factory](https://docs.microsoft.com/rest/api/datafactory/pipelineruns/querybyfactory).
-#### Issue
-Azure Data Factory orchestration allows conditional logic and enables user to take different paths based upon outcomes of a previous activity. It allows four conditional paths: "Upon Success (default pass)", "Upon Failure", "Upon Completion", and "Upon Skip". Using different paths is allowed.
+## Monitor pipeline failures at regular intervals
-Azure Data Factory defines pipeline run success and failure as follows:
+You might need to monitor failed Data Factory pipelines at regular intervals, say every 5 minutes. You can query and filter the pipeline runs from a data factory by using the Query By Factory REST endpoint.
-- Evaluate outcome for all leaf level activities. If a leaf activity was skipped, we evaluate its parent activity instead.-- Pipeline result is successful if and only if all leaves succeed.
+Set up an Azure logic app to query all of the failed pipelines every 5 minutes, as described in [Query By Factory](https://docs.microsoft.com/rest/api/datafactory/pipelineruns/querybyfactory). Then, you can report incidents to your ticketing system.
-#### Recommendation
-- Implement activity level checks following [How to handle pipeline failures and errors](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).-- Use Azure Logic App to monitor pipelines in regular intervals following [Query By DataFactory]( https://docs.microsoft.com/rest/api/datafactory/pipelineruns/querybyfactory).
+For more information, go to [Send Notifications from Data Factory, Part 2](https://www.mssqltips.com/sqlservertip/5962/send-notifications-from-an-azure-data-factory-pipeline--part-2/).
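+
+As a hedged sketch, the same Query By Factory call is also available through the Azure CLI `datafactory` extension (the names and time window below are placeholders):
+
+```azurecli
+# List pipeline runs that failed in the given window.
+az datafactory pipeline-run query-by-factory --factory-name MyFactory --resource-group MyResourceGroup --last-updated-after "2021-01-14T00:00:00Z" --last-updated-before "2021-01-15T00:00:00Z" --filters operand="Status" operator="Equals" values="Failed"
+```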
## Next steps
@@ -105,4 +89,4 @@ For more troubleshooting help, try these resources:
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
\ No newline at end of file
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-baseline.md
@@ -406,7 +406,7 @@ If you are running your Integration Runtime on an Azure Virtual Machine, the adm
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/self-hosted-integration-runtime-auto-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-auto-update.md
@@ -1,8 +1,6 @@
--- title: Self-hosted integration runtime auto-update and expire notification description: Learn about self-hosted integration runtime auto-update and expire notification
-services: data-factory
-documentationcenter: ''
ms.service: data-factory ms.workload: data-services ms.topic: conceptual
@@ -28,7 +26,7 @@ The most convenient way is to enable auto-update when you create or edit self-ho
You can check the last update datetime in your self-hosted integration runtime client.
-![Enable auto-update](media/create-self-hosted-integration-runtime/shir-auto-update-2.png)
+![Screenshot of checking the update time](media/create-self-hosted-integration-runtime/shir-auto-update-2.png)
> [!NOTE] > To ensure the stability of self-hosted integration runtime, although we release two versions, we will only update it automatically once every month. So sometimes you will find that the auto-updated version is the previous version of the actual latest version. If you want to get the latest version, you can go to [download center](https://www.microsoft.com/download/details.aspx?id=39717).
@@ -40,4 +38,4 @@ If you want to manually control which version of self-hosted integration runtime
- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md). -- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).\ No newline at end of file
+- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-powershell.md
@@ -1,6 +1,6 @@
--- title: Incrementally copy a table using PowerShell
-description: In this tutorial, you create an Azure data factory pipeline that incrementally copies data from an Azure SQL database to Azure Blob storage.'
+description: In this tutorial, you create an Azure Data Factory pipeline that incrementally copies data from an Azure SQL database to Azure Blob storage.
services: data-factory author: dearandyxu ms.author: yexu
@@ -17,7 +17,7 @@ ms.date: 01/22/2018
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this tutorial, you create an Azure data factory with a pipeline that loads delta data from a table in Azure SQL Database to Azure Blob storage.
+In this tutorial, you use Azure Data Factory to create a pipeline that loads delta data from a table in Azure SQL Database to Azure Blob storage.
You perform the following steps in this tutorial:
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-baseline.md
@@ -300,7 +300,7 @@ You can also enable a Just-In-Time access by using Azure AD Privileged Identity
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges.
-* [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+* [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
data-share https://docs.microsoft.com/en-us/azure/data-share/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/security-baseline.md
@@ -138,7 +138,7 @@ Enable Azure AD MFA and follow Azure Security Center identity and access recomme
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/security-baseline.md
@@ -240,7 +240,7 @@ Additional information is available at the referenced link.
The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](/azure/active-directory/devices/concept-azure-managed-workstation)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](/azure/active-directory/devices/howto-azure-managed-workstation)
databox https://docs.microsoft.com/en-us/azure/databox/data-box-deploy-ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
@@ -2,20 +2,20 @@
title: Tutorial to order Azure Data Box | Microsoft Docs description: In this tutorial, learn about Azure Data Box, a hybrid solution that allows you to import on-premises data into Azure, and how to order Azure Data Box. services: databox
-author: alkohli
+author: v-dalc
ms.service: databox ms.subservice: pod ms.topic: tutorial
-ms.date: 11/19/2020
+ms.date: 01/13/2021
ms.author: alkohli #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure. --- # Tutorial: Order Azure Data Box
-Azure Data Box is a hybrid solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to a Microsoft-supplied 80 TB (usable capacity) storage device, and then ship the device back. This data is then uploaded to Azure.
+Azure Data Box is a hybrid solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to a Microsoft-supplied 80-TB (usable capacity) storage device, and then ship the device back. This data is then uploaded to Azure.
-This tutorial describes how you can order an Azure Data Box. In this tutorial, you learn about:
+This tutorial describes how you can order an Azure Data Box. In this tutorial, you learn about:
> [!div class="checklist"] >
@@ -241,7 +241,7 @@ Do the following steps in the Azure portal to order a device.
|Resource group | The resource group you selected previously. | |Import order name | Provide a friendly name to track the order. <br> The name can have between 3 and 24 characters that can be letters, numbers, and hyphens. <br> The name must start and end with a letter or a number. |
- ![Data Box import Order wizard, Basics screen, with correct info filled in](media/data-box-deploy-ordered/select-data-box-import-06.png)<!--Generic subscription. Cut note. Box command.-->
+ ![Data Box import Order wizard, Basics screen, with correct info filled in](media/data-box-deploy-ordered/select-data-box-import-06.png)
7. On the **Data destination** screen, select the **Data destination** - either storage accounts or managed disks.
@@ -249,7 +249,11 @@ Do the following steps in the Azure portal to order a device.
![Data Box import Order wizard, Data destination screen, with storage accounts selected](media/data-box-deploy-ordered/select-data-box-import-07.png)
- Based on the specified Azure region, select one or more storage accounts from the filtered list of an existing storage account. Data Box can be linked with up to 10 storage accounts. You can also create a new **General-purpose v1**, **General-purpose v2**, or **Blob storage account**.
+ Based on the specified Azure region, select one or more storage accounts from the filtered list of existing storage accounts. Data Box can be linked with up to 10 storage accounts. You can also create a new **General-purpose v1**, **General-purpose v2**, or **Blob storage account**.
+
+ > [!NOTE]
+ > - If you select Azure Premium FileStorage accounts, the provisioned quota on the storage account share will increase to the size of the data being copied to the file shares. After the quota is increased, it isn't adjusted again, even if the Data Box can't copy your data for some reason.
+ > - This quota is used for billing. After your data is uploaded to the datacenter, you should adjust the quota to meet your needs. For more information, see [Understanding billing](../../articles/storage/files/understanding-billing.md).
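+
+ As a hedged example, after the upload you could adjust the share quota from the Azure CLI (the resource group, account, and share names are placeholders):
+
+ ```azurecli
+ # Set the provisioned quota (in GiB) on a premium file share.
+ az storage share-rm update --resource-group MyResourceGroup --storage-account mypremiumaccount --name myshare --quota 1024
+ ```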
Storage accounts with virtual networks are supported. To allow Data Box service to work with secured storage accounts, enable the trusted services within the storage account network firewall settings. For more information, see how to [Add Azure Data Box as a trusted service](../storage/common/storage-network-security.md#exceptions).
@@ -415,7 +419,7 @@ Do the following steps using Azure CLI to order a device:
|sku| The specific Data Box device you are ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" | |email-list| The email addresses associated with the order.| "gusp@contoso.com" | |street-address1| The street address to where the order will be shipped. | "15700 NE 39th St" |
- |street-address2| The secondary address information, such as apartment number or building number. | "Bld 123" |
+ |street-address2| The secondary address information, such as apartment number or building number. | "Building 123" |
|city| The city that the device will be shipped to. | "Redmond" | |state-or-province| The state where the device will be shipped.| "WA" | |country| The country that the device will be shipped. | "United States" |
@@ -534,7 +538,7 @@ Do the following steps using Azure PowerShell to order a device:
|DataBoxType [Required]| The specific Data Box device you are ordering. Valid values are: "DataBox", "DataBoxDisk", and "DataBoxHeavy"| "DataBox" | |EmailId [Required]| The email addresses associated with the order.| "gusp@contoso.com" | |StreetAddress1 [Required]| The street address to where the order will be shipped. | "15700 NE 39th St" |
- |StreetAddress2| The secondary address information, such as apartment number or building number. | "Bld 123" |
+ |StreetAddress2| The secondary address information, such as apartment number or building number. | "Building 123" |
|StreetAddress3| The tertiary address information. | | |City [Required]| The city that the device will be shipped to. | "Redmond" | |StateOrProvinceCode [Required]| The state where the device will be shipped.| "WA" |
@@ -597,7 +601,7 @@ Microsoft then prepares and dispatches your device via a regional carrier. You r
### Track a single order
-To get tracking information about a single, existing Azure Data Box order, run [az databox job show](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-show&preserve-view=true). The command displays information about the order such as, but not limited to: name, resource group, tracking information, subscription ID, contact information, shipment type, and device sku.
+To get tracking information about a single, existing Azure Data Box order, run [`az databox job show`](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-show&preserve-view=true). The command displays information about the order such as, but not limited to: name, resource group, tracking information, subscription ID, contact information, shipment type, and device SKU.
```azurecli
az databox job show --resource-group <resource-group> --name <order-name>
```
@@ -638,7 +642,7 @@ To get tracking information about a single, existing Azure Data Box order, run [
### List all orders
-If you have ordered multiple devices, you can run [az databox job list](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-list&preserve-view=true) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
+If you have ordered multiple devices, you can run [`az databox job list`](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-list&preserve-view=true) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
The command also displays time stamps of each order. ```azurecli
@@ -714,7 +718,7 @@ To get tracking information about a single, existing Azure Data Box order, run [
### List all orders
-If you have ordered multiple devices, you can run [Get-AzDataBoxJob](/powershell/module/az.databox/Get-AzDataBoxJob) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
+If you have ordered multiple devices, you can run [`Get-AzDataBoxJob`](/powershell/module/az.databox/Get-AzDataBoxJob) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
The command also displays time stamps of each order. ```azurepowershell
@@ -757,7 +761,7 @@ To delete a canceled order, go to **Overview** and select **Delete** from the co
### Cancel an order
-To cancel an Azure Data Box order, run [az databox job cancel](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-cancel&preserve-view=true). You are required to specify your reason for canceling the order.
+To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-cancel&preserve-view=true). You are required to specify your reason for canceling the order.
```azurecli
az databox job cancel --resource-group <resource-group> --name <order-name> --reason <cancel-description>
```
@@ -794,7 +798,7 @@ To cancel an Azure Data Box order, run [az databox job cancel](/cli/azure/ext/da
### Delete an order
-If you have canceled an Azure Data Box order, you can run [az databox job delete](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-delete&preserve-view=true) to delete the order.
+If you have canceled an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/ext/databox/databox/job?view=azure-cli-latest#ext-databox-az-databox-job-delete&preserve-view=true) to delete the order.
```azurecli
az databox job delete --name [-n] <order-name> --resource-group <resource-group> [--yes] [--verbose]
```
@@ -867,7 +871,7 @@ PS C:\WINDOWS\system32>
### Delete an order
-If you have canceled an Azure Data Box order, you can run [Remove-AzDataBoxJob](/powershell/module/az.databox/remove-azdataboxjob) to delete the order.
+If you have canceled an Azure Data Box order, you can run [`Remove-AzDataBoxJob`](/powershell/module/az.databox/remove-azdataboxjob) to delete the order.
```azurepowershell
Remove-AzDataBoxJob -Name <String> -ResourceGroup <String>
```
databox https://docs.microsoft.com/en-us/azure/databox/data-box-system-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-system-requirements.md
@@ -1,23 +1,23 @@
--- title: Microsoft Azure Data Box system requirements| Microsoft Docs
-description: Learn about important system requirements for your Azure Data Box and for the clients connecting to the Data Box.
+description: Learn about important system requirements for your Azure Data Box and for clients that connect to the Data Box.
services: databox author: alkohli ms.service: databox ms.subservice: pod ms.topic: article
-ms.date: 10/02/2020
+ms.date: 12/23/2020
ms.author: alkohli --- # Azure Data Box system requirements
-This article describes important system requirements for your Microsoft Azure Data Box and for clients that connect to the Data Box. We recommend you review the information carefully before you deploy your Data Box and then refer to it as needed during deployment and operation.
+This article describes important system requirements for your Microsoft Azure Data Box and for clients that connect to the Data Box. We recommend you review the information carefully before you deploy your Data Box and then refer to it when you need to during deployment and operation.
The system requirements include:
-* **Software requirements:** For hosts connecting to the Data Box, describes supported operating systems, file transfer protocols, storage accounts, storage types, and browsers for the local web UI.
-* **Networking requirements:** For the Data Box, describes network connection and port requirements for optimum operation of the Data Box.
+* **Software requirements:** For hosts that connect to the Data Box, describes supported operating systems, file transfer protocols, storage accounts, storage types, and browsers for the local web UI.
+* **Networking requirements:** For the Data Box, describes requirements for network connections and ports for best operation of the Data Box.
## Software requirements
@@ -50,11 +50,11 @@ The software requirements include supported operating systems, file transfer pro
## Networking requirements
-Your datacenter needs to have high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection is not available, a 1-GbE data link can be used to copy data but the copy speeds are affected.
+Your datacenter needs to have a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, you can use a 1-GbE data link to copy data, but the copy speeds are affected.
### Port requirements
-The following table lists the ports that need to be opened in your firewall to allow for SMB or NFS traffic. In this table, *In* (*inbound*) refers to the direction from which incoming client requests access to your device. *Out* (or *outbound*) refers to the direction in which your Data Box device sends data externally, beyond the deployment: for example, outbound to the Internet.
+The following table lists the ports that need to be opened in your firewall to allow for SMB or NFS traffic. In this table, *In* (*inbound*) refers to the direction of incoming client requests to your device. *Out* (or *outbound*) refers to the direction in which your Data Box device sends data externally, beyond the deployment. For example, data might be outbound to the Internet.
[!INCLUDE [data-box-port-requirements](../../includes/data-box-port-requirements.md)]
databox https://docs.microsoft.com/en-us/azure/databox/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-baseline.md
@@ -271,7 +271,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) enabled to log into and configure your Azure Data Box orders.
-* [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../active-directory/authentication/howto-mfa-getstarted.md)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-overview.md
@@ -31,7 +31,7 @@ Azure DDoS protection does not store customer data.
- **Turnkey protection:** Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required. - **Always-on traffic monitoring:** Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it is detected. - **Adaptive tuning:** Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time.-- **Multi-Layered protection:** Provides full stack DDoS protection, when used with a web application firewall, to get protection both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+- **Multi-Layered protection:** When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
- **Extensive mitigation scale:** Over 60 different attack types can be mitigated, with global capacity, to protect against the largest known DDoS attacks. - **Attack analytics:** Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. - **Attack metrics:** Summarized metrics from each attack are accessible through Azure Monitor.
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection-template.md new file mode 100644
@@ -0,0 +1,150 @@
+---
+title: Create and enable an Azure DDoS Protection plan using an Azure Resource Manager template (ARM template)
+description: Learn how to create and enable an Azure DDoS Protection plan using an Azure Resource Manager template (ARM template).
+services: ddos-protection
+documentationcenter: na
+author: mumian
+ms.service: ddos-protection
+ms.devlang: na
+ms.topic: quickstart
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.custom: subject-armqs
+ms.author: jgao
+ms.date: 01/14/2021
+---
+
+# Quickstart: Create and enable an Azure DDoS Protection plan using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a distributed denial of service (DDoS) protection plan and a virtual network (VNet), and then enable the protection plan for the VNet. An Azure DDoS Protection Standard plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+
+[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-create-and-enable-ddos-protection-plans%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-create-and-enable-ddos-protection-plans).
+
+:::code language="json" source="~/quickstart-templates/101-create-and-enable-ddos-protection-plans/azuredeploy.json":::
+
+The template defines two resources:
+
+- [Microsoft.Network/ddosProtectionPlans](/templates/microsoft.network/ddosprotectionplans)
+- [Microsoft.Network/virtualNetworks](/templates/microsoft.network/virtualnetworks)
+
+## Deploy the template
+
+In this example, the template creates a new resource group, a DDoS protection plan, and a VNet.
+
+1. To sign in to Azure and open the template, select the **Deploy to Azure** button.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-create-and-enable-ddos-protection-plans%2Fazuredeploy.json)
+
+1. Enter the values to create a new resource group, DDoS protection plan, and VNet name.
+
+ :::image type="content" source="media/manage-ddos-protection-template/ddos-template.png" alt-text="DDoS quickstart template.":::
+
+ - **Subscription**: Name of the Azure subscription where the resources will be deployed.
+ - **Resource group**: Select an existing resource group or create a new resource group.
+ - **Region**: The region where the resource group is deployed, such as East US.
+ - **Ddos Protection Plan Name**: The name for the new DDoS protection plan.
+ - **Virtual Network Name**: The name for the new VNet.
+ - **Location**: A function that uses the resource group's region for the resource deployment.
+ - **Vnet Address Prefix**: Use the default value or enter your VNet address.
+ - **Subnet Prefix**: Use the default value or enter your VNet subnet.
+ - **Ddos Protection Plan Enabled**: Default is `true` to enable the DDoS protection plan.
+
+1. Select **Review + create**.
+1. Verify that template validation passed and select **Create** to begin the deployment.
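+
+You can also deploy the same template from the Azure CLI. The following is a hedged sketch: the parameter names are inferred from the portal fields above, and the resource group name and location are placeholders.
+
+```azurecli-interactive
+# Create a resource group, then deploy the quickstart template into it.
+az group create --name MyResourceGroup --location eastus
+
+az deployment group create --resource-group MyResourceGroup --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-create-and-enable-ddos-protection-plans/azuredeploy.json" --parameters ddosProtectionPlanName=MyDdosProtectionPlan virtualNetworkName=MyVNet
+```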
+
+## Review deployed resources
+
+To copy the Azure CLI or Azure PowerShell command, select the **Copy** button. The **Try it** button opens Azure Cloud Shell to run the command.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az network ddos-protection show \
+ --resource-group MyResourceGroup \
+ --name MyDdosProtectionPlan
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzDdosProtectionPlan -ResourceGroupName 'MyResourceGroup' -Name 'MyDdosProtectionPlan'
+```
+
+---
+
+The output shows the new resources.
+
+# [CLI](#tab/CLI)
+
+```Output
+{
+ "etag": "W/\"abcdefgh-1111-2222-bbbb-987654321098\"",
+ "id": "/subscriptions/b1111111-2222-3333-aaaa-012345678912/resourceGroups/MyResourceGroup/providers/Microsoft.Network/ddosProtectionPlans/MyDdosProtectionPlan",
+ "location": "eastus",
+ "name": "MyDdosProtectionPlan",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "MyResourceGroup",
+ "resourceGuid": null,
+ "tags": null,
+ "type": "Microsoft.Network/ddosProtectionPlans",
+ "virtualNetworks": [
+ {
+ "id": "/subscriptions/b1111111-2222-3333-aaaa-012345678912/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet",
+ "resourceGroup": "MyResourceGroup"
+ }
+ ]
+}
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```Output
+Name : MyDdosProtectionPlan
+Id : /subscriptions/b1111111-2222-3333-aaaa-012345678912/resourceGroups/MyResourceGroup/providers/Microsoft.Network/ddosProtectionPlans/MyDdosProtectionPlan
+Etag : W/"abcdefgh-1111-2222-bbbb-987654321098"
+ProvisioningState : Succeeded
+VirtualNetworks : [
+ {
+ "Id": "/subscriptions/b1111111-2222-3333-aaaa-012345678912/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet"
+ }
+ ]
+```
+
+---
+
+## Clean up resources
+
+When you're finished, you can delete the resources. The command deletes the resource group and all the resources it contains.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name MyResourceGroup
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'MyResourceGroup'
+```
+
+---
+
+## Next steps
+
+To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/architecture.md
@@ -12,15 +12,16 @@ ms.devlang: na
ms.topic: conceptual ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/02/2020
+ms.date: 1/13/2021
ms.author: shhazam --- # Azure Defender for IoT architecture
-This article describes the functional system architecture of the Defender for IoT solution.
+This article describes the functional system architecture of the Defender for IoT solution. Azure Defender for IoT offers two sets of capabilities to fit your environment's needs: an agentless solution for organizations, and an agent-based solution for device builders.
-## Defender for IoT components
+## Agentless solution for organizations
+### Defender for IoT components
Defender for IoT connects both to the Azure cloud as well as to on-premises components. The solution is designed for scalability in large and geographically distributed environments with multiple remote locations. This solution enables a multi-layered distributed architecture by country, region, business unit, or zone.
@@ -75,12 +76,12 @@ Managing Azure Defender for IoT across hybrid environments is accomplished via t
- The on-premises management console - The Azure portal
-#### Sensor console
+### Sensor console
Sensor detections are displayed in the sensor console, where they can be viewed, investigated, and analyzed in a network map, asset inventory, and in an extensive range of reports, for example risk assessment reports, data mining queries and attack vectors. You can also use the console to view and handle threats detected by sensor engines, forward information to partner systems, manage users, and more. :::image type="content" source="./media/architecture/sensor-console-v2.png" alt-text="Defender for IoT sensor console":::
-#### On-premises management console
+### On-premises management console
The on-premises management console enables security operations center (SOC) operators to manage and analyze alerts aggregated from multiple sensors into one single dashboard and provides an overall view of the health of the OT networks. This architecture provides a comprehensive unified view of the network at a SOC level, optimized alert handling, and the control of operational network security, ensuring that decision-making and risk management remain flawless.
@@ -99,20 +100,23 @@ Tightly integrated with your SOC workflows and run books, it enables easy priori
:::image type="content" source="media/updates/alerts-and-site-management-v2.png" alt-text="Manage all of your alerts and information.":::
-#### Azure portal
+### Azure portal
The Defender for IoT portal in Azure is used to help you:

- Purchase solution appliances
+- Install and update software
- Onboard sensors to Azure
- Update Threat Intelligence packages
-## Embedded security agent: Built-in mode
+## Agent-based solution for device builders
+
+### Embedded security agent: Built-in mode
In **Built-in** mode, Defender for IoT is enabled when you elect to turn on the **Security** option in your IoT hub. With real-time monitoring, recommendations, and alerts, built-in mode offers single-step device visibility and unmatched security. Built-in mode does not require agent installation on any devices and uses advanced analytics on logged activities to analyze and protect your field device and IoT hub.
-## Embedded security agent: Enhanced mode
+### Embedded security agent: Enhanced mode
In **Enhanced** mode, after turning on the **Security** option in your IoT hub and installing Defender for IoT device agents on your devices, the agents collect, aggregate, and analyze raw security events from your devices. Raw security events can include IP connections, process creation, user logins, and other security-relevant information. Defender for IoT device agents also handles event aggregation to help avoid high network throughput. The agents are highly customizable, allowing you to use them for specific tasks, such as sending only important information at the fastest SLA, or for aggregating extensive security information and context into larger segments, avoiding higher service costs.
@@ -126,6 +130,8 @@ Using the analytics pipeline, Defender for IoT combines all of the streams of in
Defender for IoT recommendations and alerts (analytics pipeline output) is written to the Log Analytics workspace of each customer. Including the raw events in the workspace as well as the alerts and recommendations enables deep dive investigations and queries using the exact details of the suspicious activities detected.
+:::image type="content" source="media/architecture/micro-agent-architecture.png" alt-text="The micro agent architecture.":::
+ ## See also [Defender for IoT FAQ](resources-frequently-asked-questions.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-install-software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
@@ -225,9 +225,9 @@ This article describes how to configure the BIOS by using the configuration file
4. The appliance's credentials are:
- - Username: **cyberx**
+ - Username: **XXX**
- - Password: **xhxvhttju,@4338**
+ - Password: **XXX**
The import server profile operation is initiated.
@@ -269,7 +269,7 @@ To manually configure:
- If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
- - If the appliance is a Defender for IoT appliance, sign in by using **cyberx** for the username and **xhxvhttju,@4338** for the password.
+ - If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
2. After you access the BIOS, go to **Device Settings**.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-individual-sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
@@ -4,7 +4,7 @@ description: Learn how to manage individual sensors, including managing activati
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/22/2020
+ms.date: 01/10/2021
ms.topic: how-to ms.service: azure ---
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-horizon-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-horizon-api.md
@@ -4,7 +4,7 @@ description: This guide describes commonly used Horizon methods.
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 1/7/2020
+ms.date: 1/5/2021
ms.topic: article ms.service: azure ---
@@ -15,17 +15,19 @@ This guide describes commonly used Horizon methods.
### Getting more information
-For more information about working with Horizon and the CyberX Platform, refer to the following:
+For more information about working with Horizon and the Defender for IoT platform, see the following information:
-- For the Horizon Open Development Environment (ODE) SDK, contact your CyberX representative.
+- For the Horizon Open Development Environment (ODE) SDK, contact your Defender for IoT representative.
- For support and troubleshooting information, contact <support@cyberx-labs.com>.
-- To access the Cyberx User Guide from CyberX Console, select :::image type="icon" source="media/references-horizon-api/profile-icon.png"::: and then select **Download User Guide**.
+- To access the Defender for IoT user guide from the Defender for IoT console, select :::image type="icon" source="media/references-horizon-api/profile.png"::: and then select **Download User Guide**.
+ ## `horizon::protocol::BaseParser` Abstract for all plugins. This consists of two methods: -- For processing plugin filters defined above you. This way Horizon knows how to communicate with the parser
+- For processing plugin filters defined above you. This way Horizon knows how to communicate with the parser.
- For processing the actual data.

## `std::shared_ptr<horizon::protocol::BaseParser> create_parser()`
@@ -34,7 +36,7 @@ The first function that is called for your plugin creates an instance of the par
### Parameters
-None
+None.
### Return value
@@ -44,15 +46,15 @@ shared_ptr to your parser instance.
This function will get called for each plugin registered above.
-In most cases this will be empty. Throw an exception for Horizon to know something bad happened.
+In most cases, this will be empty. Throw an exception to let Horizon know something went wrong.
### Parameters -- A map containing the structure of dissect_as, as defined in the config.json of another plugin which wants to register over you.
+- A map containing the structure of dissect_as, as defined in the config.json of another plugin that wants to register over you.
### Return value
-An array of uint64_t which is the registration processed into a kind of uint64_t. This means in the map, you'll have a list of ports, whose values will be the uin64_t.
+An array of uint64_t, which is the registration processed into a kind of uint64_t. This means that in the map, you'll have a list of ports whose values will be the uint64_t.
## `horizon::protocol::ParserResult horizon::protocol::BaseParser::processLayer(horizon::protocol::management::IProcessingUtils &,horizon::general::IDataBuffer &)`
@@ -64,12 +66,12 @@ Your plugin should be thread safe, as this function may be called from different
### Parameters -- The SDK control unit responsible for storing the data and creating SDK related objects, such as ILayer, fields etc.
+- The SDK control unit responsible for storing the data and creating SDK-related objects, such as ILayer and fields.
- A helper for reading the data of the raw packet. It is already set with the byte order you defined in the config.json.

### Return value
-The result of the processing. This can be either Success/Malformed/Sanity.
+The result of the processing. This can be either *Success*, *Malformed*, or *Sanity*.
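+
+Putting the pieces together, here's a minimal, hedged sketch of a parser plugin. It builds only against the Horizon ODE SDK headers (which aren't public); `MyParser` is illustrative, and the `registerDissectAs` and `createNewLayer` names are assumptions inferred from the descriptions in this reference, not confirmed SDK names.
+
+```cpp
+#include <cstdint>
+#include <map>
+#include <memory>
+#include <string>
+#include <vector>
+// Plus the Horizon ODE SDK headers (available from your Defender for IoT representative).
+
+class MyParser : public horizon::protocol::BaseParser {
+public:
+    // Processes plugin filters defined above this parser (method name assumed).
+    std::vector<uint64_t> registerDissectAs(
+        const std::map<std::string, std::vector<uint64_t>> &dissectAs) override {
+        return {};  // no nested plugins register over this parser in the sketch
+    }
+
+    // Processes the actual data. Must be thread safe.
+    horizon::protocol::ParserResult processLayer(
+        horizon::protocol::management::IProcessingUtils &utils,
+        horizon::general::IDataBuffer &buffer) override {
+        // Sanity check: we need at least one byte for a function code.
+        if (!buffer.validateRemainingSize(1)) {
+            return horizon::protocol::SanityFailureResult(0);  // ctor arguments assumed
+        }
+        uint8_t functionCode = buffer.readUInt8();
+
+        // Store the extracted value on a new layer (helper name assumed).
+        auto &layer = utils.createNewLayer();
+        utils.getFieldsManager().create(
+            layer, HORIZON_FIELD("function_code"),
+            static_cast<uint64_t>(functionCode));
+
+        return horizon::protocol::SuccessResult();
+    }
+};
+
+// The entry point documented above: Horizon calls this to instantiate the parser.
+std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
+    return std::make_shared<MyParser>();
+}
+```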
## `horizon::protocol::SanityFailureResult: public horizon::protocol::ParserResult`
@@ -85,7 +87,7 @@ Constructor
## `horizon::protocol::MalformedResult: public horizon::protocol::ParserResult`
-Malformed result, indicated we already recognized the packet as our protocol, but some validation went wrong (reserved bits are on, some field is missing etc.)
+A malformed result indicates that we already recognized the packet as our protocol, but some validation went wrong (reserved bits are on, or some field is missing).
## `horizon::protocol::MalformedResult::MalformedResult(uint64_t)`
@@ -97,7 +99,7 @@ Constructor
## `horizon::protocol::SuccessResult: public horizon::protocol::ParserResult`
-Notifies Horizon of successful processing. When successful, the packet was accepted; the data belongs to us, and all data was extracted.
+Notifies Horizon of successful processing. When successful, the packet was accepted, the data belongs to us, and all data was extracted.
## `horizon::protocol::SuccessResult()`
@@ -105,24 +107,24 @@ Constructor. Created a basic successful result. This means we don't know the dir
## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection)`
-Constructor
+Constructor.
### Parameters -- The direction of packet, if identified. Values can be REQUEST, RESPONSE
+- The direction of the packet, if identified. Values can be *REQUEST* or *RESPONSE*.
## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection, const std::vector<uint64_t> &)`
-Constructor
+Constructor.
### Parameters -- The direction of packet, if we've identified it, can be REQUEST, RESPONSE
+- The direction of the packet, if we've identified it. Values can be *REQUEST* or *RESPONSE*.
- Warnings. These events won't cause a failure, but Horizon will be notified.

## `horizon::protocol::SuccessResult(const std::vector<uint64_t> &)`
-Constructor
+Constructor.
### Parameters
@@ -130,11 +132,11 @@ Constructor
## `HorizonID HORIZON_FIELD(const std::string_view &)`
-Converts a string-based reference to a field name (e.g. function_code) to HorizonID
+Converts a string-based reference to a field name (for example, function_code) to HorizonID.
### Parameters -- String to convert
+- String to convert.
### Return value
@@ -150,11 +152,11 @@ A reference to a created layer, so you could add data to it.
## `horizon::protocol::management::IFieldManagement &horizon::protocol::management::IProcessingUtils::getFieldsManager()`
-Gets the field management object, which is responsible for creating fields on different objects e.g. on ILayer
+Gets the field management object, which is responsible for creating fields on different objects, for example, on ILayer.
### Return value
-A reference to the manager
+A reference to the manager.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, uint64_t)`
@@ -162,9 +164,9 @@ Creates a new numeric field of 64 bits on the layer with the requested ID.
### Parameters -- The layer you created earlier-- HorizonID created by the HORIZON_FIELD macro-- The raw value you want to store
+- The layer you created earlier.
+- HorizonID created by the **HORIZON_FIELD** macro.
+- The raw value you want to store.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::string)`
@@ -172,19 +174,19 @@ Creates a new string field of on the layer with the requested ID. The memory wil
### Parameters -- The layer you created earlier-- HorizonID created by the HORIZON_FIELD macro-- The raw value you want to store
+- The layer you created earlier.
+- HorizonID created by the **HORIZON_FIELD** macro.
+- The raw value you want to store.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::vector<char> &)`
-Creates a new raw value (array of bytes) field of on the layer, with the requested ID. The memory will be move, so be caution, you won't be able to use this value again
+Creates a new raw value (array of bytes) field on the layer, with the requested ID. The memory will be moved, so be cautious: you won't be able to use this value again.
### Parameters -- The layer you created earlier-- HorizonID created by the HORIZON_FIELD macro-- The raw value you want to store
+- The layer you created earlier.
+- HorizonID created by the **HORIZON_FIELD** macro.
+- The raw value you want to store.
## `horizon::protocol::IFieldValueArray &horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, horizon::protocol::FieldValueType)`
@@ -192,40 +194,40 @@ Creates an array value (array) field on the layer of the specified type with the
### Parameters -- The layer you created earlier-- HorizonID created by the HORIZON_FIELD macro-- The type of values that will be stored inside the array
+- The layer you created earlier.
+- HorizonID created by the **HORIZON_FIELD** macro.
+- The type of values that will be stored inside the array.
### Return value
-Reference to an array that you should append values to
+Reference to an array that you should append values to.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, uint64_t)`
-Appends a new integer value to the array created earlier
+Appends a new integer value to the array created earlier.
### Parameters -- The array created earlier-- The raw value to be stored in the array
+- The array created earlier.
+- The raw value to be stored in the array.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::string)`
-Appends a new string value to the array created earlier. The memory will be move, so be caution, you won't be able to use this value again
+Appends a new string value to the array created earlier. The memory will be moved, so be cautious: you won't be able to use this value again.
### Parameters -- The array created earlier-- Raw value to be stored in the array
+- The array created earlier.
+- Raw value to be stored in the array.
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::vector<char> &)`
-Appends a new raw value to the array created earlier. The memory will be move, so be caution, you won't be able to use this value again
+Appends a new raw value to the array created earlier. The memory will be moved, so be cautious: you won't be able to use this value again.
### Parameters -- The array created earlier-- Raw value to be stored in the array
+- The array created earlier.
+- Raw value to be stored in the array.
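+
+For example, a short hedged snippet tying the array calls above together (`utils` and `layer` are as in `processLayer`; the `FieldValueType` enumerator name is an assumption):
+
+```cpp
+// Create an array field on the layer, then append values to it.
+auto &values = utils.getFieldsManager().create(
+    layer, HORIZON_FIELD("values"),
+    horizon::protocol::FieldValueType::INTEGER);  // enumerator name assumed
+
+utils.getFieldsManager().create(values, static_cast<uint64_t>(42));  // append an integer
+utils.getFieldsManager().create(values, std::string("reply"));       // append a string (moved)
+```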
## `bool horizon::general::IDataBuffer::validateRemainingSize(size_t)`
@@ -233,15 +235,15 @@ Checks that the buffer contains at least X bytes.
### Parameters
-Number of bytes should exist
+The number of bytes that should exist.
### Return value
-True if the buffer contains at least X bytes. False otherwise.
+`true` if the buffer contains at least X bytes; otherwise, `false`.
## `uint8_t horizon::general::IDataBuffer::readUInt8()`
-Reads uint8 value (1 bytes), from the buffer, according to the byte order.
+Reads a uint8 value (1 byte) from the buffer, according to the byte order.
### Return value
@@ -277,12 +279,12 @@ Reads into pre-allocated memory, of a specified size, will actually copy the dat
### Parameters -- The memory region to copy the data into-- Size of the memory region, this parameter also defined how many bytes will be copied
+- The memory region to copy the data into.
+- Size of the memory region. This parameter also defines how many bytes will be copied.
## `std::string_view horizon::general::IDataBuffer::readString(size_t)`
-Reads into a string from the buffer
+Reads into a string from the buffer.
### Parameters
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-horizon-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-horizon-sdk.md new file mode 100644
@@ -0,0 +1,1640 @@
+---
+title: Horizon SDK
+titleSuffix: Azure Defender for IoT
+description: The Horizon SDK lets Azure Defender for IoT developers design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/13/2021
+ms.topic: article
+ms.service: azure
+---
+
+# Horizon proprietary protocol dissector
+
+Horizon is an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols.
+
+This environment provides the following solutions for customers and technology partners:
+
+- Unlimited, full support for common, proprietary, custom protocols or protocols that deviate from any standard.
+
+- A new level of flexibility and scope for DPI development.
+
+- A tool that exponentially expands OT visibility and control, without the need to upgrade Defender for IoT platform versions.
+
+- The security of allowing proprietary development without divulging sensitive information.
+
+The Horizon SDK lets Azure Defender for IoT developers design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
+
+Protocol dissectors are developed as external plugins and are integrated with an extensive range of Defender for IoT services, for example, services that provide monitoring, alerting, and reporting capabilities.
+
+## Secure development environment
+
+The Horizon ODE enables development of custom or proprietary protocols that cannot be shared outside an organization, for example, because of legal regulations or corporate policies.
+
+Develop dissector plugins without:
+
+- revealing any proprietary information about how your protocols are defined.
+
+- sharing any of your sensitive PCAPs.
+
+- violating compliance regulations.
+
+## Customization and localization
+
+The SDK supports various customization options, including:
+
+ - Text for function codes.
+
+ - Full localization text for alerts, events, and protocol parameters. For more information, see [Create mapping files (JSON)](#create-mapping-files-json).
+
+ :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="View fully localized alerts.":::
+
+## Horizon architecture
+
+The architectural model includes three product layers.
+
+:::image type="content" source="media/references-horizon-sdk/architecture.png" alt-text="https://lh6.googleusercontent.com/YFePqJv_6jbI_oy3lCQv-hHB1Qly9a3QQ05uMnI8UdTwhOuxpNAedj_55wseYEQQG2lue8egZS-mlnQZPWfFU1dF4wzGQSJIlUqeXEHg9CG4M7ASCZroKgbghv-OaNoxr3AIZtIh":::
+
+## Defender for IoT platform layer
+
+Enables immediate integration and real-time monitoring of custom dissector plugins in the Defender for IoT platform, without the need to upgrade the Defender for IoT platform version.
+
+## Defender for IoT services layer
+
+Each service is designed as a pipeline, decoupled from a specific protocol, which enables more efficient, independent development. Services listen for traffic on the pipeline and interact with the plugin data and the traffic captured by the sensors to index deployed protocols and analyze the traffic payload.
+
+## Custom dissector layer
+
+Enables creation of plugins using the Defender for IoT proprietary SDK (including C++ implementation and JSON configuration) to:
+
+- Define how to identify the protocol
+
+- Define how to map the fields you want to extract from the traffic, and extract them
+
+- Define how to integrate with the Defender for IoT services
+
+ :::image type="content" source="media/references-horizon-sdk/layers.png" alt-text="The built-in layers.":::
+
+Defender for IoT provides basic dissectors for common protocols. You can build your dissectors on top of these protocols.
+
+## Before you begin
+
+## What this SDK contains
+
+This kit contains the header files needed for development. The development process requires basic steps and optional advanced steps, described in this SDK.
+
+Contact <support@cyberx-labs.com> for information on receiving header files and other resources.
+
+## About the environment and setup
+
+### Requirements
+
+- The preferred development environment is Linux. If you are developing in a Windows environment, consider using a VM with a Linux System.
+
+- For the compilation process, use GCC 7.4.0 or higher. Use any standard class from stdlib that is supported under C++17.
+
+- Defender for IoT version 3.0 and above.
+
+### Process
+
+1. [Download](https://www.eclipse.org/) the Eclipse IDE for C/C++ Developers. You can use any other IDE you prefer. This document guides you through configuration using Eclipse IDE.
+
+1. After launching Eclipse IDE and configuring the workspace (where your projects will be stored), press **Ctrl + N** and create a new C++ project.
+
+1. On the next screen, set the name to the protocol you want to develop, and select the project type `Shared Library` and the toolchain `Linux GCC`.
+
+1. Edit the project properties: under **C/C++ Build** > **Settings** > **Tool Settings** > **GCC C++ Compiler** > **Miscellaneous**, tick **Position Independent Code**.
+
+1. Paste the example code that you received with the SDK, and compile it.
+
+1. Add the artifacts (library, config.json, and metadata) to a tar.gz file, and change the file extension to \<XXX>.hdp, where \<XXX> is the name of the plugin.
+
+### Research
+
+Before you begin, verify that you:
+
+- Read the protocol specification, if available.
+
+- Know which protocol fields you plan to extract.
+
+- Have planned your mapping objectives.
+
+## About plugin files
+
+Three files are defined during the development process.
+
+### JSON configuration file (required)
+
+This file should define the dissector ID and declarations, dependencies, integration requirements, validation parameters, and mapping definitions to translate values to names and numbers to text. For more information, see the following links:
+
+- [Prepare the configuration file (JSON)](#prepare-the-configuration-file-json)
+
+- [Prepare implementation code validations](#prepare-implementation-code-validations)
+
+- [Extract device metadata](#extract-device-metadata)
+
+- [Connect to an indexing service (Baseline)](#connect-to-an-indexing-service-baseline)
+
+### Implementation code: C++ (required)
+
+The implementation code (CPP) parses raw traffic and maps it to values such as services, classes, and function codes. It extracts the layer fields and maps them to their index names from the JSON configuration files. The fields to extract from CPP are defined in the config file. For more information, see [Prepare the implementation code (C++)](#prepare-the-implementation-code-c).
+
+### Mapping files (optional)
+
+You can customize plugin output text to meet the needs of your enterprise environment.
+
+:::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="migration":::
+
+You can define and update mapping files to update text without changing the code. Each file can map one or many fields:
+
+ - Mapping of field values to names, for example, 1:Reset, 2:Start, 3:Stop.
+
+ - Mapping text to support multiple languages.
+
+For more information, see [Create mapping files (JSON)](#create-mapping-files-json).
+
+## Create a dissector plugin (overview)
+
+1. Review the [About the environment and setup](#about-the-environment-and-setup) section.
+
+2. [Prepare the implementation code (C++)](#prepare-the-implementation-code-c). Copy the **template.cpp** file and implement an override method. For more information, see [horizon::protocol::BaseParser](#horizonprotocolbaseparser) for details.
+
+3. [Prepare the configuration file (JSON)](#prepare-the-configuration-file-json). Copy the **template.json** file and edit it to meet your needs. Do not change the keys.
+
+4. [Prepare implementation code validations](#prepare-implementation-code-validations).
+
+## Prepare the implementation code (C++)
+
+The CPP file is a parser responsible for:
+
+- Validating the packet header and payload (for example, header length or payload structure).
+
+- Extracting data from the header and payload into defined fields.
+
+- Implementing the field extraction configured in the JSON file.
+
+### What to do
+
+Copy the template **.cpp** file and implement an override method. For more information, see [horizon::protocol::BaseParser](#horizonprotocolbaseparser).
+
+### Basic C++ template sample
+
+This section provides the basic protocol template, with standard functions for a sample Defender for IoT Horizon Protocol.
+
+```C++
+#include "plugin/plugin.h"
+
+namespace {
+  class CyberxHorizonSDK : public horizon::protocol::BaseParser {
+   public:
+    std::vector<uint64_t> processDissectAs(const std::map<std::string,
+        std::vector<std::string>> &filters) const override {
+      return std::vector<uint64_t>();
+    }
+
+    horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
+                                                 horizon::general::IDataBuffer &data) override {
+      return horizon::protocol::ParserResult();
+    }
+  };
+}
+
+extern "C" {
+  std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
+    return std::make_shared<CyberxHorizonSDK>();
+  }
+}
+
+```
+
+### Basic C++ template description
+
+This section provides the basic protocol template, with a description of standard functions for a sample Defender for IoT Horizon Protocol.
+
+### #include "plugin/plugin.h"
+
+The definitions that the plugin uses. The header file contains everything needed to complete development.
+
+### horizon::protocol::BaseParser
+
+The communication interface between the Horizon infrastructure and the plugin layer. For more information, see [Horizon architecture](#horizon-architecture) for an overview of the layers.
+
+The `processLayer` method is used to process data.
+
+- The first parameter in the function code is the processing utility, used for retrieving data previously processed and for creating new fields and layers.
+
+- The second parameter in the function code is the current data passed from the previous parser.
+
+```C++
+horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
+ horizon::general::IDataBuffer &data) override {
+
+```
+
+### create_parser
+
+Used to create the instance of your parser.
+
+:::image type="content" source="media/references-horizon-sdk/code.png" alt-text="https://lh5.googleusercontent.com/bRNtyLpBA3LvDXttSPbxdBK7sHiHXzGXGhLiX3hJ7zCuFhbVsbBhgJlKI6Fd_yniueQqWbClg5EojDwEZSZ219X1Z7osoa849iE9X8enHnUb5to5dzOx2bQ612XOpWh5xqg0c4vR":::
+
+## Protocol function code sample
+
+This section provides an example of how the code number (2 bytes) and the message length (4 bytes) are extracted.
+
+This is done according to the endianness supplied in the JSON configuration file, which means that if the protocol is *little endian* and the sensor runs on a machine with a different endianness, the value will be converted.
+
+A layer is also created to store data. Use the *fieldsManager* from the processing utils to create new fields. A field can have only one of the following types: *STRING*, *NUMBER*, *RAW DATA*, *ARRAY* (of a specific type), or *COMPLEX*. This layer may contain a number, raw value, or string, each with an ID.
+
+In the sample below, the following two fields are extracted:
+
+- `code_number`
+
+- `header_length`
+
+A new layer is created, and the extracted fields are copied into it.
+
+The sample below describes a specific function, which is the main logic implemented for plugin processing.
+
+```C++
+namespace {
+  class CyberxHorizonProtocol : public horizon::protocol::BaseParser {
+   public:
+    horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
+                                                 horizon::general::IDataBuffer &data) override {
+      // Read the code number (2 bytes) and message length (4 bytes),
+      // converted according to the endianness declared in config.json.
+      uint16_t codeNumber = data.readUInt16();
+      uint32_t headerLength = data.readUInt32();
+
+      // Create a new layer and copy the extracted fields into it.
+      auto &layer = ctx.createNewLayer();
+      ctx.getFieldsManager().create(layer, HORIZON_FIELD("code_number"), codeNumber);
+      ctx.getFieldsManager().create(layer, HORIZON_FIELD("header_length"), headerLength);
+      return horizon::protocol::SuccessResult();
+    }
+  };
+}
+```
+
+### Related JSON field
+
+:::image type="content" source="media/references-horizon-sdk/json.png" alt-text="The related json field.":::
+
+## Prepare the configuration file (JSON)
+
+The Horizon SDK uses standard JavaScript Object Notation (JSON), a lightweight format for storing and transporting data that does not require proprietary scripting languages.
+
+This section describes the minimal JSON configuration declarations and the related structure, and provides a sample config file that defines a protocol. This protocol is automatically integrated with the device discovery service.
+
+## File structure
+
+The sample below describes the file structure.
+
+:::image type="content" source="media/references-horizon-sdk/structure.png" alt-text="The sample of the file structure.":::
+
+### What to do
+
+Copy the template `config.json` file and edit it to meet your needs. Do not change the keys. Keys are marked in red in the [Sample JSON configuration file](#sample-json-configuration-file).
+
+### File naming requirements
+
+The JSON Configuration file must be saved as `config.json`.
+
+### JSON Configuration file fields
+
+This section describes the JSON configuration fields you will be defining. Do not change the field *labels*.
+
+### Basic parameters
+
+This section describes basic parameters.
+
+| Parameter Label | Description | Type |
+|--|--|--|
+| **ID** | The name of the protocol. Delete the default value and add the name of your protocol as it should appear. | String |
+| **endianess** | Defines how the multi-byte data is encoded. Use the term "little" or "big" only. This is taken from the protocol specification or the traffic recording. | String |
+| **sanity_failure_codes** | The codes returned from the parser when there is a sanity conflict regarding the identity of the code. See the magic number validation in the C++ section. | String |
+| **malformed_codes** | Codes that have been properly identified but for which an error is detected, for example, if the field length is too short or too long, or a value is invalid. | String |
+| **dissect_as** | An array defining where the specific protocol traffic should arrive. | TCP/UDP, port, and so on |
+| **fields** | The declaration of which fields will be extracted from the traffic. Each field has its own ID (name) and type (numeric, string, raw, array, complex). For example, the field [function](https://docs.google.com/document/d/14nm8cyoGiaE0ODOYQd_xjULxVz9U_bjfPKkcDhOFr5Q/edit#bookmark=id.6s1zcxa9184k) that is extracted in the implementation parser file. The fields written in the config file are the only ones that can be added to the layer. | |
+
+### Other advanced fields
+
+This section describes other fields.
+
+| Parameter Label | Description |
+|-----------------|--------|
+| **allow lists** | You can index the protocol values and display them in Data Mining reports. These reports reflect your network baseline. :::image type="content" source="media/references-horizon-sdk/data-mining.png" alt-text="A sample of the data mining view."::: <br /> For more information, see [Connect to an indexing service (Baseline)](#connect-to-an-indexing-service-baseline). |
+| **firmware** | You can extract firmware information, define index values, and trigger firmware alerts for the plugin protocol. For more information, see [Extract firmware data](#extract-firmware-data). |
+| **value_mapping** | You can customize plugin output text to meet the needs of your enterprise environment by defining and updating mapping files. For example, map to language files. Changes can easily be implemented to text without changing or impacting the code. For more information, see [Create mapping files (JSON)](#create-mapping-files-json). |
+
+## Sample JSON configuration file
+
+```json
+{
+ "id":"CyberX Horizon Protocol",
+ "endianess": "big",
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+  {
+    "id": "function",
+    "type": "numeric"
+ },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ }
+ ]
+}
+```
+
+## Prepare implementation code validations
+
+This section describes implementation C++ code validation functions and provides sample code. Two layers of validation are available:
+
+- Sanity.
+
+- Malformed Code.
+
+You don't need to create validation code in order to build a functioning plugin. If you don't prepare validation code, you can review sensor Data Mining reports as an indication of successful processing.
+
+Field values can be mapped to the text in mapping files and seamlessly updated without impacting processing.
+
+## Sanity code validations
+
+This validates that the packet transmitted matches the validation parameters of the protocol, which helps you identify the protocol within the traffic.
+
+For example, use the first 8 bytes as the *magic number*. If the sanity check fails, a sanity failure response is returned.
+
+For example:
+
+```C++
+horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
+                                             horizon::general::IDataBuffer &data) override {
+  // Validate the 8-byte magic number before any further processing.
+  uint64_t magic = data.readUInt64();
+  if (magic != 0xBEEFFEEB) {
+    return horizon::protocol::SanityFailureResult(0);
+  }
+```
+
+If other relevant plugins have been deployed, the packet will be validated against them.
+
+## Malformed code validations
+
+Malformed validations are used after the protocol has been positively validated.
+
+If there is a failure to process the packets based on the protocol, a failure response is returned.
+
+:::image type="content" source="media/references-horizon-sdk/failure.png" alt-text="malformed code":::
+
+## C++ sample with validations
+
+Processing is carried out according to the function code, as shown in the example below.
+
+### Function 20
+
+- It is processed as firmware.
+
+- The fields are read according to the function.
+
+- The fields are added to the layer.
+
+### Function 10
+
+- The function contains another sub function, which is a more specific operation.
+
+- The subfunction is read and added to the layer.
+
+Once this is done, processing is finished. The return value indicates if the dissector layer was successfully processed. If it was, the layer becomes usable.
+
+```C++
+#include "plugin/plugin.h"
+
+#define FUNCTION_FIRMWARE_RESPONSE 20
+
+#define FUNCTION_SUBFUNCTION_REQUEST 10
+
+namespace {
+
+class CyberxHorizonSDK: public horizon::protocol::BaseParser {
+
+ public:
+
+ std::vector<uint64_t> processDissectAs(const std::map<std::string,
+
+ std::vector<std::string>> &filters) const override {
+
+ return std::vector<uint64_t>();
+
+ }
+
+ horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
+
+ horizon::general::IDataBuffer &data) override {
+
+ uint64_t magic = data.readUInt64();
+
+ if (magic != 0xBEEFFEEB) {
+
+ return horizon::protocol::SanityFailureResult(0);
+
+ }
+
+ uint16_t function = data.readUInt16();
+
+ uint32_t length = data.readUInt32();
+
+ if (length > data.getRemaningData()) {
+
+ return horizon::protocol::MalformedResult(0);
+
+ }
+
+ auto &layer = ctx.createNewLayer();
+
+ ctx.getFieldsManager().create(layer, HORIZON_FIELD("function"), function);
+
+ switch (function) {
+
+ case FUNCTION_FIRMWARE_RESPONSE: {
+
+ uint8_t modelLength = data.readUInt8();
+
+ std::string model = data.readString(modelLength);
+
+ uint16_t firmwareVersion = data.readUInt16();
+
+ uint8_t nameLength = data.readUInt8();
+
+ std::string name = data.readString(nameLength);
+
+ ctx.getFieldsManager().create(layer, HORIZON_FIELD("model"), model);
+
+ ctx.getFieldsManager().create(layer, HORIZON_FIELD("version"), firmwareVersion);
+
+ ctx.getFieldsManager().create(layer, HORIZON_FIELD("name"), name);
+
+ }
+
+ break;
+
+ case FUNCTION_SUBFUNCTION_REQUEST: {
+
+ uint8_t subFunction = data.readUInt8();
+
+ ctx.getFieldsManager().create(layer, HORIZON_FIELD("sub_function"), subFunction);
+
+ }
+
+ break;
+
+ }
+
+ return horizon::protocol::SuccessResult();
+
+ }
+
+};
+
+}
+
+extern "C" {
+
+ std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
+
+ return std::make_shared<CyberxHorizonSDK>();
+
+ }
+
+}
+```
+
+## Extract device metadata
+
+You can extract the following metadata on assets:
+
+ - `Is_distributed_control_system` - Indicates if the protocol is part of a Distributed Control System (for example, a SCADA protocol should be false).
+
+ - `Has_protocol_address` - Indicates if there is a protocol address, that is, the specific address for the current protocol, for example, the MODBUS unit identifier.
+
+ - `Is_scada_protocol` - Indicates if the protocol is specific to OT networks.
+
+ - `Is_router_potential` - Indicates if the protocol is used mainly by routers. For example, LLDP, CDP, or STP.
+
+To achieve this, update the JSON configuration file using the `metadata` property.
+
+## JSON sample with metadata
+
+```json
+
+{
+ "id":"CyberX Horizon Protocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+  },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ }
+  ]
+}
+
+```
+
+## Extract programming code
+
+When a programming event occurs, you can extract the code content. The extracted content lets you:
+
+- Compare code file content in different programming events.
+
+- Trigger an alert on unauthorized programming.
+
+- Trigger an event when a programming code file is received.
+
+ :::image type="content" source="media/references-horizon-sdk/change.png" alt-text="The programming change log.":::
+
+ :::image type="content" source="media/references-horizon-sdk/view.png" alt-text="View the programming by clicking the button.":::
+
+ :::image type="content" source="media/references-horizon-sdk/unauthorized.png" alt-text="The unauthorized PLC programming alert.":::
+
+To achieve this, update the JSON configuration file using the `code_extraction` property.
+
+### JSON configuration fields
+
+This section describes the JSON configuration fields.
+
+- **method**
+
+ Indicates the way that programming event files are received.
+
+ ALL (each programming action will cause all the code files to be received even if there are files without changes).
+
+- **file_type**
+
+ Indicates the code content type.
+
+ TEXT (each code file contains textual information).
+
+- **code_data_field**
+
+ Indicates the implementation field to use in order to provide the code content.
+
+ FIELD.
+
+- **code_name_field**
+
+ Indicates the implementation field to use in order to provide the name of the coding file.
+
+ FIELD.
+
+- **size_limit**
+
+  Indicates the size limit of each coding file's content, in bytes. If a code file exceeds the set limit, it will be dropped. If this field is not specified, the default value is 15,000,000 (15 MB).
+
+ Number.
+
+- **metadata**
+
+ Indicates additional information for a code file.
+
+ Array containing objects with two properties:
+
+  - name (String) - Indicates the metadata key. XSense currently supports only the username key.
+
+  - value (Field) - Indicates the implementation field to use in order to provide the metadata value.
+
+## JSON sample with programming code
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+ },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ },
+ {
+ "id": "script",
+ "type": "string"
+ },
+ {
+ "id": "script_name",
+ "type": "string"
+ },
+ "id": "username",
+ "type": "string"
+ }
+ ],
+"whitelists": [
+ {
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon
+ Protocol Function",
+ "alert_text": "There was an attempt by the source to
+ invoke a new function on the destination",
+ "fields": [
+ {
+ "name": "Source",
+ "value": "IPv4.src"
+ },
+ {
+ "name": "Destination",
+ "value": "IPv4.dst"
+ },
+ {
+ "name": "Function",
+ "value": "CyberXHorizonProtocol.function"
+ }
+ ]
+  }
+],
+"firmware": {
+ "alert_text": "Firmware was changed on a network asset.
+ This may be a planned activity,
+ for example an authorized maintenance procedure",
+ "index_by": [
+ {
+ "name": "Device",
+ "value": "IPv4.src",
+ "owner": true
+ }
+ ],
+ "firmware_fields": [,
+ {
+ "name": "Model",
+ "value": "CyberXHorizonProtocol.model",
+ "firmware_index": "model"
+ },
+ {
+ "name": "Revision",
+ "value": "CyberXHorizonProtocol.version",
+      "firmware_index": "firmware_version"
+ },
+ {
+ "name": "Name",
+ "value": "CyberXHorizonProtocol.name"
+ }
+ ]
+ },
+"code_extraction": {
+ "method": "ALL",
+ "file_type": "TEXT",
+ "code_data_field": "script",
+ "code_name_field": "script_name",
+ "size_limit": 15000000,
+ "metadata": [
+ {
+ "name": "username",
+ "value": "username"
+ }
+ ]
+ }
+}
+
+```
+## Custom horizon alerts
+
+Some protocol function codes might indicate an error. For example, suppose the protocol controls a container with a specific chemical that must always be stored at a specific temperature. In this case, there may be a function code indicating an error in the thermometer. For example, if the function code is 25, you can trigger an alert in the Web Console that indicates there is a problem with the container. In such a case, you can define deep packet alerts.
+
+Add the **alerts** parameter to the `config.json` of the plugin.
+
+```json
+"alerts": [{
+  "id": 1,
+  "message": "Problem with thermometer at station {IPv4.src}",
+  "title": "Thermometer problem",
+  "expression": "{CyberXHorizonProtocol.function} == 25"
+}]
+
+```
+
+## JSON configuration fields
+
+This section describes the JSON configuration fields.
+
+| Field name | Description | Possible values |
+|--|--|--|
+| **ID** | Represents a single alert ID. It must be unique in this context. | Numeric value 0 - 10000 |
+| **message** | Information displayed to the user. This field allows you to use different fields. | Use any field from your protocol, or any lower layer protocol. |
+| **title** | The alert title. | |
+| **expression** | When you want this alert to pop up. | Use any numeric field found in lower layers, or the current layer.</br></br> Each field should be wrapped with `{}` in order for the SDK to detect it as a field. The supported logical operators are:</br> == - Equal</br> <= - Less than or equal</br> >= - Greater than or equal</br> > - Greater than</br> < - Less than</br> ~= - Not equal |
+
+## More about expressions
+
+Every time the Horizon SDK evaluates the expression and it is *true*, an alert is triggered in the sensor.
+
+Multiple expressions can be included under the same alert. For example,
+
+`{CyberXHorizonProtocol.function} == 25 and {IPv4.src} == 168430090`.
+
+This expression validates the function code only when the packet's IPv4 source is 10.10.10.10; 168430090 is the raw numeric representation of that IP address.
+
+You can use `and` or `or` to connect expressions.
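+
+For example, the following is a minimal sketch of an alert that combines two expressions with `and`; the alert ID, function code, and numeric source address are illustrative values only (168430090 encodes 10.10.10.10, as described above).
+
+```json
+"alerts": [{
+  "id": 2,
+  "message": "Thermometer error reported by {IPv4.src}",
+  "title": "Thermometer problem at a monitored station",
+  "expression": "{CyberXHorizonProtocol.function} == 25 and {IPv4.src} == 168430090"
+}]
+```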
+
+## JSON sample custom horizon alerts
+
+```json
+ "id":"CyberX Horizon Protocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ …………………………………….
+  "alerts": [{
+    "id": 1,
+    "message": "Problem with thermometer at station {IPv4.src}",
+    "title": "Thermometer problem",
+    "expression": "{CyberXHorizonProtocol.function} == 25"
+  }]
+}
+
+```
+
+## Connect to an indexing service (Baseline)
+
+You can index the protocol values and display them in Data Mining reports.
+
+:::image type="content" source="media/references-horizon-sdk/data-mining.png" alt-text="A view of the data mining option.":::
+
+These values can later be mapped to specific texts, for example, mapping numbers to texts or adding information, in any language.
+
+:::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="migration":::
+
+For more information, see [Create mapping files (JSON)](#create-mapping-files-json).
+
+You can also use values from protocols previously parsed to extract additional information.
+
+For example, for a protocol based on TCP, you can use the values from the IPv4 layer. From this layer, you can extract values such as the source and destination of the packet.
+
+To achieve this, update the JSON configuration file using the `whitelist` property.
+
+## Allow list (data mining) fields
+
+The following allow list fields are available:
+
+- name - The name used for indexing.
+
+- alert_title - A short, unique title that explains the event.
+
+- alert_text - Additional information.
+
+Multiple allow lists can be added, allowing complete flexibility in indexing.
+
+## JSON sample with indexing
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+ },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ }
+ ],
+"whitelists": [
+ {
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+ "alert_text": "There was an attempt by the source to invoke a new function on the destination",
+ "fields": [
+ {
+ "name": "Source",
+ "value": "IPv4.src"
+ },
+ {
+ "name": "Destination",
+ "value": "IPv4.dst"
+ },
+ {
+ "name": "Function",
+ "value": "CyberXHorizonProtocol.function"
+ }
+ ]
+    }
+  ]
+}
+
+```
+## Extract firmware data
+
+You can extract firmware information, define index values, and trigger firmware alerts for the plugin protocol. For example,
+
+- Extract the firmware model or version. This information can be further utilized to identify CVEs.
+
+- Trigger an alert when a new firmware version is detected.
+
+To achieve this, update the JSON configuration file using the `firmware` property.
+
+## Firmware fields
+
+This section describes the JSON firmware configuration fields.
+
+- **name**
+
+ Indicates how the field is presented in the sensor console.
+
+- **value**
+
+ Indicates the implementation field to use in order to provide the data.
+
+- **firmware_index:**
+
+  Select one:
+  - **model**: The device model. Enables detection of CVEs.
+  - **serial**: The device serial number. The serial number is not always available for all protocols. This value is unique per device.
+  - **rack**: Indicates the rack identifier, if the device is part of a rack.
+  - **slot**: The slot identifier, if the device is part of a rack.
+  - **module_address**: Use to present a hierarchy if the module can be presented behind another device. Applicable instead of a rack-slot combination, which is a simpler presentation.
+  - **firmware_version**: Indicates the device version. Enables detection of CVEs.
+
+- **alert_text**
+
+  Indicates text describing firmware deviations, for example, version changes.
+
+- **index_by**
+
+  Indicates the fields used to identify and index the device. In the example below, the device is identified by its IP address. In certain protocols, a more complex index can be used, for example, if another device is connected and you know how to extract its internal path. For example, the MODBUS unit ID can be used as part of the index, as a combination of the IP address and the unit identifier.
+
+- **firmware_fields**
+
+  Indicates which fields are device metadata fields. In this example, the following are used: model, revision, and name. Each protocol can define its own firmware data.
+
+## JSON sample with firmware
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+ },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ }
+ ],
+"whitelists": [
+ {
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+ "alert_text": "There was an attempt by the source to invoke a new function on the destination",
+ "fields": [
+ {
+ "name": "Source",
+ "value": "IPv4.src"
+ },
+ {
+ "name": "Destination",
+ "value": "IPv4.dst"
+ },
+ {
+ "name": "Function",
+ "value": "CyberXHorizonProtocol.function"
+ }
+ ]
+  }
+],
+"firmware": {
+ "alert_text": "Firmware was changed on a network asset.
+ This may be a planned activity, for example an authorized maintenance procedure",
+ "index_by": [
+ {
+ "name": "Device",
+ "value": "IPv4.src",
+ "owner": true
+ }
+ ],
+ "firmware_fields": [,
+ {
+ "name": "Model",
+ "value": "CyberXHorizonProtocol.model",
+ "firmware_index": "model"
+ },
+ {
+ "name": "Revision",
+ "value": "CyberXHorizonProtocol.version",
+      "firmware_index": "firmware_version"
+ },
+ {
+ "name": "Name",
+ "value": "CyberXHorizonProtocol.name"
+ }
+ ]
+ }
+}
+
+```
+## Extract device attributes
+
+You can enhance the device information available in the Device Inventory, Data Mining, and other reports:
+
+- Name
+
+- Type
+
+- Vendor
+
+- Operating System
+
+To achieve this, update the JSON configuration file using the `properties` property.
+
+You can do this after writing the basic plugin and extracting required fields.
+
+## Properties fields
+
+This section describes the JSON properties configuration fields.
+
+**config_file**
+
+Contains the file name that defines how to process each key in the `key` fields. The config file itself should be in JSON format and be included as part of the plugin protocol folder.
+
+Each key in the JSON defines the set of actions that should be done when you extract this key from a packet.
+
+Each key can have:
+
+- **Packet Data** - Indicates the properties that would be updated based on the data extracted from the packet (the implementation field used to provide that data).
+
+- **Static Data** - Indicates predefined set of `property-value` actions that should be updated.
+
+The properties that can be configured in this file are:
+
+- **Name** - Indicates the device name.
+
+- **Type** - Indicates the device type.
+
+- **Vendor** - Indicates device vendor.
+
+- **Description** - Indicates the device firmware model (lower priority than "model").
+
+- **operatingSystem** - Indicates the device operating system.
+
+### Fields
+
+| Field | Description |
+|--|--|
+| key | Indicates the key. |
+| value | Indicates the implementation field to use in order to provide the data. |
+| is_static_key | Indicates whether the `key` field is derived as a value from the packet or is a predefined value. |
+
+### Working with static keys only
+
+If you are working with static keys only, you don't have to configure the config file. You can configure the JSON file only.
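+
+For example, a minimal sketch of a static-keys-only `properties` section might look like the following; the key and value shown are illustrative.
+
+```json
+"properties": {
+  "fields": [
+    {
+      "key": "vendor",
+      "value": "CyberXHorizonProtocol.vendor",
+      "is_static_key": true
+    }
+  ]
+}
+```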
+
+## JSON sample with properties
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+ },
+  {
+    "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ },
+ {
+ "id": "vendor",
+ "type": "string"
+ }
+ ],
+"whitelists": [
+ {
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+ "alert_text": "There was an attempt by the source to invoke a new function on the destination",
+ "fields": [
+ {
+ "name": "Source",
+ "value": "IPv4.src"
+ },
+ {
+ "name": "Destination",
+ "value": "IPv4.dst"
+ },
+ {
+ "name": "Function",
+ "value": "CyberXHorizonProtocol.function"
+ }
+ ]
+  }
+],
+"firmware": {
+ "alert_text": "Firmware was changed on a network asset.
+ This may be a planned activity, for example an authorized maintenance procedure",
+ "index_by": [
+ {
+ "name": "Device",
+ "value": "IPv4.src",
+ "owner": true
+ }
+ ],
+ "firmware_fields": [,
+ {
+ "name": "Model",
+ "value": "CyberXHorizonProtocol.model",
+ "firmware_index": "model"
+ },
+ {
+ "name": "Revision",
+ "value": "CyberXHorizonProtocol.version",
+      "firmware_index": "firmware_version"
+ },
+ {
+ "name": "Name",
+ "value": "CyberXHorizonProtocol.name"
+ }
+ ]
+  },
+"properties": {
+ "config_file": "config_file_example",
+"fields": [
+ {
+ "key": "vendor",
+ "value": "CyberXHorizonProtocol.vendor",
+ "is_static_key": true
+ },
+  {
+    "key": "name",
+    "value": "CyberXHorizonProtocol.vendor",
+    "is_static_key": true
+  }
+]
+ }
+}
+
+```
+
+## CONFIG_FILE_EXAMPLE JSON
+
+```json
+{
+  "someKey": {
+    "staticData": {
+      "model": "FlashSystem",
+      "vendor": "IBM",
+      "type": "Storage"
+    },
+    "packetData": [
+      "name"
+    ]
+  }
+}
+
+```
+
+## Create mapping files (JSON)
+
+You can customize plugin output text to meet the needs of your enterprise environment by defining and updating mapping files. Changes can easily be implemented to text without changing or impacting the code. Each file can map one or many fields:
+
+- Mapping of field values to names, for example 1:Reset, 2:Start, 3:Stop.
+
+- Mapping text to support multiple languages.
+
+Two types of mapping files can be defined.
+
+ - [Simple mapping file](#simple-mapping-file).
+
+ - [Dependency mapping file](#dependency-mapping-file).
+
+ :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="ether net":::
+
+ :::image type="content" source="media/references-horizon-sdk/unhandled.png" alt-text="A view of the unhandled alerts.":::
+
+ :::image type="content" source="media/references-horizon-sdk/policy-violation.png" alt-text="A list of known policy violations.":::
+
+## File naming and storage requirements
+
+Mapping files should be saved under the metadata folder.
+
+The name of the file should match the JSON config file ID.
+
+:::image type="content" source="media/references-horizon-sdk/json-config.png" alt-text="A sample of a JSON config file.":::
+
+## Simple mapping file
+
+The following sample presents a basic JSON file as a key value.
+
+When you create an allow list and it contains one or more of the mapped fields, the value will be converted from a number, string, or any other type into the formatted text presented in the mapping file.
+
+```json
+{
+  "10": "Read",
+  "20": "Firmware Data",
+  "3": "Write"
+}
+
+```
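+
+Similarly, a mapping file can carry translated text to support multiple languages. The following is a hedged sketch of the same mapping localized into French; the translations are illustrative only.
+
+```json
+{
+  "10": "Lecture",
+  "20": "Données du micrologiciel",
+  "3": "Écriture"
+}
+```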
+
+## Dependency-mapping file
+
+To indicate that the file is a dependency file, add the keyword `dependency` to the mapping configuration.
+
+```json
+dependency": { "field": "CyberXHorizonProtocol.function" }}]
+ }],
+ "firmware": {
+ "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
+ "index_by": [{ "name": "Device", "value": "IPv4.src", "owner": true }],
+ "firmware_fields": [{ "name": "Model", "value":
+
+```
+
+The file contains a mapping between the dependency field and the function field, for example, between the function and the sub function. The sub function changes according to the function supplied.
+
+In the allow list previously configured, there is no dependency configuration, as shown below.
+
+```json
+"whitelists": [
+{
+"name": "Functions",
+"alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+"alert_text": "There was an attempt by the source to invoke a new function on the destination",
+"fields": [
+{
+"name": "Source",
+"value": "IPv4.src"
+},
+{
+"name": "Destination",
+"value": "IPv4.dst"
+},
+{
+"name": "Function",
+"value": "CyberXHorizonProtocol.function"
+}
+]
+}
+]
+
+```
+
+The dependency can be based on a specific value or a field. In the example below, it is based on a field. If you base it on a value, define the exact value to be read from the mapping file.
+
+In the example below, the dependency works as follows for the same value of the field.
+
+For example, the meaning of sub function five changes based on the function:
+
+ - If it is a read function, then five means Read Memory.
+
+ - If it is a write function, then five means Write File.
+
+ ```json
+ {
+   "10": {
+     "5": "Memory",
+     "6": "File",
+     "7": "Register"
+   },
+   "3": {
+     "5": "File",
+     "7": "Memory",
+     "6": "Register"
+   }
+ }
+
+ ```
+
+### Sample file
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {"is_distributed_control_system": false, "has_protocol_address": false, "is_scada_protocol": true, "is_router_potenial": false},
+ "sanity_failure_codes": { "wrong magic": 0 },
+ "malformed_codes": { "not enough bytes": 0 },
+ "exports_dissect_as": { },
+ "dissect_as": { "UDP": { "port": ["12345"] }},
+ "fields": [{ "id": "function", "type": "numeric" }, { "id": "sub_function", "type": "numeric" },
+ {"id": "name", "type": "string" }, { "id": "model", "type": "string" }, { "id": "version", "type": "numeric" }],
+ "whitelists": [{
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+ "alert_text": "There was an attempt by the source to invoke a new function on the destination",
+ "fields": [{ "name": "Source", "value": "IPv4.src" }, { "name": "Destination", "value": "IPv4.dst" },
+ { "name": "Function", "value": "CyberXHorizonProtocol.function" },
+ { "name": "Sub function", "value": "CyberXHorizonProtocol.sub_function", "dependency": { "field": "CyberXHorizonProtocol.function" }}]
+ }],
+ "firmware": {
+ "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
+ "index_by": [{ "name": "Device", "value": "IPv4.src", "owner": true }],
+ "firmware_fields": [{ "name": "Model", "value": "CyberXHorizonProtocol.model", "firmware_index": "model" },
+ { "name": "Revision", "value": "CyberXHorizonProtocol.version", "firmware_index": "firmware_version" },
+ { "name": "Name", "value": "CyberXHorizonProtocol.name" }]
+ },
+ "value_mapping": {
+ "CyberXHorizonProtocol.function": {
+ "file": "function-mapping"
+ },
+ "CyberXHorizonProtocol.sub_function": {
+ "dependency": true,
+ "file": "sub_function-mapping"
+ }
+ }
+}
+
+```
+
+## JSON sample with mapping
+
+```json
+{
+ "id":"CyberXHorizonProtocol",
+ "endianess": "big",
+ "metadata": {
+ "is_distributed_control_system": false,
+ "has_protocol_address": false,
+ "is_scada_protocol": true,
+ "is_router_potenial": false
+ },
+ "sanity_failure_codes": {
+ "wrong magic": 0
+ },
+ "malformed_codes": {
+ "not enough bytes": 0
+ },
+ "exports_dissect_as": {
+ },
+ "dissect_as": {
+ "UDP": {
+ "port": ["12345"]
+ }
+ },
+ "fields": [
+ {
+ "id": "function",
+ "type": "numeric"
+ },
+ {
+ "id": "sub_function",
+ "type": "numeric"
+ },
+ {
+ "id": "name",
+ "type": "string"
+ },
+ {
+ "id": "model",
+ "type": "string"
+ },
+ {
+ "id": "version",
+ "type": "numeric"
+ }
+ ],
+"whitelists": [
+ {
+ "name": "Functions",
+ "alert_title": "New Activity Detected - CyberX Horizon Protocol Function",
+ "alert_text": "There was an attempt by the source to invoke a new function on the destination",
+ "fields": [
+ {
+ "name": "Source",
+ "value": "IPv4.src"
+ },
+ {
+ "name": "Destination",
+ "value": "IPv4.dst"
+ },
+ {
+ "name": "Function",
+ "value": "CyberXHorizonProtocol.function"
+ },
+ {
+        "name": "Sub function",
+        "value": "CyberXHorizonProtocol.sub_function",
+        "dependency": {
+          "field": "CyberXHorizonProtocol.function"
+        }
+      }
+    ]
+  }
+],
+"firmware": {
+ "alert_text": "Firmware was changed on a network asset. This may be a planned activity, for example an authorized maintenance procedure",
+ "index_by": [
+ {
+ "name": "Device",
+ "value": "IPv4.src",
+ "owner": true
+ }
+ ],
+ "firmware_fields": [,
+ {
+ "name": "Model",
+ "value": "CyberXHorizonProtocol.model",
+ "firmware_index": "model"
+ },
+ {
+ "name": "Revision",
+ "value": "CyberXHorizonProtocol.version",
+      "firmware_index": "firmware_version"
+ },
+ {
+ "name": "Name",
+ "value": "CyberXHorizonProtocol.name"
+ }
+ ]
+ },
+"value_mapping": {
+ "CyberXHorizonProtocol.function": {
+ "file": "function-mapping"
+ },
+ "CyberXHorizonProtocol.sub_function": {
+ "dependency": true,
+ "file": "sub_function-mapping"
+ }
+  }
+}
+
+```
+## Package, upload, and monitor the plugin
+
+This section describes how to:
+
+ - Package your plugin.
+
+ - Upload your plugin.
+
+ - Monitor and debug the plugin to evaluate how well it is performing.
+
+To package the plugin:
+
+1. Add the **artifacts** (library, config.json, and metadata) to a `tar.gz` file.
+
+1. Change the file extension to \<XXX>.hdp, where \<XXX> is the name of the plugin.
+
+To sign in to the Horizon Console:
+
+1. Sign in to your sensor CLI as an administrator, CyberX, or support user.
+
+2. In the file: `/var/cyberx/properties/horizon.properties` change the **ui.enabled** property to **true** (`horizon.properties:ui.enabled=true`).
+
+3. Sign in to the sensor console.
+
+4. Select the **Horizon** option from the main menu.
+
+ :::image type="content" source="media/references-horizon-sdk/horizon.png" alt-text="Select the horizon option from the left side pane.":::
+
+ The Horizon Console opens.
+
+ :::image type="content" source="media/references-horizon-sdk/plugins.png" alt-text="A view of the Horizon console and all of its plugins.":::
+
+## Plugins pane
+
+The plugin pane lists:
+
+ - Infrastructure plugins: Infrastructure plugins installed by default with Defender for IoT.
+
+ - Application plugins: Application plugins installed by default with Defender for IoT and other plugins developed by Defender for IoT, or external developers.
+
+Enable and disable plugins that have been uploaded using the toggle.
+
+:::image type="content" source="media/references-horizon-sdk/toggle.png" alt-text="The CIP toggle.":::
+
+### Uploading a plugin
+
+After creating and packaging your plugin, you can upload it to the Defender for IoT sensor. To achieve full coverage of your network, you should upload the plugin to each sensor in your organization.
+
+To upload:
+
+1. Sign in to your sensor.
+2. Select **Upload**.
+
+ :::image type="content" source="media/references-horizon-sdk/upload.png" alt-text="Upload your plugins.":::
+
+3. Browse to your plugin and drag it to the plugin dialog box. Verify that the file extension is `.hdp`. The plugin loads.
+
+## Plugin status overview
+
+The Horizon console **Overview** window provides information about the plugins you uploaded and lets you disable and enable them.
+
+:::image type="content" source="media/references-horizon-sdk/overview.png" alt-text="The overview of the Horizon console.":::
+
+| Field | Description |
+|--|--|
+| Application | The name of the plugin you uploaded. |
+| :::image type="content" source="media/references-horizon-sdk/switch.png" alt-text="The on and off switch."::: | Toggle **On** or **Off** the plugin. Defender for IoT will not handle protocol traffic defined in the plugin when you toggle off the plugin. |
+| Time | The time the data was last analyzed. Updated every 5 seconds. |
+| PPS | The number of packets per second. |
+| Bandwidth | The average bandwidth detected within the last 5 seconds. |
+| Malforms | Malformed validations are used after the protocol has been positively validated. If there is a failure to process the packets based on the protocol, a failure response is returned. <br><br>This column indicates the number of malform errors in the past 5 seconds. For more information, see [Malformed code validations](#malformed-code-validations). |
+| Warnings | Packets match the structure and specification but there is unexpected behavior based on the plugin warning configuration. |
+| Errors | The number of packets that failed basic protocol validations, which verify that the packet matches the protocol definitions. The number displayed here indicates the number of errors detected in the past 5 seconds. For more information, see [Sanity code validations](#sanity-code-validations). |
+| :::image type="content" source="media/references-horizon-sdk/monitor.png" alt-text="The monitor icon."::: | Review details about malform and warnings detected for your plugin. |
+
+## Plugin details
+
+You can monitor real-time plugin behavior by analyzing the number of *Malform* and *Warnings* detected for your plugin. An option is available to freeze the screen and export it for further investigation.
+
+:::image type="content" source="media/references-horizon-sdk/snmp.png" alt-text="The SNMP monitor screen.":::
+
+To monitor:
+
+Select the **Monitor** button for your plugin from the **Overview**.
+
+## Next steps
+
+Set up your [Horizon API](references-horizon-api.md).
devtest-labs https://docs.microsoft.com/en-us/azure/devtest-labs/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/security-baseline.md
@@ -178,7 +178,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks **Guidance:** Use privileged access workstations (PAWs) with MFA configured to log into and configure Azure resources. -- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md) **Azure Security Center monitoring:** N/A
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
@@ -31,7 +31,7 @@ This how-to will cover:
* You'll be extending this twin with an additional endpoint and route. You will also be adding another function to your function app from that tutorial. * Follow the Azure Maps [*Tutorial: Use Azure Maps Creator to create indoor maps*](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*. * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you will be displaying on a map.
- * You will need your feature *stateset ID* and Azure Maps *subscription ID*.
+ * You will need your feature *stateset ID* and Azure Maps *subscription key*.
### Topology
@@ -72,7 +72,7 @@ This pattern reads from the room twin directly, rather than the IoT device, whic
## Create a function to update maps
-You're going to create an Event Grid-triggered function inside your function app from the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
+You're going to create an *Event Grid-triggered function* inside your function app from the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
See the following document for reference info: [*Azure Event Grid trigger for Azure Functions*](../azure-functions/functions-bindings-event-grid-trigger.md).
@@ -83,8 +83,8 @@ Replace the function code with the following code. It will filter out only updat
You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-primary-key-for-your-account), and one is your [Azure Maps stateset ID](../azure-maps/tutorial-creator-indoor-maps.md#create-a-feature-stateset). ```azurecli-interactive
-az functionapp config appsettings set --settings "subscription-key=<your-Azure-Maps-primary-subscription-key> -g <your-resource-group> -n <your-App-Service-(function-app)-name>"
-az functionapp config appsettings set --settings "statesetID=<your-Azure-Maps-stateset-ID> -g <your-resource-group> -n <your-App-Service-(function-app)-name>
+az functionapp config appsettings set --name <your-App-Service-(function-app)-name> --resource-group <your-resource-group> --settings "subscription-key=<your-Azure-Maps-primary-subscription-key>"
+az functionapp config appsettings set --name <your-App-Service-(function-app)-name> --resource-group <your-resource-group> --settings "statesetID=<your-Azure-Maps-stateset-ID>"
``` ### View live updates on your map
@@ -94,7 +94,7 @@ To see live-updating temperature, follow the steps below:
1. Begin sending simulated IoT data by running the **DeviceSimulator** project from the Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md). The instructions for this are in the [*Configure and run the simulation*](././tutorial-end-to-end.md#configure-and-run-the-simulation) section. 2. Use [the **Azure Maps Indoor** module](../azure-maps/how-to-use-indoor-module.md) to render your indoor maps created in Azure Maps Creator. 1. Copy the HTML from the [*Example: Use the Indoor Maps Module*](../azure-maps/how-to-use-indoor-module.md#example-use-the-indoor-maps-module) section of the indoor maps [*Tutorial: Use the Azure Maps Indoor Maps module*](../azure-maps/how-to-use-indoor-module.md) to a local file.
- 1. Replace the *tilesetId* and *statesetID* in the local HTML file with your values.
+ 1. Replace the *subscription key*, *tilesetId*, and *statesetID* in the local HTML file with your values.
1. Open that file in your browser. Both samples send temperature in a compatible range, so you should see the color of room 121 update on the map about every 30 seconds.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
@@ -158,7 +158,7 @@ Here is the console output of the above program:
> [!TIP]
> The twin graph is a concept of creating relationships between twins. If you want to view the visual representation of the twin graph, see the [*Visualization*](how-to-manage-graph.md#visualization) section of this article.
-### Create a twin graph from a CSV file
+## Create graph from a CSV file
In practical use cases, twin hierarchies will often be created from data stored in a different database, or perhaps in a spreadsheet or a CSV file. This section illustrates how to read data from a CSV file and create a twin graph out of it.
dms https://docs.microsoft.com/en-us/azure/dms/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/security-baseline.md
@@ -253,7 +253,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
dns https://docs.microsoft.com/en-us/azure/dns/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/security-baseline.md
@@ -165,7 +165,7 @@ You can also enable just-in-time access to administrative accounts using Azure
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-about https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-about.md
@@ -2,7 +2,7 @@
title: What is Azure Event Hubs? - a Big Data ingestion service | Microsoft Docs description: Learn about Azure Event Hubs, a Big Data streaming service that ingests millions of events per second. ms.topic: overview
-ms.date: 06/23/2020
+ms.date: 01/13/2021
---

# Azure Event Hubs — A big data streaming platform and event ingestion service
@@ -61,7 +61,7 @@ Event Hubs contains the following [key components](event-hubs-features.md):
The following figure shows the Event Hubs stream processing architecture:
-![Event Hubs](./media/event-hubs-about/event_hubs_architecture.png)
+![Event Hubs](./media/event-hubs-about/event_hubs_architecture.svg)
## Event Hubs on Azure Stack Hub

Event Hubs on Azure Stack Hub allows you to realize hybrid cloud scenarios. Streaming and event-based solutions are supported, for both on-premises and Azure cloud processing. Whether your scenario is hybrid (connected), or disconnected, your solution can support processing of events/streams at large scale. Your scenario is only bound by the Event Hubs cluster size, which you can provision according to your needs.
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
@@ -100,7 +100,7 @@ The following examples show the consumer group URI convention:
The following figure shows the Event Hubs stream processing architecture:
-![Event Hubs architecture](./media/event-hubs-features/event_hubs_architecture.png)
+![Event Hubs architecture](./media/event-hubs-about/event_hubs_architecture.svg)
### Stream offsets
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-baseline.md
@@ -358,7 +358,7 @@ How to monitor identity and access within Azure Security Center: https://docs.mi
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Event Hub-enabled resources.
-Learn about Privileged Access Workstations: https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+Learn about Privileged Access Workstations: https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure: https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted
expressroute https://docs.microsoft.com/en-us/azure/expressroute/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/security-baseline.md
@@ -299,7 +299,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) enabled to log into and configure your Azure Sentinel-related resources.
-* [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../active-directory/authentication/howto-mfa-getstarted.md)
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/security-baseline.md
@@ -208,7 +208,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations for performing administrative management tasks with your Azure Firewall Manager resources in production environments. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration, including strong authentication, software and hardware baselines, and restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
firewall https://docs.microsoft.com/en-us/azure/firewall/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/security-baseline.md
@@ -274,7 +274,7 @@ You can also enable a Just-In-Time / Just-Enough-Access by using Azure AD Privil
**Guidance**: Use PAWs (privileged access workstations) with multi-factor authentication (MFA) configured to log into and configure Azure Firewall and related resources. -- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
frontdoor https://docs.microsoft.com/en-us/azure/frontdoor/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/security-baseline.md
@@ -84,7 +84,7 @@ Ensure restricted access to management, identity, and security systems that have
Use highly secured user workstations with Azure Bastion for administrative tasks. Choose Azure Active Directory (Azure AD), Microsoft Defender Advanced Threat Protection (ATP), and Microsoft Intune to deploy secure and managed user workstations for administrative tasks. The secured workstations must be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/azure-security-benchmark-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/azure-security-benchmark-baseline.md
@@ -57,7 +57,7 @@ You can also enable a Just-In-Time / Just-Enough-Access solution by using [Azure
**Guidance**: Use PAWs (privileged access workstations) with MFA configured to log into and configure Azure resources.
-* [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [How to enable MFA in Azure](../../../active-directory/authentication/howto-mfa-getstarted.md)
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
@@ -339,8 +339,7 @@ within an **allOf** operation.
### Conditions
-A condition evaluates whether a **field** or the **value** accessor meets certain criteria. The
-supported conditions are:
+A condition evaluates whether a value meets certain criteria. The supported conditions are:
- `"equals": "stringValue"` - `"notEquals": "stringValue"`
@@ -376,14 +375,9 @@ letter, `.` to match any character, and any other character to match that actual
are case-insensitive. Case-insensitive alternatives are available in **matchInsensitively** and **notMatchInsensitively**.
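As an illustrative sketch (an editor's addition, not part of the recorded commit), a `match` condition that requires a name of `st` followed by four digits, using the `#`-matches-a-digit rule described above:

```json
{
  "field": "name",
  "match": "st####"
}
```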
-In an **\[\*\] alias** array field value, each element in the array is evaluated individually with
-logical **and** between elements. For more information, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
- ### Fields
-Conditions are formed by using fields. A field matches properties in the resource request payload
-and describes the state of the resource.
-
+Conditions that evaluate whether the values of properties in the resource request payload meet certain criteria can be formed using a **field** expression.
The following fields are supported:

- `name`
@@ -393,6 +387,7 @@ The following fields are supported:
- `kind`
- `type`
- `location`
+ - Location fields are normalized to support various formats. For example, `East US 2` is considered equal to `eastus2`.
  - Use **global** for resources that are location agnostic.
- `id`
  - Returns the resource ID of the resource that is being evaluated.
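Picking up the location normalization note above, an illustrative sketch (not part of the recorded commit): because location values are normalized, this condition matches a resource whose location is `East US 2` as well as one whose location is `eastus2`:

```json
{
  "field": "location",
  "equals": "eastus2"
}
```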
@@ -417,6 +412,10 @@ The following fields are supported:
> `tags.<tagName>`, `tags[tagName]`, and `tags[tag.with.dots]` are still acceptable ways of
> declaring a tags field. However, the preferred expressions are those listed above.
+> [!NOTE]
+> In **field** expressions referring to **\[\*\] alias**, each element in the array is evaluated individually with logical **and** between elements.
+> For more information, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
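As an illustrative sketch of that behavior (an editor's addition, not part of the recorded commit; the storage `ipRules` alias mirrors the appendix example later in this digest), the following condition is true only when **every** member of the `ipRules` array has `action` set to `Allow`, because each element is evaluated with logical **and**:

```json
{
  "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action",
  "equals": "Allow"
}
```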
#### Use tags with parameters

A parameter value can be passed to a tag field. Passing a parameter to a tag field increases the
@@ -451,9 +450,7 @@ using the `resourcegroup()` lookup function.
### Value
-Conditions can also be formed using **value**. **value** checks conditions against
-[parameters](#parameters), [supported template functions](#policy-functions), or literals. **value**
-is paired with any supported [condition](#conditions).
+Conditions that evaluate whether a value meets certain criteria can be formed using a **value** expression. Values can be literals, the values of [parameters](#parameters), or the returned values of any [supported template functions](#policy-functions).
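As an illustrative sketch (an editor's addition, not part of the recorded commit), a **value** expression can compare the result of a template function to a literal, for example checking a three-character name prefix with the standard `substring` and `field` functions:

```json
{
  "value": "[substring(field('name'), 0, 3)]",
  "equals": "abc"
}
```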
> [!WARNING] > If the result of a _template function_ is an error, policy evaluation fails. A failed evaluation
@@ -558,14 +555,11 @@ evaluation.
### Count
-Conditions that count how many members of an array in the resource payload satisfy a condition
-expression can be formed using **count** expression. Common scenarios are checking whether 'at least
-one of', 'exactly one of', 'all of', or 'none of' the array members satisfy the condition. **count**
-evaluates each [\[\*\] alias](#understanding-the--alias) array member for a condition expression and
-sums the _true_ results, which is then compared to the expression operator. **Count** expressions
-may be added up to three times to a single **policyRule** definition.
+Conditions that count how many members of an array meet certain criteria can be formed using a **count** expression. Common scenarios are checking whether 'at least one of', 'exactly one of', 'all of', or 'none of' the array members satisfy a condition. **Count** evaluates each array member for a condition expression and sums the _true_ results, which is then compared to the expression operator.
-The structure of the **count** expression is:
+#### Field count
+
+Count how many members of an array in the request payload satisfy a condition expression. The structure of **field count** expressions is:
```json
{
@@ -579,13 +573,11 @@ The structure of the **count** expression is:
}
```
-The following properties are used with **count**:
+The following properties are used with **field count**:
-- **count.field** (required): Contains the path to the array and must be an array alias. If the
- array is missing, the expression is evaluated to _false_ without considering the condition
- expression.
-- **count.where** (optional): The condition expression to individually evaluate each [\[\*\]
- alias](#understanding-the--alias) array member of **count.field**. If this property isn't
+- **count.field** (required): Contains the path to the array and must be an array alias.
+- **count.where** (optional): The condition expression to individually evaluate for each [\[\*\]
+ alias](#understanding-the--alias) array member of `count.field`. If this property isn't
provided, all array members with the path of 'field' are evaluated to _true_. Any [condition](../concepts/definition-structure.md#conditions) can be used inside this property. [Logical operators](#logical-operators) can be used inside this property to create complex
@@ -594,9 +586,55 @@ The following properties are used with **count**:
**count.where** condition expression. A numeric [condition](../concepts/definition-structure.md#conditions) should be used.
-For more details on how to work with array properties in Azure Policy, including detailed explanation on how the count expression is evaluated, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+**Field count** expressions can enumerate the same field array up to three times in a single **policyRule** definition.
+
+For more details on how to work with array properties in Azure Policy, including detailed explanation on how the **field count** expression is evaluated, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+
+#### Value count
+Count how many members of an array satisfy a condition. The array can be a literal array or a [reference to array parameter](#using-a-parameter-value). The structure of **value count** expressions is:
+
+```json
+{
+ "count": {
+ "value": "<literal array | array parameter reference>",
+ "name": "<index name>",
+ "where": {
+ /* condition expression */
+ }
+ },
+ "<condition>": "<compare the count of true condition expression array members to this value>"
+}
+```
+
+The following properties are used with **value count**:
+
+- **count.value** (required): The array to evaluate.
+- **count.name** (required): The index name, composed of English letters and digits. Defines a name for the value of the array member evaluated in the current iteration. The name is used for referencing the current value inside the `count.where` condition. Optional when the **count** expression is not a child of another **count** expression. When not provided, the index name is implicitly set to `"default"`.
+- **count.where** (optional): The condition expression to individually evaluate for each array member of `count.value`. If this property isn't provided, all array members are evaluated to _true_. Any [condition](../concepts/definition-structure.md#conditions) can be used inside this property. [Logical operators](#logical-operators) can be used inside this property to create complex evaluation requirements. The value of the currently enumerated array member can be accessed by calling the [current](#the-current-function) function.
+- **\<condition\>** (required): The value is compared to the number of items that met the `count.where` condition expression. A numeric [condition](../concepts/definition-structure.md#conditions) should be used.
+
+The following limits are enforced:
+- Up to 10 **value count** expressions can be used in a single **policyRule** definition.
+- Each **value count** expression can perform up to 100 iterations. This number includes the number of iterations performed by any parent **value count** expressions.
+
+#### The current function
+
+The `current()` function is only available inside the `count.where` condition. It returns the value of the array member that is currently enumerated by the **count** expression evaluation.
+
+**Value count usage**
+
+- `current(<index name defined in count.name>)`. For example: `current('arrayMember')`.
+- `current()`. Allowed only when the **value count** expression is not a child of another **count** expression. Returns the same value as above.
+
+If the value returned by the call is an object, property accessors are supported. For example: `current('objectArrayMember').property`.
+
+**Field count usage**
+
+- `current(<the array alias defined in count.field>)`. For example, `current('Microsoft.Test/resource/enumeratedArray[*]')`.
+- `current()`. Allowed only when the **field count** expression is not a child of another **count** expression. Returns the same value as above.
+- `current(<alias of a property of the array member>)`. For example, `current('Microsoft.Test/resource/enumeratedArray[*].property')`.
-#### Count examples
+#### Field count examples
Example 1: Check if an array is empty
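The body of this example is elided by the diff. As a sketch of the technique (an editor's addition; the `securityRules[*]` alias here is illustrative), a **count** with no `where` condition returns the array's length, so comparing it to zero checks for emptiness:

```json
{
  "count": {
    "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]"
  },
  "equals": 0
}
```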
@@ -682,21 +720,165 @@ expression
}
```
-Example 6: Use `field()` function inside the `where` conditions to access the literal value of the currently evaluated array member. This condition checks that there are no security rules with an even numbered _priority_ value.
+Example 6: Use `current()` function inside the `where` conditions to access the value of the currently enumerated array member in a template function. This condition checks whether a virtual network contains an address prefix that is not under the 10.0.0.0/24 CIDR range.
```json { "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
+ "where": {
+ "value": "[ipRangeContains('10.0.0.0/24', current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
+ "equals": false
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 7: Use `field()` function inside the `where` conditions to access the value of the currently enumerated array member. This condition checks whether a virtual network contains an address prefix that is not under the 10.0.0.0/24 CIDR range.
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
"where": {
- "value": "[mod(first(field('Microsoft.Network/networkSecurityGroups/securityRules[*].priority')), 2)]",
- "equals": 0
+ "value": "[ipRangeContains('10.0.0.0/24', first(field(('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]')))]",
+ "equals": false
} }, "greater": 0 } ```
+#### Value count examples
+
+Example 1: Check if resource name matches any of the given name patterns.
+
+```json
+{
+ "count": {
+ "value": [ "prefix1_*", "prefix2_*" ],
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 2: Check if resource name matches any of the given name patterns. The `current()` function doesn't specify an index name. The outcome is the same as in the previous example.
+
+```json
+{
+ "count": {
+ "value": [ "prefix1_*", "prefix2_*" ],
+ "where": {
+ "field": "name",
+ "like": "[current()]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 3: Check if resource name matches any of the given name patterns provided by an array parameter.
+
+```json
+{
+ "count": {
+ "value": "[parameters('namePatterns')]",
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 4: Check if any of the virtual network address prefixes is not under the list of approved prefixes.
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
+ "where": {
+ "count": {
+ "value": "[parameters('approvedPrefixes')]",
+ "name": "approvedPrefix",
+ "where": {
+ "value": "[ipRangeContains(current('approvedPrefix'), current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
+ "equals": true
+ }
+ },
+ "equals": 0
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 5: Check that all the reserved NSG rules are defined in an NSG. The properties of the reserved NSG rules are defined in an array parameter containing objects.
+
+Parameter value:
+
+```json
+[
+ {
+ "priority": 101,
+ "access": "deny",
+ "direction": "inbound",
+ "destinationPortRange": 22
+ },
+ {
+ "priority": 102,
+ "access": "deny",
+ "direction": "inbound",
+ "destinationPortRange": 3389
+ }
+]
+```
+
+Policy:
+```json
+{
+ "count": {
+ "value": "[parameters('reservedNsgRules')]",
+ "name": "reservedNsgRule",
+ "where": {
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "allOf": [
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].priority",
+ "equals": "[current('reservedNsgRule').priority]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access",
+ "equals": "[current('reservedNsgRule').access]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].direction",
+ "equals": "[current('reservedNsgRule').direction]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange",
+ "equals": "[current('reservedNsgRule').destinationPortRange]"
+ }
+ ]
+ }
+ },
+ "equals": 1
+ }
+ },
+ "equals": "[length(parameters('reservedNsgRules'))]"
+}
+```
+
### Effect

Azure Policy supports the following types of effect:
@@ -773,7 +955,6 @@ The following functions are only available in policy rules:
}
```

-
- `ipRangeContains(range, targetRange)`
  - **range**: [Required] string - String specifying a range of IP addresses.
  - **targetRange**: [Required] string - String specifying a range of IP addresses.
@@ -785,6 +966,8 @@ The following functions are only available in policy rules:
  - CIDR range (examples: `10.0.0.0/24`, `2001:0DB8::/110`)
  - Range defined by start and end IP addresses (examples: `192.168.0.1-192.168.0.9`, `2001:0DB8::-2001:0DB8::3:FFFF`)
+- `current(indexName)`
+ - Special function that can only be used inside [count expressions](#count).
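As an illustrative sketch (an editor's addition, not part of the recorded commit), `ipRangeContains` can be paired with a **value** condition; because `10.0.1.0/24` falls within `10.0.0.0/16`, this evaluates to true:

```json
{
  "value": "[ipRangeContains('10.0.0.0/16', '10.0.1.0/24')]",
  "equals": true
}
```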
#### Policy function example
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/guest-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
@@ -1,7 +1,7 @@
--- title: Learn to audit the contents of virtual machines description: Learn how Azure Policy uses the Guest Configuration client to audit settings inside virtual machines.
-ms.date: 10/14/2020
+ms.date: 01/14/2021
ms.topic: conceptual
---

# Understand Azure Policy's Guest Configuration
@@ -144,21 +144,16 @@ For Arc connected servers in private datacenters, allow traffic using the follow
## Managed identity requirements
-Policy definitions in the initiative
-[Deploy prerequisites to enable Guest Configuration policies on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12794019-7a00-42cf-95c2-882eed337cc8)
-enable a system-assigned managed identity, if one doesn't exist. There are two policy definitions in
-the initiative that manage identity creation. The IF conditions in the policy definitions ensure the
-correct behavior based on the current state of the machine resource in Azure.
+Policy definitions in the initiative _Deploy prerequisites to enable Guest Configuration policies on
+virtual machines_ enable a system-assigned managed identity, if one doesn't exist. There are two
+policy definitions in the initiative that manage identity creation. The IF conditions in the policy
+definitions ensure the correct behavior based on the current state of the machine resource in Azure.
If the machine doesn't currently have any managed identities, the effective policy will be:
-[\[Preview\]: Add system-assigned managed identity to enable Guest Configuration assignments on
-virtual machines with no
-identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e)
+[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e)
If the machine currently has a user-assigned system identity, the effective policy will be:
-[\[Preview\]: Add system-assigned managed identity to enable Guest Configuration assignments on
-virtual machines with a user-assigned
-identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6)
+[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6)
## Guest Configuration definition requirements
@@ -183,9 +178,9 @@ data](../how-to/get-compliance-data.md).
#### Auditing operating system settings following industry baselines
-One initiative in Azure Policy provides the ability to audit operating system settings following a
-"baseline". The definition, _\[Preview\]: Audit Windows VMs that do not match Azure security
-baseline settings_ includes a set of rules based on Active Directory Group Policy.
+One initiative in Azure Policy audits operating system settings following a "baseline". The
+definition, _\[Preview\]: Windows machines should meet requirements for the Azure security baseline_
+includes a set of rules based on Active Directory Group Policy.
Most of the settings are available as parameters. Parameters allow you to customize what is audited. Align the policy with your requirements or map the policy to third-party information such as
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/author-policies-for-arrays https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/author-policies-for-arrays.md
@@ -14,11 +14,8 @@ used in several different ways:
  multiple options
- Part of a [policy rule](../concepts/definition-structure.md#policy-rule) using the conditions **in** or **notIn**
-- Part of a policy rule that evaluates the [\[\*\]
- alias](../concepts/definition-structure.md#understanding-the--alias) to evaluate:
- - Scenarios such as **None**, **Any**, or **All**
- - Complex scenarios with **count**
-- In the [append effect](../concepts/effects.md#append) to replace or add to an existing array
+- Part of a policy rule that counts how many array members satisfy a condition
+- In the [append](../concepts/effects.md#append) and [modify](../concepts/effects.md#modify) effects to update an existing array
This article covers each use by Azure Policy and provides several example definitions.
@@ -113,55 +110,121 @@ To use this string with each SDK, use the following commands:
- REST API: In the _PUT_ [create](/rest/api/resources/policyassignments/create) operation as part of the Request Body as the value of the **properties.parameters** property
-## Array conditions
+## Using arrays in conditions
-The policy rule [conditions](../concepts/definition-structure.md#conditions) that an _array_
-**type** of parameter may be used with is limited to `in` and `notIn`. Take the following policy
-definition with condition `equals` as an example:
+### `In` and `notIn`
+
+The `in` and `notIn` conditions only work with array values. They check the existence of a value in an array. The array can be a literal JSON array or a reference to an array parameter. For example:
```json
{
- "policyRule": {
- "if": {
- "not": {
- "field": "location",
- "equals": "[parameters('allowedLocations')]"
- }
+ "field": "tags.environment",
+ "in": [ "dev", "test" ]
+}
+```
+
+```json
+{
+ "field": "location",
+ "notIn": "[parameters('allowedLocations')]"
+}
+```
+
+### Value count
+
+The [value count](../concepts/definition-structure.md#value-count) expression counts how many array members meet a condition. It provides a way to evaluate the same condition multiple times, using different values on each iteration. For example, the following condition checks whether the resource name matches any pattern from an array of patterns:
+
+```json
+{
+ "count": {
+ "value": [ "test*", "dev*", "prod*" ],
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
},
- "then": {
- "effect": "audit"
- }
- },
- "parameters": {
- "allowedLocations": {
- "type": "Array",
- "metadata": {
- "description": "The list of allowed locations for resources.",
- "displayName": "Allowed locations",
- "strongType": "location"
- }
- }
- }
+ "greater": 0
+}
+```
+
+In order to evaluate the expression, Azure Policy evaluates the `where` condition 3 times, once for each member of `[ "test*", "dev*", "prod*" ]`, counting how many times it was evaluated to `true`. On every iteration, the value of the current array member is paired with the `pattern` index name defined by `count.name`. This value can then be referenced inside the `where` condition by calling a special template function: `current('pattern')`.
+
+| Iteration | `current('pattern')` returned value |
+|:---|:---|
+| 1 | `"test*"` |
+| 2 | `"dev*"` |
+| 3 | `"prod*"` |
+
+The condition is true only if the resulting count is greater than 0.
+
+To make the condition above more generic, use a parameter reference instead of a literal array:
+
+ ```json
+{
+ "count": {
+ "value": "[parameters('patterns')]",
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
+ },
+ "greater": 0
}
```
-Attempting to create this policy definition through the Azure portal leads to an error such as this
-error message:
+When the **value count** expression is not under any other **count** expression, `count.name` is optional and the `current()` function can be used without any arguments:
-- "The policy '{GUID}' could not be parameterized because of validation errors. Please check if
- policy parameters are properly defined. The inner exception 'Evaluation result of language
- expression '[parameters('allowedLocations')]' is type 'Array', expected type is 'String'.'."
+```json
+{
+ "count": {
+ "value": "[parameters('patterns')]",
+ "where": {
+ "field": "name",
+ "like": "[current()]"
+ }
+ },
+ "greater": 0
+}
+```
+
+**Value count** also supports arrays of complex objects, allowing for more complex conditions. For example, the following condition defines a desired tag value for each name pattern and checks whether the resource name matches the pattern, but doesn't have the required tag value:
+
+```json
+{
+ "count": {
+ "value": [
+ { "pattern": "test*", "envTag": "dev" },
+ { "pattern": "dev*", "envTag": "dev" },
+ { "pattern": "prod*", "envTag": "prod" },
+ ],
+ "name": "namePatternRequiredTag",
+ "where": {
+ "allOf": [
+ {
+ "field": "name",
+ "like": "[current('namePatternRequiredTag').pattern]"
+ },
+ {
+ "field": "tags.env",
+ "notEquals": "[current('namePatternRequiredTag').envTag]"
+ }
+ ]
+ }
+ },
+ "greater": 0
+}
+```
-The expected **type** of condition `equals` is _string_. Since **allowedLocations** is defined as
-**type** _array_, the policy engine evaluates the language expression and throws the error. With the
-`in` and `notIn` condition, the policy engine expects the **type** _array_ in the language
-expression. To resolve this error message, change `equals` to either `in` or `notIn`.
+For useful examples, see [value count examples](../concepts/definition-structure.md#value-count-examples).
## Referencing array resource properties

Many use cases require working with array properties in the evaluated resource. Some scenarios require referencing an entire array (for example, checking its length). Others require applying a condition to each individual array member (for example, ensuring that all firewall rules block access from the internet). Understanding the different ways Azure Policy can reference resource properties, and how these references behave when they refer to array properties, is the key to writing conditions that cover these scenarios.

### Referencing resource properties
+
Resource properties can be referenced by Azure Policy using [aliases](../concepts/definition-structure.md#aliases). There are two ways to reference the values of a resource property within Azure Policy:

- Use a [field](../concepts/definition-structure.md#fields) condition to check whether **all** selected resource properties meet a condition. Example:
@@ -240,9 +303,9 @@ If the array contains objects, a `[*]` alias can be used to select the value of
} ```
-This condition is true if the values of all `property` properties in `objectArray` are equal to `"value"`.
+This condition is true if the values of all `property` properties in `objectArray` are equal to `"value"`. For more examples, see [additional \[\*\] alias examples](#appendix--additional--alias-examples).
-When using the `field()` function to reference an array alias, the returned value is an array of all the selected values. This behavior means that the common use case of the `field()` function, the ability to apply template functions to resource property values, is very limited. The only template functions that can be used in this case are the ones that accept array arguments. For example, it's possible to get the length of the array with `[length(field('Microsoft.Test/resourceType/objectArray[*].property'))]`. However, more complex scenarios like applying template function to each array members and comparing it to a desired value are only possible when using the `count` expression. For more information, see [Count expression](#count-expressions).
+When using the `field()` function to reference an array alias, the returned value is an array of all the selected values. This behavior means that the common use case of the `field()` function, the ability to apply template functions to resource property values, is very limited. The only template functions that can be used in this case are the ones that accept array arguments. For example, it's possible to get the length of the array with `[length(field('Microsoft.Test/resourceType/objectArray[*].property'))]`. However, more complex scenarios like applying a template function to each array member and comparing it to a desired value are only possible when using the `count` expression. For more information, see [Field count expression](#field-count-expressions).
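For instance, that length check as a complete **value** condition (an editor's sketch using this article's illustrative `Microsoft.Test` aliases, whose `objectArray` has two members):

```json
{
  "value": "[length(field('Microsoft.Test/resourceType/objectArray[*].property'))]",
  "equals": 2
}
```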
To summarize, see the following example resource content and the selected values returned by various aliases:
@@ -296,9 +359,9 @@ When using the `field()` function on the example resource content, the results a
| `[field('Microsoft.Test/resourceType/objectArray[*].nestedArray')]` | `[[ 1, 2 ], [ 3, 4 ]]` | | `[field('Microsoft.Test/resourceType/objectArray[*].nestedArray[*]')]` | `[1, 2, 3, 4]` |
-## Count expressions
+### Field count expressions
-[Count](../concepts/definition-structure.md#count) expressions count how many array members meet a condition and compare the count to a target value. `Count` is more intuitive and versatile for evaluating arrays compared to `field` conditions. The syntax is:
+[Field count](../concepts/definition-structure.md#field-count) expressions count how many array members meet a condition and compare the count to a target value. `Count` is more intuitive and versatile for evaluating arrays compared to `field` conditions. The syntax is:
```json
{
@@ -310,7 +373,7 @@ When using the `field()` function on the example resource content, the results a
}
```
-When used without a 'where' condition, `count` simply returns the length of an array. With the example resource content from the previous section, the following `count` expression is evaluated to `true` since `stringArray` has three members:
+When used without a `where` condition, `count` simply returns the length of an array. With the example resource content from the previous section, the following `count` expression is evaluated to `true` since `stringArray` has three members:
```json
{
@@ -335,6 +398,7 @@ This behavior also works with nested arrays. For example, the following `count`
The power of `count` is in the `where` condition. When it's specified, Azure Policy enumerates the array members and evaluates each against the condition, counting how many array members evaluated to `true`. Specifically, in each iteration of the `where` condition evaluation, Azure Policy selects a single array member ***i*** and evaluates the resource content against the `where` condition **as if ***i*** is the only member of the array**. Having only one array member available in each iteration provides a way to apply complex conditions on each individual array member. Example:
+
```json
{
  "count": {
@@ -347,7 +411,7 @@ Example:
"equals": 1 } ```
-In order to evaluate the `count` expression, Azure Policy evaluates the `where` condition 3 times, once for each member of `stringArray`, counting how many times it was evaluated to `true`. When the `where` condition refers the the `Microsoft.Test/resourceType/stringArray[*]` array members, instead of selecting all the members of `stringArray`, it will only select a single array member every time:
+In order to evaluate the `count` expression, Azure Policy evaluates the `where` condition 3 times, once for each member of `stringArray`, counting how many times it was evaluated to `true`. When the `where` condition refers to the `Microsoft.Test/resourceType/stringArray[*]` array members, instead of selecting all the members of `stringArray`, it will only select a single array member every time:
| Iteration | Selected `Microsoft.Test/resourceType/stringArray[*]` values | `where` Evaluation result |
|:---|:---|:---|
@@ -358,6 +422,7 @@ In order to evaluate the `count` expression, Azure Policy evaluates the `where`
And thus the `count` will return `1`.

Here's a more complex expression:
+
```json
{
  "count": {
@@ -387,6 +452,7 @@ Here's a more complex expression:
And thus the `count` returns `1`.

The fact that the `where` expression is evaluated against the **entire** request content (with changes only to the array member that is currently being enumerated) means that the `where` condition can also refer to fields outside the array:
+
```json
{
  "count": {
@@ -405,6 +471,7 @@ The fact that the `where` expression is evaluated against the **entire** request
| 2 | `tags.env` => `"prod"` | `true` |

Nested count expressions are also allowed:
+
```json
{
  "count": {
@@ -438,9 +505,33 @@ Nested count expressions are also allowed:
| 2 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value2`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3` | | 2 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value2`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `4` |
-### The `field()` function inside `where` conditions
+#### Accessing current array member with template functions
+
+When using template functions, use the `current()` function to access the value of the current array member or the values of any of its properties. To access the value of the current array member, pass the alias defined in `count.field` or any of its child aliases as an argument to the `current()` function. For example:
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Test/resourceType/objectArray[*]",
+ "where": {
+ "value": "[current('Microsoft.Test/resourceType/objectArray[*].property')]",
+ "like": "value*"
+ }
+ },
+ "equals": 2
+}
+
+```
+
+| Iteration | `current()` returned value | `where` Evaluation result |
+|:---|:---|:---|
+| 1 | The value of `property` in the first member of `objectArray[*]`: `value1` | `true` |
+| 2 | The value of `property` in the second member of `objectArray[*]`: `value2` | `true` |
+
+#### The field function inside where conditions
-The way `field()` functions behave when inside a `where` condition is based on the following concepts:
+The `field()` function can also be used to access the value of the current array member, as long as the **count** expression is not inside an **existence condition** (the `field()` function always refers to the resource evaluated in the **if** condition).
+The behavior of `field()` when referring to the evaluated array is based on the following concepts:
1. Array aliases are resolved into a collection of values selected from all array members.
1. `field()` functions referencing array aliases return an array with the selected values.
1. Referencing the counted array alias inside the `where` condition returns a collection with a single value selected from the array member that is evaluated in the current iteration.
@@ -486,7 +577,7 @@ Therefore, when there's a need to access the value of the counted array alias wi
| 2 | `Microsoft.Test/resourceType/stringArray[*]` => `"b"` </br> `[first(field('Microsoft.Test/resourceType/stringArray[*]'))]` => `"b"` | `true` | | 3 | `Microsoft.Test/resourceType/stringArray[*]` => `"c"` </br> `[first(field('Microsoft.Test/resourceType/stringArray[*]'))]` => `"c"` | `true` |
-For useful examples, see [Count examples](../concepts/definition-structure.md#count-examples).
+For useful examples, see [Field count examples](../concepts/definition-structure.md#field-count-examples).
## Modifying arrays
@@ -509,6 +600,60 @@ The [append](../concepts/effects.md#append) and [modify](../concepts/effects.md#
For more information, see the [append examples](../concepts/effects.md#append-examples).
+## Appendix- additional [*] alias examples
+
+It is recommended to use the [field count expressions](#field-count-expressions) to check whether 'all of' or 'any of' the members of an array in the request content meet a condition. However, for some simple conditions it is possible to achieve the same result by using a field accessor with an array alias (as described in [Referencing the array members collection](#referencing-the-array-members-collection)). This can be useful in policy rules that exceed the limit of allowed **count** expressions. Here are examples for common use cases:
+
+The example policy rule for the scenario table below:
+
+```json
+"policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules",
+ "exists": "true"
+ },
+ <-- Condition (see table below) -->
+ ]
+ },
+ "then": {
+ "effect": "[parameters('effectType')]"
+ }
+}
+```
+
+The **ipRules** array is as follows for the scenario table below:
+
+```json
+"ipRules": [
+ {
+ "value": "127.0.0.1",
+ "action": "Allow"
+ },
+ {
+ "value": "192.168.1.1",
+ "action": "Allow"
+ }
+]
+```
+
+For each condition example below, replace `<field>` with `"field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value"`.
+
+The following outcomes are the result of the combination of the condition and the example policy
+rule and array of existing values above:
+
+|Condition |Outcome | Scenario |Explanation |
+|-|-|-|-|
+|`{<field>,"notEquals":"127.0.0.1"}` |Nothing |None match |One array element evaluates as false (127.0.0.1 != 127.0.0.1) and one as true (127.0.0.1 != 192.168.1.1), so the **notEquals** condition is _false_ and the effect isn't triggered. |
+|`{<field>,"notEquals":"10.0.4.1"}` |Policy effect |None match |Both array elements evaluate as true (10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1), so the **notEquals** condition is _true_ and the effect is triggered. |
+|`"not":{<field>,"notEquals":"127.0.0.1" }` |Policy effect |One or more match |One array element evaluates as false (127.0.0.1 != 127.0.0.1) and one as true (127.0.0.1 != 192.168.1.1), so the **notEquals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. |
+|`"not":{<field>,"notEquals":"10.0.4.1"}` |Nothing |One or more match |Both array elements evaluate as true (10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1), so the **notEquals** condition is _true_. The logical operator evaluates as false (**not** _true_), so the effect isn't triggered. |
+|`"not":{<field>,"Equals":"127.0.0.1"}` |Policy effect |Not all match |One array element evaluates as true (127.0.0.1 == 127.0.0.1) and one as false (127.0.0.1 == 192.168.1.1), so the **Equals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. |
+|`"not":{<field>,"Equals":"10.0.4.1"}` |Policy effect |Not all match |Both array elements evaluate as false (10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1), so the **Equals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. |
+|`{<field>,"Equals":"127.0.0.1"}` |Nothing |All match |One array element evaluates as true (127.0.0.1 == 127.0.0.1) and one as false (127.0.0.1 == 192.168.1.1), so the **Equals** condition is _false_ and the effect isn't triggered. |
+|`{<field>,"Equals":"10.0.4.1"}` |Nothing |All match |Both array elements evaluate as false (10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1), so the **Equals** condition is _false_ and the effect isn't triggered. |
+
## Next steps

- Review examples at [Azure Policy samples](../samples/index.md).
governance https://docs.microsoft.com/en-us/azure/governance/policy/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
@@ -1,7 +1,7 @@
--- title: Overview of Azure Policy description: Azure Policy is a service in Azure, that you use to create, assign and, manage policy definitions in your Azure environment.
-ms.date: 10/05/2020
+ms.date: 01/14/2021
ms.topic: overview
---

# What is Azure Policy?
@@ -126,8 +126,9 @@ If none of the Built-in roles have the permissions required, create a
### Resources covered by Azure Policy
-Azure Policy evaluates all resources in Azure and Arc enabled resources. For certain resource
-providers such as [Guest Configuration](./concepts/guest-configuration.md),
+Azure Policy evaluates all Azure resources at or below subscription-level, including Arc enabled
+resources. For certain resource providers such as
+[Guest Configuration](./concepts/guest-configuration.md),
[Azure Kubernetes Service](../../aks/intro-kubernetes.md), and [Azure Key Vault](../../key-vault/general/overview.md), there's a deeper integration for managing settings and objects. To find out more, see
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apps-install-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-install-applications.md
@@ -32,7 +32,7 @@ The following list shows the published applications:
|[Starburst Presto for Azure HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/starburstdatainc1579800938563.starburst-presto?tab=Overview) |Hadoop |Presto is a fast and scalable distributed SQL query engine. Architected for the separation of storage and compute, Presto is perfect for querying data in Azure Data Lake Storage, Azure Blob Storage, SQL and NoSQL databases, and other data sources. | |[StreamSets Data Collector for HDInsight Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/streamsets.streamsets-data-collector-hdinsight) |Hadoop,HBase,Spark,Kafka |StreamSets Data Collector is a lightweight, powerful engine that streams data in real time. Use Data Collector to route and process data in your data streams. It comes with a 30 day trial license. | |[Trifacta Wrangler Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/trifacta.trifacta-db?tab=Overview) |Hadoop, Spark,HBase |Trifacta Wrangler Enterprise for HDInsight supports enterprise-wide data wrangling for any scale of data. The cost of running Trifacta on Azure is a combination of Trifacta subscription costs plus the Azure infrastructure costs for the virtual machines. |
-|[Unifi Data Platform](https://unifisoftware.com/platform/) |Hadoop,HBase,Storm,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
+|[Unifi Data Platform](https://www.crunchbase.com/organization/unifi-software) |Hadoop,HBase,Storm,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
|[Unraveldata APM](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel-app) |Spark |Unravel Data app for HDInsight Spark cluster. | |[Waterline AI-Driven Data Catalog](https://azuremarketplace.microsoft.com/marketplace/apps/waterline_data.waterline_data) |Spark |Waterline catalogs, organizes, and governs data using AI to auto-tag data with business terms. Waterline's business literate catalog is a critical, success component for self-service analytics, compliance and governance, and IT management initiatives. |
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/security-baseline.md
@@ -422,7 +422,7 @@ https://docs.microsoft.com/azure/security-center/security-center-identity-access
Learn about Privileged Access Workstations:
-https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-create-custom-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-rules.md
@@ -244,7 +244,7 @@ This solution uses a Stream Analytics query to detect when a device stops sendin
| Event Hub namespace | Your Event Hub namespace |
| Event Hub name | Use existing - **centralexport** |
-1. Under **Jobs topology**, select **Outputs**, choose **+ Add**, and then choose **Azure function**.
+1. Under **Jobs topology**, select **Outputs**, choose **+ Add**, and then choose **Azure Function**.
1. Use the information in the following table to configure the output, then choose **Save**:

   | Setting | Value |
@@ -350,4 +350,4 @@ In this how-to guide, you learned how to:
* Create a Stream Analytics query that detects when a device has stopped sending data.
* Send an email notification using the Azure Functions and SendGrid services.
-Now that you know how to create custom rules and notifications, the suggested next step is to learn how to [Extend Azure IoT Central with custom analytics](howto-create-custom-analytics.md).
\ No newline at end of file
+Now that you know how to create custom rules and notifications, the suggested next step is to learn how to [Extend Azure IoT Central with custom analytics](howto-create-custom-analytics.md).
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-set-up-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
@@ -35,9 +35,17 @@ In an IoT Central application, a device template uses a device model to describe
- Design the device template in IoT Central, and then [implement its device model in your device code](concepts-telemetry-properties-commands.md).
- Import a device template from the [Azure Certified for IoT device catalog](https://aka.ms/iotdevcat). Customize the device template to your requirements in IoT Central.
- Author a device model using the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Visual Studio Code has an extension that supports authoring DTDL models. To learn more, see [Install and use the DTDL authoring tools](../../iot-pnp/howto-use-dtdl-authoring-tools.md). Then publish the model to the public model repository. To learn more, see [Device model repository](../../iot-pnp/concepts-model-repository.md). Implement your device code from the model, and connect your real device to your IoT Central application. IoT Central finds and imports the device model from the public repository for you and generates a device template. You can then add any cloud properties, customizations, and dashboards your IoT Central application needs to the device template.
- Author a device model using the DTDL. Implement your device code from the model. Manually import the device model into your IoT Central application, and then add any cloud properties, customizations, and dashboards your IoT Central application needs.
+> [!TIP]
+> IoT Central requires the full model with all the referenced interfaces in the same file. When you import a model from the model repository use the keyword *expanded* to get the full version.
+> For example, [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json).
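As an illustrative sketch (an editor's addition, not part of the recorded commit), this is the shape of a minimal DTDL v2 interface of the kind these tools author; the `dtmi:com:example:Thermostat;1` ID is hypothetical:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "temperature",
      "schema": "double"
    }
  ]
}
```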
+ You can also add device templates to an IoT Central application using the [REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) or the [CLI](howto-manage-iot-central-from-cli.md). Some [application templates](concepts-app-templates.md) already include device templates that are useful in the scenario the application template supports. For example, see [In-store analytics architecture](../retail/store-analytics-architecture.md).
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-tls-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-tls-support.md
@@ -5,7 +5,7 @@
author: jlian ms.service: iot-fundamentals ms.topic: conceptual
- ms.date: 11/25/2020
+ ms.date: 01/14/2021
ms.author: jlian ---
@@ -41,9 +41,16 @@ For added security, configure your IoT Hubs to *only* allow client connections t
* South Central US * West US 2 * US Gov Arizona
-* US Gov Virginia
+* US Gov Virginia (TLS 1.0/1.1 support isn't available in this region - TLS 1.2 enforcement must be enabled or IoT hub creation fails)
-For this purpose, provision a new IoT Hub in any of the supported regions and set the `minTlsVersion` property to `1.2` in your Azure Resource Manager template's IoT hub resource specification:
+To enable TLS 1.2 enforcement, follow the steps in [Create IoT hub in Azure portal](iot-hub-create-through-portal.md), except:
+
+- Choose a **Region** from one in the list above.
+- Under **Management -> Advanced -> Transport Layer Security (TLS) -> Minimum TLS version**, select **1.2**. This setting only appears for IoT hubs created in supported regions.
+
+ :::image type="content" source="media/iot-hub-tls-12-enforcement.png" alt-text="Screenshot showing how to turn on TLS 1.2 enforcement during IoT hub creation":::
+
+To use an ARM template for creation, provision a new IoT Hub in any of the supported regions and set the `minTlsVersion` property to `1.2` in the resource specification:
```json {
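For reference, a minimal sketch of such a resource specification follows; the hub name, location, SKU, and API version are illustrative assumptions, not values taken from this commit:

```json
{
  "type": "Microsoft.Devices/IotHubs",
  "apiVersion": "2020-03-01",
  "name": "contoso-tls12-hub",
  "location": "westus2",
  "sku": {
    "name": "S1",
    "capacity": 1
  },
  "properties": {
    "minTlsVersion": "1.2"
  }
}
```

Per the guidance above, the hub must be provisioned in one of the supported regions for the `minTlsVersion` property to take effect.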
@@ -134,4 +141,4 @@ Official SDK support for this public preview feature isn't yet available. To get
## Next steps - To learn more about IoT Hub security and access control, see [Control access to IoT Hub](iot-hub-devguide-security.md).-- To learn more about using X509 certificate for device authentication, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md)\ No newline at end of file
+- To learn more about using X509 certificate for device authentication, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md)
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-baseline.md
@@ -327,7 +327,7 @@ Enable Azure AD MFA to protect your overall Azure tenant, benefiting all service
**Guidance**: Use a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/tutorial-routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing.md
@@ -132,13 +132,13 @@ Now set up the routing for the storage account. You go to the Message Routing pa
2. Select the IoT hub under the list of resources. This tutorial uses **ContosoTestHub**.
-3. Select **Message Routing**. In the **Message Routing** pane, select +**Add**. On the **Add a Route** pane, select +**Add** next to the Endpoint field to show the supported endpoints, as displayed in the following picture:
+3. Select **Message Routing**. In the **Message Routing** pane, select +**Add**. On the **Add a Route** pane, select +**Add endpoint** next to the Endpoint field to show the supported endpoints, as displayed in the following picture:
- ![Start adding an endpoint for a route](./media/tutorial-routing/message-routing-add-a-route-w-storage-ep.png)
+ ![Start adding an endpoint for a route](./media/tutorial-routing/message-routing-add-a-route-with-storage-endpoint-ver2.png)
-4. Select **Blob storage**. You see the **Add a storage endpoint** pane.
+4. Select **Storage**. You see the **Add a storage endpoint** pane.
- ![Adding an endpoint](./media/tutorial-routing/message-routing-add-storage-ep.png)
+ ![Adding an endpoint](./media/tutorial-routing/message-routing-add-storage-endpoint-ver2.png)
5. Enter a name for the endpoint. This tutorial uses **ContosoStorageEndpoint**.
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-baseline.md
@@ -396,7 +396,7 @@ https://docs.microsoft.com/azure/security-center/security-center-identity-access
**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) configured to log into and configure Key Vault enabled resources.
-Privileged Access Workstations: https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+Privileged Access Workstations: https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
Planning a cloud-based Azure AD Multi-Factor Authentication deployment: https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/security-baseline.md
@@ -204,7 +204,7 @@ You should ensure that the credentials (such as password, certificate, or smart
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Depending on your requirements, you can use highly secured user workstations and/or Azure Bastion for performing administrative tasks with Azure Lighthouse in production environments. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration, including strong authentication, software and hardware baselines, and restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-faqs.md
@@ -30,7 +30,7 @@ NAT rules are used to specify a backend resource to route traffic to. For exampl
## What is IP 168.63.129.16? It is the virtual IP address of the host tagged as the Azure infrastructure Load Balancer, where the Azure Health Probes originate. Backend instances must allow traffic from this IP address to successfully respond to health probes. This rule does not interact with access to your Load Balancer frontend. If you're not using the Azure Load Balancer, you can override this rule. You can learn more about service tags [here](../virtual-network/service-tags-overview.md#available-service-tags).
-## Can I use Global VNET peering with Basic Load Balancer?
+## Can I use Global VNet peering with Basic Load Balancer?
No. Basic Load Balancer does not support Global VNET peering. You can use a Standard Load Balancer instead. See the [upgrade from Basic to Standard](upgrade-basic-standard.md) article for seamless upgrade. ## How can I discover the public IP that an Azure VM uses?
@@ -39,6 +39,9 @@ There are many ways to determine the public source IP address of an outbound con
By using the nslookup command, you can send a DNS query for the name myip.opendns.com to the OpenDNS resolver. The service returns the source IP address that was used to send the query. When you run the following query from your VM, the response is the public IP used for that VM: ```nslookup myip.opendns.com resolver1.opendns.com```
+
+## Can I add a VM from the same availability set to different backend pools of a Load Balancer?
+No, this is not possible.
## How do connections to Azure Storage in the same region work? Having outbound connectivity via the scenarios above is not necessary to connect to Storage in the same region as the VM. If you do not want this, use network security groups (NSGs) as explained above. For connectivity to Storage in other regions, outbound connectivity is required. Please note that when connecting to Storage from a VM in the same region, the source IP address in the Storage diagnostic logs will be an internal provider address, and not the public IP address of your VM. If you wish to restrict access to your Storage account to VMs in one or more Virtual Network subnets in the same region, use [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) and not your public IP address when configuring your storage account firewall. Once service endpoints are configured, you will see your Virtual Network private IP address in your Storage diagnostic logs and not the internal provider address.
@@ -48,4 +51,4 @@ Standard Load Balancer and Standard Public IP introduces abilities and different
Using outbound rules allows you fine grained control over all aspects of outbound connectivity. ## Next Steps
-If your question is not listed above, please send feedback about this page with your question. This will create a GitHub issue for the product team to ensure all of our valued customer questions are answered.
\ No newline at end of file
+If your question is not listed above, please send feedback about this page with your question. This will create a GitHub issue for the product team to ensure all of our valued customer questions are answered.
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/security-baseline.md
@@ -442,7 +442,7 @@ For connectors that use Azure Active Directory (Azure AD) OAuth, creating a conn
**Guidance**: Use privileged access workstations (PAW) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources. -- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/train-vowpal-wabbit-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-vowpal-wabbit-model.md
@@ -83,7 +83,7 @@ Vowpal Wabbit supports incremental training by adding new data to an existing mo
2. Connect the previously trained model to the **Pre-trained Vowpal Wabbit Model** input port of the module. 3. Connect the new training data to the **Training data** input port of the module. 4. In the parameters pane of **Train Vowpal Wabbit Model**, specify the format of the new training data, and also the training data file name if the input dataset is a directory.
-5. Select the **Output readable model file ** and **Output inverted hash file** options if the corresponding files need to be saved in the run records.
+5. Select the **Output readable model file** and **Output inverted hash file** options if the corresponding files need to be saved in the run records.
6. Submit the pipeline. 7. Select the module and select **Register dataset** under **Outputs+logs** tab in the right pane, to preserve the updated model in your Azure Machine Learning workspace. If you don't specify a new name, the updated model overwrites the existing saved model.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-deep-learning-vs-machine-learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.topic: conceptual ms.author: lazzeri author: FrancescaLazzeri
-ms.date: 12/15/2020
+ms.date: 01/14/2021
ms.custom: contperf-fy21q1,contperfq1 ---
@@ -52,7 +52,7 @@ The following table compares the two techniques in more detail:
| **Execution time** | Takes comparatively little time to train, ranging from a few seconds to a few hours. | Usually takes a long time to train because a deep learning algorithm involves many layers. | | **Output** | The output is usually a numerical value, like a score or a classification. | The output can have multiple formats, like a text, a score or a sound. |
-## Transfer learning
+## What is transfer learning?
Training deep learning models often requires large amounts of training data, high-end compute resources (GPU, TPU), and a longer training time. In scenarios when you don't have any of these available to you, you can shortcut the training process using a technique known as *transfer learning.*
@@ -60,7 +60,7 @@ Transfer learning is a technique that applies knowledge gained from solving one
Due to the structure of neural networks, the first set of layers usually contains lower-level features, whereas the final set of layers contains higher-level features that are closer to the domain in question. By repurposing the final layers for use in a new domain or problem, you can significantly reduce the amount of time, data, and compute resources needed to train the new model. For example, if you already have a model that recognizes cars, you can repurpose that model using transfer learning to also recognize trucks, motorcycles, and other kinds of vehicles.
-Learn how to apply transfer learning for image classification using an open-source framework in Azure Machine Learning : [Classify images by using a Pytorch model](./how-to-train-pytorch.md?WT.mc_id=docs-article-lazzeri).
+Learn how to apply transfer learning for image classification using an open-source framework in Azure Machine Learning: [Train a deep learning PyTorch model using transfer learning](./how-to-train-pytorch.md?WT.mc_id=docs-article-lazzeri).
## Deep learning use cases
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-differential-privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
@@ -9,6 +9,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
+ms.custom: responsible-ml
#intent: As a data scientist, I want to know what differential privacy is and how SmartNoise can help me implement a differentially private system. ---
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-fairness-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-fairness-ml.md
@@ -9,6 +9,7 @@ ms.topic: conceptual
ms.author: luquinta author: luisquintanilla ms.date: 08/05/2020
+ms.custom: responsible-ml
#Customer intent: As a data scientist, I want to learn about assessing and mitigating fairness in machine learning models. ---
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-open-source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-open-source.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.topic: conceptual author: luisquintanilla ms.author: luquinta
-ms.date: 12/16/2020
+ms.date: 01/14/2021
--- # Open-source integration with Azure Machine Learning projects
@@ -37,11 +37,11 @@ Open-source machine learning algorithms known as neural networks, a subset of ma
Open-source deep learning frameworks and how-to guides include:
- * [PyTorch](https://github.com/pytorch/pytorch): [Train a deep learning image classification model using transfer learning in PyTorch](how-to-train-pytorch.md)
+ * [PyTorch](https://github.com/pytorch/pytorch): [Train a deep learning image classification model using transfer learning](how-to-train-pytorch.md)
* [TensorFlow](https://github.com/tensorflow/tensorflow): [Recognize handwritten digits using TensorFlow](how-to-train-tensorflow.md) * [Keras](https://github.com/keras-team/keras): [Build a neural network to analyze images using Keras](how-to-train-keras.md)
-Training a deep learning model from scratch often requires large amounts of time, data, and compute resources. You can shortcut the training process by using transfer learning. Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. This means you can take an existing model repurpose it. See the [deep learning article](concept-deep-learning-vs-machine-learning.md#transfer-learning) to learn more about transfer learning.
+Training a deep learning model from scratch often requires large amounts of time, data, and compute resources. You can shortcut the training process by using transfer learning. Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. This means you can take an existing model and repurpose it. See the [deep learning vs. machine learning article](concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) to learn more about transfer learning.
### Reinforcement learning: Ray RLLib
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-responsible-ml.md
@@ -9,6 +9,7 @@ ms.topic: conceptual
ms.author: luquinta author: luisquintanilla ms.date: 12/21/2020
+ms.custom: responsible-ml
#intent: As a data scientist, I want to know learn what responsible machine learning is and how I can use it in Azure Machine Learning ---
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-differential-privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-differential-privacy.md
@@ -6,7 +6,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.custom: how-to
+ms.custom: how-to, responsible-ml
ms.author: slbird author: slbird ms.reviewer: luquinta
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-homomorphic-encryption-seal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-homomorphic-encryption-seal.md
@@ -9,7 +9,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.custom: how-to, devx-track-python, deploy
+ms.custom: how-to, devx-track-python, deploy, responsible-ml
#intent: As a data scientist, I want to deploy a service that uses homomorphic encryption to make predictions on encrypted data ---
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-fairness-aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-fairness-aml.md
@@ -10,7 +10,7 @@ author: mesameki
ms.reviewer: luquinta ms.date: 11/16/2020 ms.topic: conceptual
-ms.custom: how-to, devx-track-python
+ms.custom: how-to, devx-track-python, responsible-ml
--- # Use Azure Machine Learning with the Fairlearn open-source package to assess the fairness of ML models (preview)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
@@ -10,7 +10,7 @@ author: minthigpen
ms.reviewer: Luis.Quintanilla ms.date: 07/09/2020 ms.topic: conceptual
-ms.custom: how-to, devx-track-python
+ms.custom: how-to, devx-track-python, responsible-ml
--- # Use the interpretability package to explain ML models & predictions in Python (preview)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
@@ -6,7 +6,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.custom: how-to, automl
+ms.custom: how-to, automl, responsible-ml
ms.author: mithigpe author: minthigpen ms.date: 07/09/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability.md
@@ -6,7 +6,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.custom: how-to
+ms.custom: how-to, responsible-ml
ms.author: mithigpe author: minthigpen ms.reviewer: Luis.Quintanilla
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-pytorch.md
@@ -8,7 +8,7 @@ ms.subservice: core
ms.author: minxia author: mx-iao ms.reviewer: peterlu
-ms.date: 12/10/2020
+ms.date: 01/14/2021
ms.topic: conceptual ms.custom: how-to
@@ -19,7 +19,7 @@ ms.custom: how-to
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
-The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on PyTorch's transfer learning [tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. This shortcuts the training process by requiring less data, time, and compute resources than training from scratch.
+The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on PyTorch's transfer learning [tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. This shortcuts the training process by requiring less data, time, and compute resources than training from scratch. See the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article to learn more about transfer learning.
Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-baseline.md
@@ -368,7 +368,7 @@ You can also enable a just-in-time access to administrative accounts by using Az
a secure, Azure-managed workstation (also known as a Privileged Access Workstation, or PAW) for administrative tasks that require elevated privileges. -- [Understand secure, Azure-managed workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand secure, Azure-managed workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable Azure AD MFA](../active-directory/authentication/howto-mfa-getstarted.md)
mariadb https://docs.microsoft.com/en-us/azure/mariadb/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/security-baseline.md
@@ -358,7 +358,7 @@ How to monitor identity and access within Azure Security Center: https://docs.mi
**Guidance**: Use PAWs (privileged access workstations) with MFA configured to log into and configure Azure resources.
-Learn about Privileged Access Workstations: https://docs.microsoft.com/windows-server/identity/securing-privileged-access/privileged-access-workstations
+Learn about Privileged Access Workstations: https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/
How to enable MFA in Azure: https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted
marketplace https://docs.microsoft.com/en-us/azure/marketplace/gtm-your-marketplace-benefits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-your-marketplace-benefits.md
@@ -60,7 +60,7 @@ If you publish a trial or consulting proof of concept, implementation, or worksh
The table below summarizes the eligibility requirements for list, trial, and consulting offers:
-![Go-To-Market benefits](./media/marketplace-publishers-guide/gtm-eligibility-requirements.png)
+![Go-To-Market benefits](./media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png)
Detailed descriptions for all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
marketplace https://docs.microsoft.com/en-us/azure/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md
@@ -32,19 +32,19 @@ The following user permissions are necessary to complete the steps in this artic
1. Open Dynamics 365 Customer Engagement by going to the URL for your Dynamics instance, such as `https://tenant.crm.dynamics.com`. 1. Select the gear icon on the top bar, and then select **Advanced Settings**.
-
- ![Dynamics 365 Advanced Settings menu item](./media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-advanced-settings.png)
+
+ ![Dynamics 365 Advanced Settings menu item](media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-advanced-settings.png)
1. On the **Settings** page, open the **Settings** menu on the top bar and select **Solutions**. >[!NOTE] >If you don't see the options in the following screen, you don't have the permissions you need to proceed. Contact an admin on your Dynamics 365 Customer Engagement instance.
- ![Dynamics 365 Solutions option](./media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-solutions.png)
+ ![Dynamics 365 Solutions option](media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-solutions.png)
1. On the **Solutions** page, select **Import** and go to where you saved the **Microsoft Marketplace Lead Writer** solution that you downloaded in step 1.
- ![Import button](./media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-crm-import.png)
+ ![Import button](media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-crm-import.png)
1. Complete importing the solution by following the Import solution wizard.
@@ -67,43 +67,43 @@ To configure Azure Active Directory for Dynamics 365 Customer Engagement:
1. Select **Properties**, and copy the **Directory ID** value on the **Directory properties** page. Save this value because you'll need to provide it in the publishing portal to receive leads for your marketplace offer.
- ![Azure Active Directory Properties menu item](./media/commercial-marketplace-lead-management-instructions-dynamics/aad-properties.png)
+ ![Azure Active Directory Properties menu item](media/commercial-marketplace-lead-management-instructions-dynamics/aad-properties.png)
1. Select **App registrations** from the Azure Active Directory left pane, and then select **New registration** on that page. 1. Enter a meaningful name for the application name. 1. Under **Supported account types**, select **Accounts in any organizational directory**.
-1. Under **Redirect URI (optional)**, select **Web** and enter a URI, such as `https://contosoapp1/auth`.
+1. Under **Redirect URI (optional)**, select **Web** and enter a URI, such as `https://contosoapp1/auth`.
1. Select **Register**.
- ![Register an application page](./media/commercial-marketplace-lead-management-instructions-dynamics/register-an-application.png)
+ ![Register an application page](media/commercial-marketplace-lead-management-instructions-dynamics/register-an-application.png)
1. Now that your application is registered, access the application's overview page. Copy the **Application (client) ID** value on that page. Save this value because you'll need to provide it in the publishing portal and in Dynamics 365 to receive leads for your marketplace offer.
- ![Application (client) ID box](./media/commercial-marketplace-lead-management-instructions-dynamics/application-id.png)
+ ![Application (client) ID box](media/commercial-marketplace-lead-management-instructions-dynamics/application-id.png)
1. Select **Certificates & secrets** from the app's left pane, and select the **New client secret** button. Enter a meaningful description for the client secret, and select the **Never** option under **Expires**. Select **Add** to create the client secret.
- ![Certificates & secrets menu item](./media/commercial-marketplace-lead-management-instructions-dynamics/aad-certificates-secrets.png)
+ ![Certificates & secrets menu item](media/commercial-marketplace-lead-management-instructions-dynamics/aad-certificates-secrets.png)
1. As soon as the client secret is successfully created, copy the **Client secret** value. You won't be able to retrieve the value after you leave the page. Save this value because you'll need to provide it in the publishing portal to receive leads for your marketplace offer. 1. Select **API permissions** from the app's left pane, and then select **+ Add a permission**. 1. Select **Microsoft APIs**, and then select **Dynamics CRM** as the API.
-1. Under **What type of permissions does your application require?**, make sure **Delegated permissions** is selected.
+1. Under **What type of permissions does your application require?**, make sure **Delegated permissions** is selected.
1. Under **Permission**, select the **user_impersonation** check box for **Access Common Data Service as organization users**. Then select **Add permissions**.
- ![Add permissions button](./media/commercial-marketplace-lead-management-instructions-dynamics/api-permissions.png)
+ ![Add permissions button](media/commercial-marketplace-lead-management-instructions-dynamics/api-permissions.png)
1. After you complete steps 1 through 14 in the Azure portal, go to your Dynamics 365 Customer Engagement instance by going to the URL, such as `https://tenant.crm.dynamics.com`. 1. Select the gear icon on the top bar, and then select **Advanced Settings**. 1. On the **Settings** page, open the **Settings** menu on the top bar and select **Security**. 1. On the **Security** page, select **Users**. On the **Users** page, select the **Enabled Users** drop-down and then select **Application Users**.
-1. Select **New** to create a new user.
+1. Select **New** to create a new user.
- ![Create a new user](./media/commercial-marketplace-lead-management-instructions-dynamics/application-users.png)
+ ![Create a new user](media/commercial-marketplace-lead-management-instructions-dynamics/application-users.png)
1. In the **New User** pane, make sure that **USER: APPLICATION USER** is selected. Provide a username, full name, and email address for the user that you want to use with this connection. Also, paste in the **Application ID** for the app you created in the Azure portal from step 8. Select **Save & Close** to finish adding the user.
- ![New User pane](./media/commercial-marketplace-lead-management-instructions-dynamics/new-user-info.png)
+ ![New User pane](media/commercial-marketplace-lead-management-instructions-dynamics/new-user-info.png)
1. Go to the "Security settings" section in this article to finish configuring the connection for this user.
@@ -117,7 +117,7 @@ To configure Office 365 for Dynamics 365 Customer Engagement:
1. Select **Add a user**.
- ![Microsoft 365 admin center Add a user option](./media/commercial-marketplace-lead-management-instructions-dynamics/ms-365-add-user.png)
+ ![Microsoft 365 admin center Add a user option](media/commercial-marketplace-lead-management-instructions-dynamics/ms-365-add-user.png)
1. Create a new user for the lead writer service. Configure the following settings:
@@ -128,7 +128,7 @@ To configure Office 365 for Dynamics 365 Customer Engagement:
Save these values because you'll need to provide the **Username** and **Password** values in the publishing portal to receive leads for your marketplace offer.
-![Microsoft 365 admin center New user pane](./media/commercial-marketplace-lead-management-instructions-dynamics/ms-365-new-user.png)
+![Microsoft 365 admin center New user pane](media/commercial-marketplace-lead-management-instructions-dynamics/ms-365-new-user.png)
## Security settings
@@ -137,32 +137,32 @@ The final step is to enable the user you created to write the leads.
1. Open Dynamics 365 Customer Engagement by going to the URL for your Dynamics instance, such as `https://tenant.crm.dynamics.com`. 1. Select the gear icon on the top bar, and then select **Advanced Settings**. 1. On the **Settings** page, open the **Settings** menu on the top bar and select **Security**.
-1. On the **Security** page, select **Users** and select the user that you created in the "Configure user permissions" section of this document. Then select **Manage Roles**.
+1. On the **Security** page, select **Users** and select the user that you created in the "Configure user permissions" section of this document. Then select **Manage Roles**.
- ![Manage Roles tab](./media/commercial-marketplace-lead-management-instructions-dynamics/security-manage-roles.png)
+ ![Manage Roles tab](media/commercial-marketplace-lead-management-instructions-dynamics/security-manage-roles.png)
1. Search for the role name **Microsoft Marketplace Lead Writer**, and select it to assign the user the role.
- ![Manage User Roles pane](./media/commercial-marketplace-lead-management-instructions-dynamics/security-manage-user-roles.png)
+ ![Manage User Roles pane](media/commercial-marketplace-lead-management-instructions-dynamics/security-manage-user-roles.png)
>[!NOTE] >This role is created by the solution that you imported and only has permissions to write the leads and to track the solution version to ensure compatibility. 1. Go back to the **Security** page, and select **Security Roles**. Search for the role **Microsoft Marketplace Lead Writer**, and select it.
- ![Security Roles pane](./media/commercial-marketplace-lead-management-instructions-dynamics/security-roles.png)
+ ![Security Roles pane](media/commercial-marketplace-lead-management-instructions-dynamics/security-roles.png)
-1. In the security role, select the **Core Records** tab. Search for the **User Entity UI Settings** item. Enable the Create, Read, and Write permissions to User (1/4 yellow circle) for that entity by clicking once in each of the corresponding circles.
+1. In the security role, select the **Core Records** tab. Search for the **User Entity UI Settings** item. Enable the Create, Read, and Write permissions to User (1/4 yellow circle) for that entity by selecting the corresponding radio buttons.
- ![Microsoft Marketplace Lead Writer Core Records tab](./media/commercial-marketplace-lead-management-instructions-dynamics/marketplace-lead-writer.png)
+ ![Microsoft Marketplace Lead Writer Core Records tab](media/commercial-marketplace-lead-management-instructions-dynamics/marketplace-lead-writer.png)
-1. On the **Customization** tab, search for the **System Job** item. Enable the Read, Write, and AppendTo permissions to Organization (solid green circles) for that entity by clicking four times in each of the corresponding circles.
+1. On the **Customization** tab, search for the **System Job** item. Enable the Read, Write, and AppendTo permissions to Organization (solid green radio buttons) for that entity by selecting the corresponding radio buttons.
- ![Microsoft Marketplace Lead Writer Customization tab](./media/commercial-marketplace-lead-management-instructions-dynamics/marketplace-lead-writer-customization.png)
+ ![Microsoft Marketplace Lead Writer Customization tab](media/commercial-marketplace-lead-management-instructions-dynamics/marketplace-lead-writer-customization.png)
1. Select **Save and close**.
-## Configure your offer to send leads to Dynamics 365 Customer Engagement
+## Configure your offer to send leads to Dynamics 365 Customer Engagement
To configure the lead management information for your offer in the publishing portal:
@@ -173,25 +173,25 @@ To configure the lead management information for your offer in the publishing po
1. In the Connection details pop-up window, select **Dynamics 365 Customer Engagement** for the lead destination.
- ![Lead destination box](./media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-lead-destination.png)
+ ![Lead destination box](media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-lead-destination.png)
1. Enter the **URL** for the Dynamics 365 instance, such as `https://contoso.crm4.dynamics.com`.
-1. Select the method of **Authentication**, either Azure Active Directory or Office 365.
+1. Select the method of **Authentication**, either Azure Active Directory or Office 365.
1. If you selected **Azure Active Directory**, enter the **Application (client) ID** (for example, `23456052-aaaa-bbbb-8662-1234df56788f`), **Directory ID** (for example, `12345678-8af1-4asf-1234-12234d01db47`), and **Client secret** (for example, `1234ABCDEDFRZ/G/FdY0aUABCEDcqhbLn/ST122345nBc=`).
- ![Authentication with Azure Active Directory selected](./media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-application-id.png)
+ ![Authentication with Azure Active Directory selected](media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-application-id.png)
1. If you selected **Office 365**, enter the **User name** (for example, `contoso@contoso.onmicrosoft.com`) and **Password** (for example, `P@ssw0rd`).
- ![Office 365 User name box](./media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-authentication.png)
+ ![Office 365 User name box](media/commercial-marketplace-lead-management-instructions-dynamics/connection-details-authentication.png)
1. For **Contact email**, enter email addresses for people in your company who should receive email notifications when a new lead is received. You can enter multiple email addresses by separating them with semicolons. 1. Select **OK**. To make sure you've successfully connected to a lead destination, select the **Validate** button. If successful, you'll have a test lead in the lead destination.
-![Contact email box](./media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-connection-details.png)
+![Contact email box](media/commercial-marketplace-lead-management-instructions-dynamics/dynamics-connection-details.png)
>[!NOTE]
->You must finish configuring the rest of the offer and publish it before you can receive leads for the offer.
\ No newline at end of file
+>You must finish configuring the rest of the offer and publish it before you can receive leads for the offer.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/policies-terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/policies-terms.md
@@ -21,7 +21,6 @@ Offers on the commercial marketplace must comply with our policies and terms. We
- [Commercial marketplace certification policies](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context) - [Microsoft AppSource and Azure Marketplace review policies](/legal/marketplace/rating-review-policies?context=/azure/marketplace/context/context)-- [Azure Marketplace participation policies](/legal/marketplace/participation-policy?context=/azure/marketplace/context/context) - [Azure Marketplace terms](/legal/marketplace/terms?context=/azure/marketplace/context/context) ## Next steps
migrate https://docs.microsoft.com/en-us/azure/migrate/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/security-baseline.md
@@ -201,7 +201,7 @@ approval is also supported.
**Guidance**: Secured, isolated workstations are critically important for the security of sensitive roles like administrators, developers, and critical service operators. Use highly secured user workstations and/or Azure Bastion for administrative tasks. Use Azure Active Directory, Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, restricted logical and network access. -- [Understand privileged access workstations](../active-directory/devices/concept-azure-managed-workstation.md)
+- [Understand privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [Deploy a privileged access workstation](../active-directory/devices/howto-azure-managed-workstation.md)
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-certificate-rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-certificate-rotation.md
@@ -5,7 +5,7 @@ author: mksuni
ms.author: sumuth ms.service: mysql ms.topic: conceptual
-ms.date: 09/02/2020
+ms.date: 01/13/2021
--- # Understanding the changes in the Root CA change for Azure Database for MySQL
@@ -15,118 +15,144 @@ Azure Database for MySQL will be changing the root certificate for the client ap
>[!NOTE] > Based on the feedback from customers, we have extended the root certificate deprecation for our existing Baltimore Root CA from October 26th, 2020 until February 15, 2021. We hope this extension provides sufficient lead time for our users to implement the client changes if they are impacted.
+> [!NOTE]
+> Bias-free communication
+>
+> Microsoft supports a diverse and inclusionary environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they are currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to be in alignment.
+>
+ ## What update is going to happen? In some cases, applications use a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can only use the predefined certificate to connect to an Azure Database for MySQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports that multiple certificates issued by CA vendors are non-compliant.
-As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL currently uses one of these non-compliant certificates, which client applications use to validate their SSL connections, we need to ensure that appropriate actions are taken (described below) to minimize the potential impact to your MySQL servers.
+As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL currently uses one of these non-compliant certificates, which client applications use to validate their SSL connections, we need to ensure that appropriate actions are taken (described later in this topic) to minimize the potential impact to your MySQL servers.
-The new certificate will be used starting February 15, 2021 (02/15/2021).If you use either CA validation or full validation of the server certificate when connecting from a MySQL client (sslmode=verify-ca or sslmode=verify-full), you need to update your application configuration before February 15, 2021 (03/15/2021).
+The new certificate will be used starting February 15, 2021 (02/15/2021). If you use either CA validation or full validation of the server certificate when connecting from a MySQL client (sslmode=verify-ca or sslmode=verify-full), you need to update your application configuration before February 15, 2021 (02/15/2021).
## How do I know if my database is going to be affected? All applications that use SSL/TLS and verify the root certificate need to update the root certificate. You can identify whether your connections verify the root certificate by reviewing your connection string.-- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.-- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates. -- If using Java connectors and your connection string includes useSSL=false or requireSSL=false, you do not need to update certificates.-- If your connection string does not specify sslmode, you do not need to update certificates.
-If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
-To understand Azure Database for MySQL sslmode review the [SSL mode descriptions](concepts-ssl-connection-security.md#ssl-default-settings).
+* If your connection string includes `sslmode=verify-ca` or `sslmode=verify-identity`, you need to update the certificate.
+* If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.
+* If using Java connectors and your connection string includes useSSL=false or requireSSL=false, you don't need to update certificates.
+* If your connection string doesn't specify sslmode, you don't need to update certificates.
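As a concrete illustration, a connection that verifies the CA, and therefore needs the updated root certificate, might look like the following sketch. It assumes the MySQL 5.7+ command-line client (whose equivalent of `sslmode=verify-ca` is `--ssl-mode=VERIFY_CA`) and hypothetical server, user, and CA-file names:

```console
# The client validates the server certificate against the CA bundle,
# so this connection breaks if the root CA file is outdated.
mysql --host=mydemoserver.mysql.database.azure.com \
      --user=myadmin@mydemoserver \
      --password \
      --ssl-mode=VERIFY_CA \
      --ssl-ca=combined-ca.crt.pem
```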
-To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate, which has been revoked, refer to the [**"What do I need to do to maintain connectivity"**](concepts-certificate-rotation.md#what-do-i-need-to-do-to-maintain-connectivity) section.
+If you're using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates.
+To understand Azure Database for MySQL sslmode, review the [SSL mode descriptions](concepts-ssl-connection-security.md#ssl-default-settings).
+
+To avoid your application's availability being interrupted as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, refer to the [**"What do I need to do to maintain connectivity"**](concepts-certificate-rotation.md#what-do-i-need-to-do-to-maintain-connectivity) section.
## What do I need to do to maintain connectivity
-To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate, which has been revoked, follow the steps below. The idea is to create a new *.pem* file, which combines the current cert and the new one and during the SSL cert validation once of the allowed values will be used. Refer to the steps below:
+To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the following steps. The idea is to create a new *.pem* file that combines the current certificate and the new one, so that during SSL certificate validation one of the allowed values will be used:
+
+* Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from the following links:
-* Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from links below:
- * https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
- * https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
+ * [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem)
+ * [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem)
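For example, on a machine with curl available, both PEM files can be fetched as follows (a sketch; any HTTP client works):

```console
# Download both root CA certificates into the current directory.
curl -O https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
curl -O https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
```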
-* Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates are included.
- * For Java (MySQL Connector/J) users, execute:
+* Generate a combined CA certificate store that includes both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates.
+
+ * For Java (MySQL Connector/J) users, execute:
+
+ ```console
+ keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
+ ```
- ```console
- keytool -importcert -alias MySQLServerCACert -file D:\BaltimoreCyberTrustRoot.crt.pem -keystore truststore -storepass password -noprompt
- ```
+ ```console
+ keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
+ ```
- ```console
- keytool -importcert -alias MySQLServerCACert2 -file D:\DigiCertGlobalRootG2.crt.pem -keystore truststore -storepass password -noprompt
- ```
+ Then replace the original keystore file with the newly generated one:
- Then replace the original keystore file with the new generated one:
- * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
- * System.setProperty("javax.net.ssl.trustStorePassword","password");
+ * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file");
+ * System.setProperty("javax.net.ssl.trustStorePassword","password");
- * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate.
+ * For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
![Azure Database for MySQL .net cert](media/overview/netconnecter-cert.png)
- * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates do not exist, create the missing certificate file.
+ * For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
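A minimal sketch of that step on Linux, assuming SSL_CERT_DIR points at /etc/ssl/certs (the path is an assumption; use whatever your environment sets):

```console
# Copy both root CAs into the directory that .NET probes via SSL_CERT_DIR
# (the path is illustrative).
cp BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem /etc/ssl/certs/
export SSL_CERT_DIR=/etc/ssl/certs
```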
- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files like this format below</b>
+ * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:
- </br>-----BEGIN CERTIFICATE-----
- </br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
- </br>-----END CERTIFICATE-----
- </br>-----BEGIN CERTIFICATE-----
- </br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
- </br>-----END CERTIFICATE-----
+ </br>-----BEGIN CERTIFICATE-----
+ </br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+ </br>-----END CERTIFICATE-----
+ </br>-----BEGIN CERTIFICATE-----
+ </br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
+ </br>-----END CERTIFICATE-----
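On Linux or macOS, concatenating the two PEM files produces a combined bundle in exactly that format (a sketch; the output file name combined-ca.crt.pem is arbitrary):

```console
# Merge the current and the new root CA into one bundle that clients can trust.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-ca.crt.pem
```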
-* Replace the original root CA pem file with the combined root CA file and restart your application/client.
-* In future, after the new certificate deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
+* Replace the original root CA pem file with the combined root CA file and restart your application/client.
+* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
## What can be the impact of not updating the certificate?
-If you are using the Azure Database for MySQL issued certificate as documented here, your application's availability might be interrupted since the database will not be reachable. Depending on your application, you may receive a variety of error messages including but not limited to:
-* Invalid certificate/revoked certificate
-* Connection timed out
+
+If you're using the Azure Database for MySQL issued certificate as documented here, your application's availability might be interrupted since the database will not be reachable. Depending on your application, you may receive various error messages including, but not limited to:
+
+* Invalid certificate/revoked certificate
+* Connection timed out
> [!NOTE]
-> Please do not drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
+> Please do not drop or alter the **Baltimore certificate** until the cert change is made. We will send a communication after the change is done, after which it is safe for you to drop the Baltimore certificate.
## Frequently asked questions
-### 1. If I am not using SSL/TLS, do I still need to update the root CA?
-No actions required if you are not using SSL/TLS.
+### 1. If I'm not using SSL/TLS, do I still need to update the root CA?
+
+No action is required if you're not using SSL/TLS.
-### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?
-No, you do not need to restart the database server to start using the new certificate. This root certificate is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
+### 2. If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
-### 3. What will happen if I do not update the root certificate before February 15, 2021 (02/15/2021)?
-If you do not update the root certificate before February 15, 2021 (02/15/2021), your applications that connect via SSL/TLS and does verification for the root certificate will be unable to communicate to the MySQL database server and application will experience connectivity issues to your MySQL database server.
+No, you don't need to restart the database server to start using the new certificate. This root certificate is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
+
+### 3. What will happen if I don't update the root certificate before February 15, 2021 (02/15/2021)?
+
+If you don't update the root certificate before February 15, 2021 (02/15/2021), applications that connect via SSL/TLS and verify the root certificate will be unable to communicate with the MySQL database server, and your application will experience connectivity issues.
### 4. What is the impact if using App Service with Azure Database for MySQL?
-For Azure app services, connecting to Azure Database for MySQL, we can have two possible scenarios and it depends on how on you are using SSL with your application.
-* This new certificate has been added to App Service at platform level. If you are using the SSL certificates included on App Service platform in your application, then no action is needed.
-* If you are explicitly including the path to SSL cert file in your code, then you would need to download the new cert and update the code to use the new cert. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress)
+
+For Azure app services connecting to Azure Database for MySQL, there are two possible scenarios, depending on how you're using SSL with your application.
+
+* This new certificate has been added to App Service at platform level. If you're using the SSL certificates included on App Service platform in your application, then no action is needed.
+* If you're explicitly including the path to the SSL cert file in your code, you'll need to download the new cert and update the code to use it, as sketched after this list. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress).
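
As a hedged illustration of the second scenario: if your container reads the CA file path from an app setting (the linked WordPress tutorial uses a setting named `MYSQL_SSL_CA`), you can point that setting at the new certificate with the Azure CLI. The resource group, app name, and value below are placeholders:

```bash
# Update the app setting that tells the container which CA file to use; this
# assumes DigiCertGlobalRootG2.crt.pem has already been added to the app's files.
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name <app-name> \
    --settings MYSQL_SSL_CA="DigiCertGlobalRootG2.crt.pem"
```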
### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for MySQL?
-If you are trying to connect to the Azure Database for MySQL using Azure Kubernetes Services (AKS), it is similar to access from a dedicated customers host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
+
+If you're trying to connect to Azure Database for MySQL using Azure Kubernetes Service (AKS), it's similar to access from a dedicated customer host environment. Refer to the steps [here](../aks/ingress-own-tls.md).
### 6. What is the impact if using Azure Data Factory to connect to Azure Database for MySQL?
-For connector using Azure Integration Runtime, the connector leverage certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible to the newly applied certificates and therefore no action is needed.
-For connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
+For a connector using the Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store of the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, so no action is needed.
+
+For a connector using the Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you'll need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
### 7. Do I need to plan a database server maintenance downtime for this change?
-No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change.
+
+No. Because this change affects only the clients that connect to the database server, no maintenance downtime is needed for the database server.
### 8. What if I cannot get a scheduled downtime for this change before February 15, 2021 (02/15/2021)?
-Since the clients used for connecting to the server needs to be updating the certificate information as described in the fix section [here](./concepts-certificate-rotation.md#what-do-i-need-to-do-to-maintain-connectivity), we do not need to a downtime for the server in this case.
+
+Since only the clients used to connect to the server need to update their certificate information, as described in the fix section [here](./concepts-certificate-rotation.md#what-do-i-need-to-do-to-maintain-connectivity), no downtime is needed for the server in this case.
### 9. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
+
For servers created after February 15, 2021 (02/15/2021), you can use the newly issued certificate for your applications to connect using SSL.
-### 10. How often does Microsoft update their certificates or what is the expiry policy?
+### 10. How often does Microsoft update their certificates or what is the expiry policy?
+The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so their support on Azure Database for MySQL is tied to the CA's support of those certificates. However, as in this case, there can be unforeseen bugs in these predefined certificates that need to be fixed as soon as possible.
-### 11. If I am using read replicas, do I need to perform this update only on source server or the read replicas?
-Since this update is a client-side change, if the client used to read data from the replica server, you will need to apply the changes for those clients as well.
+### 11. If I'm using read replicas, do I need to perform this update only on source server or the read replicas?
-### 12. If I am using Data-in replication, do I need to perform any action?
-If you are using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
-* If the data-replication is from a virtual machine (on-prem or Azure virtual machine) to Azure Database for MySQL, you need to check if SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting.
+Since this update is a client-side change, if clients are used to read data from the replica server, you'll need to apply the changes to those clients as well.
+
+### 12. If I'm using Data-in replication, do I need to perform any action?
+
+If you're using [Data-in replication](concepts-data-in-replication.md) to connect to Azure Database for MySQL, there are two things to consider:
+
+* If the data-replication is from a virtual machine (on-premises or an Azure virtual machine) to Azure Database for MySQL, you need to check whether SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following settings.
   ```azurecli-interactive
   Master_SSL_Allowed : Yes
@@ -137,16 +163,19 @@ If you are using [Data-in replication](concepts-data-in-replication.md) to conne
   Master_SSL_Key : ~\azure_mysqlclient_key.pem
   ```
- If you do see the certificate is provided for the CA_file, SSL_Cert and SSL_Key, you will need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem).
+ If you do see that the certificate is provided for the CA_file, SSL_Cert, and SSL_Key, you'll need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem), for example as sketched below.
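
  A minimal sketch of that update, assuming the existing Baltimore root PEM is already on disk (file names and paths are placeholders):

  ```bash
  # Download the new root CA and append it to the existing bundle so clients
  # trust both the old (Baltimore) and new (DigiCert G2) roots during the transition.
  wget https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
  cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-ca.crt.pem
  ```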
-* If the data-replication is between two Azure Database for MySQL, then you will need to reset the replica by executing
+* If the data-replication is between two Azure Database for MySQL, then you'll need to reset the replica by executing
  **CALL mysql.az_replication_change_master** and provide the new dual root certificate as the last parameter, [master_ssl_ca](howto-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication), as in the sketch below.
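
  A hedged sketch of that call, following the parameter order in the linked article; every value below is a placeholder:

  ```sql
  -- Reset the replica, passing the new dual root certificate content as the
  -- last parameter (master_ssl_ca). All values are placeholders.
  CALL mysql.az_replication_change_master(
      'source-server.mysql.database.azure.com', -- source host
      'syncuser@source-server',                 -- replication user
      '<password>',                             -- replication user password
      3306,                                     -- port
      'mysql-bin.000002',                       -- binary log file name
      120,                                      -- binary log position
      '<content of the dual root certificate>'  -- master_ssl_ca
  );
  ```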
### 13. Do we have a server-side query to verify if SSL is being used?
-To verify if you are using SSL connection to connect to the server refer [SSL verification](howto-configure-ssl.md#step-4-verify-the-ssl-connection).
+
+To verify whether you're using an SSL connection to the server, see [SSL verification](howto-configure-ssl.md#step-4-verify-the-ssl-connection).
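
As a quick sketch, the verification in the linked article comes down to a status query like the following; a non-empty `Ssl_cipher` value indicates that the current session is encrypted:

```sql
-- Run from an open session; an empty value means the connection isn't using SSL.
SHOW STATUS LIKE 'Ssl_cipher';
```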
### 14. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
-No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+
+No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
### 15. What if I have further questions?
+
If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforMySQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforMySQL@service.microsoft.com).
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-query-store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-query-store.md
@@ -63,7 +63,7 @@ SELECT * FROM mysql.query_store_wait_stats;
## Finding wait queries

> [!NOTE]
-> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely.
+> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely.
Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
@@ -73,7 +73,7 @@ Here are some examples of how you can gain more insights into your workload usin
|---|---|
|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity, which are executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
-|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.|
+|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
## Configuration options
@@ -102,7 +102,7 @@ Use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-confi
## Views and functions
-View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md#to-create-additional-admin-users-in-azure-database-for-mysql) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
+View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md#to-create-more-admin-users-in-azure-database-for-mysql) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
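
To illustrate, here's a hedged sketch using a hypothetical `orders` table: the two statements differ only in a literal value, so Query Store treats them as the same normalized query and they share a hash:

```sql
-- Both statements normalize to the same shape, roughly
-- SELECT * FROM orders WHERE customer_id = ?, and therefore share one hash.
SELECT * FROM orders WHERE customer_id = 101;
SELECT * FROM orders WHERE customer_id = 202;
```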
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-read-replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-read-replicas.md
@@ -5,7 +5,8 @@ author: savjani
ms.author: pariks ms.service: mysql ms.topic: conceptual
-ms.date: 10/26/2020
+ms.date: 01/13/2021
+ms.custom: references_regions
---

# Read replicas in Azure Database for MySQL
@@ -19,42 +20,45 @@ To learn more about MySQL replication features and issues, see the [MySQL replic
> [!NOTE]
> Bias-free communication
>
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's currently the word that appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
+> Microsoft supports a diverse and inclusionary environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they're currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to be in alignment.
>

## When to use a read replica
-The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the master.
+The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the source.
A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
-Because replicas are read-only, they don't directly reduce write-capacity burdens on the master. This feature isn't targeted at write-intensive workloads.
+Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads.
-The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the master. Use this feature for workloads that can accommodate this delay.
+The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
## Cross-region replication
+
You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its paired region or the universal replica regions. The picture below shows which replica regions are available depending on your source region.
+You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its paired region or the universal replica regions. The following picture shows which replica regions are available depending on your source region.
[ :::image type="content" source="media/concepts-read-replica/read-replica-regions.png" alt-text="Read replica regions":::](media/concepts-read-replica/read-replica-regions.png#lightbox)

### Universal replica regions
+
You can create a read replica in any of the following regions, regardless of where your source server is located. The supported universal replica regions include: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2, West Central US.

### Paired regions
+
In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../best-practices-availability-paired-regions.md).
-If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+If you're using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
However, there are limitations to consider:
-* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions are not available.
-
-* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia.
- This means that a source server in West India can create a replica in South India. However, a source server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India.
+* Regional availability: Azure Database for MySQL is available in France Central, UAE North, and Germany Central. However, their paired regions aren't available.
+
+* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia.
+ This means that a source server in West India can create a replica in South India. However, a source server in South India can't create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region isn't West India.
## Create a replica
@@ -93,7 +97,7 @@ If you see increased replication lag, refer to [troubleshooting replication late
You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
-When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There is no automated failover between a source and its replica.
+When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica.
> [!IMPORTANT]
> The standalone server can't be made into a replica again.
@@ -103,22 +107,22 @@ Learn how to [stop replication to a replica](howto-read-replicas-portal.md).
## Failover
-There is no automated failover between source and replica servers.
+There's no automated failover between source and replica servers.
-Since replication is asynchronous, there is lag between the source and the replica. The amount of lag can be influenced by a number of factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
+Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
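
One hedged way to create such an alert is with the Azure CLI; the scope, threshold, and resource names below are placeholders, and the metric name (`seconds_behind_master`) should be confirmed for your server:

```bash
# Alert when average replica lag exceeds 300 seconds.
az monitor metrics alert create \
    --name replica-lag-alert \
    --resource-group myResourceGroup \
    --scopes /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/mydemoreplica \
    --condition "avg seconds_behind_master > 300" \
    --description "Replica lag above the expected range"
```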
> [!Tip]
> If you fail over to the replica, the lag at the time you delink the replica from the source will indicate how much data is lost.
-Once you have decided you want to failover to a replica,
+After you've decided you want to fail over to a replica:
1. Stop replication to the replica<br/>
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the master. Once you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-
+ This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
+ 2. Point your application to the (former) replica<br/>
- Each server has a unique connection string. Update your application to point to the (former) replica instead of the master.
-
-Once your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
+ Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+
+After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 listed previously.
## Global transaction identifier (GTID)
@@ -134,11 +138,11 @@ The following server parameters are available for configuring GTID:
|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |

> [!NOTE]
-> Once GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
+> After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](howto-server-parameters.md), [Azure CLI](howto-configure-server-parameters-using-cli.md), or [PowerShell](howto-configure-server-parameters-using-powershell.md).
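
For example, a hedged Azure CLI sketch (server and resource group names are placeholders; `gtid_mode` may need to be stepped through intermediate values, so check the linked articles first):

```bash
# Enforce GTID consistency first, then enable GTID mode.
az mysql server configuration set --resource-group myResourceGroup \
    --server-name mydemoserver --name enforce_gtid_consistency --value ON
az mysql server configuration set --resource-group myResourceGroup \
    --server-name mydemoserver --name gtid_mode --value ON
```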
-If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you cannot update `gtid_mode` on the source or replica server(s).
+If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s).
## Considerations and limitations
@@ -159,10 +163,10 @@ A read replica is created as a new Azure Database for MySQL server. An existing
### Replica configuration
-A replica is created by using the same server configuration as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
+A replica is created by using the same server configuration as the source. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier.
> [!IMPORTANT]
-> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the master.
+> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the source.
Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent.
@@ -183,31 +187,33 @@ Users on the source server are replicated to the read replicas. You can only con
To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. The following server parameters are locked on both the source and replica servers:
-- [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html)
-- [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators)
-The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers.
+* [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html)
+* [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators)
-To update one of the above parameters on the source server, please delete replica servers, update the parameter value on the master, and recreate replicas.
+The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers.
+
+To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas.
### GTID

GTID is supported on:
-- MySQL versions 5.7 and 8.0
-- Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage.
-GTID is OFF by default. Once GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
+* MySQL versions 5.7 and 8.0.
+* Servers that support storage up to 16 TB. Refer to the [pricing tier](concepts-pricing-tiers.md#storage) article for the full list of regions that support 16 TB storage.
+
+GTID is OFF by default. After GTID is enabled, you can't turn it back off. If you need to turn GTID OFF, contact support.
-If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you cannot update `gtid_mode` on the source or replica server(s).
+If GTID is enabled on a source server, newly created replicas will also have GTID enabled and use GTID replication. To keep replication consistent, you can't update `gtid_mode` on the source or replica server(s).
### Other
-- Creating a replica of a replica is not supported.
-- In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information.
-- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
-- Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html)
+* Creating a replica of a replica isn't supported.
+* In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. See the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information.
+* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
+* Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html)
## Next steps
-- Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
-- Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
\ No newline at end of file
+* Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
+* Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
\ No newline at end of file
mysql https://docs.microsoft.com/en-us/azure/mysql/flexible-server/concepts-read-replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-read-replicas.md
@@ -5,7 +5,7 @@ author: ambhatna
ms.author: ambhatna ms.service: mysql ms.topic: conceptual
-ms.date: 10/26/2020
+ms.date: 01/14/2021
---

# Read replicas in Azure Database for MySQL - Flexible Server
@@ -19,14 +19,14 @@ On the applications side, the application is typically developed in Java or php
The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate from the source server to up to **10** replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-Replicas are new servers that you manage similar to your source Azure Database for MySQL flexible servers. You will incur billing charges for each read replica based on the provisioned compute in vCores and storage in GB/ month. For more information, refer to [pricing](./concepts-compute-storage.md#pricing).
+Replicas are new servers that you manage similarly to your source Azure Database for MySQL flexible servers. You'll incur billing charges for each read replica based on the provisioned compute in vCores and storage in GB/month. For more information, see [pricing](./concepts-compute-storage.md#pricing).
To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).

> [!NOTE]
> Bias-free communication
>
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's currently the word that appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
+> Microsoft supports a diverse and inclusionary environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they're currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to be in alignment.
>

## Common use cases for read replica
@@ -35,7 +35,7 @@ The read replica feature helps to improve the performance and scale of read-inte
Common scenarios are:
-* Scaling read-workloads coming from the application by using lightweight connection proxy like [ProxySQL](https://aka.ms/ProxySQLLoadBalanceReplica) or using microservices based pattern to scale out your read queries coming from the application to read replicas
+* Scaling read-workloads coming from the application by using a lightweight connection proxy like [ProxySQL](https://aka.ms/ProxySQLLoadBalanceReplica) or a microservices-based pattern to scale out your read queries coming from the application to read replicas
* BI or analytical reporting workloads can use read replicas as the data source for reporting
* IoT or manufacturing scenarios where telemetry information is ingested into the MySQL database engine while multiple read replicas are used for reporting of data
@@ -88,24 +88,24 @@ Learn how to [stop replication to a replica](how-to-read-replicas-portal.md).
## Failover
-There is no automated failover between source and replica servers.
+There is no automated failover between source and replica servers.
Read replicas are meant for scaling read-intensive workloads and aren't designed to meet the high availability needs of a server. There is no automated failover between source and replica servers. Stopping replication on a read replica to bring it online in read-write mode is how this manual failover is performed.
-Since replication is asynchronous, there is lag between the source and the replica. The amount of lag can be influenced by a number of factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
+Since replication is asynchronous, there is lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
> [!Tip]
> If you fail over to the replica, the lag at the time you delink the replica from the source will indicate how much data is lost.
-Once you have decided you want to failover to a replica,
+After you've decided you want to fail over to a replica:
1. Stop replication to the replica<br/>
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. Once you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-
+ This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
+ 2. Point your application to the (former) replica<br/>
    Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-
-Once your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
+
+After your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
## Considerations and limitations
@@ -120,10 +120,10 @@ Once your application is successfully processing reads and writes, you have comp
| Stopped replicas | If you stop replication between a source server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again. |
| Deleted source and standalone servers | When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted. |
| User accounts | Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server. |
-| Server parameters | To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. <br> The following server parameters are locked on both the source and replica servers:<br> - [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) <br> - [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) <br> The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. <br> To update one of the above parameters on the source server, please delete replica servers, update the parameter value on the source, and recreate replicas. |
+| Server parameters | To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. <br> The following server parameters are locked on both the source and replica servers:<br> - [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) <br> - [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) <br> The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. <br> To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas. |
| Other | - Creating a replica of a replica is not supported. <br> - In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information. <br>- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.<br>- Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html) |

## Next steps
-- Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md)
-- Learn how to [create and manage read replicas using the Azure CLI](how-to-read-replicas-cli.md)
\ No newline at end of file
+* Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md)
+* Learn how to [create and manage read replicas using the Azure CLI](how-to-read-replicas-cli.md)
\ No newline at end of file
mysql https://docs.microsoft.com/en-us/azure/mysql/howto-create-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-create-users.md
@@ -5,7 +5,7 @@ author: savjani
ms.author: pariks ms.service: mysql ms.topic: how-to
-ms.date: 10/1/2020
+ms.date: 01/13/2021
---

# Create databases and users in Azure Database for MySQL
@@ -15,33 +15,31 @@ ms.date: 10/1/2020
This article describes how to create users in Azure Database for MySQL. > [!NOTE]
-> **Bias-free communication**
+> Bias-free communication
>
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word *slave*. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's the word that currently appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
+> Microsoft supports a diverse and inclusionary environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they're currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to be in alignment.
>

When you first created your Azure Database for MySQL server, you provided a server admin user name and password. For more information, see this [Quickstart](quickstart-create-mysql-server-database-using-azure-portal.md). You can determine your server admin user name in the Azure portal.
-The server admin user has these privileges:
+The server admin user has these privileges:
SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
-
-After you create an Azure Database for MySQL server, you can use the first server admin account to create additional users and grant admin access to them. You can also use the server admin account to create less privileged users that have access to individual database schemas.
+After you create an Azure Database for MySQL server, you can use the first server admin account to create more users and grant admin access to them. You can also use the server admin account to create less privileged users that have access to individual database schemas.
> [!NOTE]
> The SUPER privilege and DBA role aren't supported. Review the [privileges](concepts-limits.md#privileges--data-manipulation-support) in the limitations article to understand what's not supported in the service.
>
> Password plugins like `validate_password` and `caching_sha2_password` aren't supported by the service.
-
## To create a database with a non-admin user in Azure Database for MySQL

1. Get the connection information and admin user name. To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal.

2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
-
+ If you're not sure how to connect, see [connect and query data for Single Server](./connect-workbench.md) or [connect and query data for Flexible Server](./flexible-server/connect-workbench.md).

3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name. Replace the placeholder value `testdb` with your database name.
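
   The SQL itself is outside this excerpt, but a hedged sketch of what this step runs looks like the following; the database name, user name, and password are placeholders:

   ```sql
   -- Create the database, create a user, and grant privileges on that schema only.
   CREATE DATABASE testdb;
   CREATE USER 'db_user'@'%' IDENTIFIED BY '<strong password>';
   GRANT ALL PRIVILEGES ON testdb.* TO 'db_user'@'%';
   FLUSH PRIVILEGES;
   ```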
@@ -68,25 +66,26 @@ After you create an Azure Database for MySQL server, you can use the first serve
5. Sign in to the server, specifying the designated database and using the new user name and password. This example shows the mysql command line. When you use this command, you'll be prompted for the user's password. Use your own server name, database name, and user name.
- # [Single Server](#tab/single-server)
+ ### [Single Server](#tab/single-server)
   ```azurecli-interactive
   mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user@mydemoserver -p
   ```
- # [Flexible Server](#tab/flexible-server)
+
+ ### [Flexible Server](#tab/flexible-server)
   ```azurecli-interactive
   mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user -p
   ```

---
-## To create additional admin users in Azure Database for MySQL
+## To create more admin users in Azure Database for MySQL
1. Get the connection information and admin user name. To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal.

2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
-
+ If you're not sure how to connect, see [Use MySQL Workbench to connect and query data](./connect-workbench.md).

3. Edit and run the following SQL code. Replace the placeholder value `new_master_user` with your new user name. This syntax grants the listed privileges on all the database schemas (*.*) to the user (`new_master_user` in this example).
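
   The exact statement is outside this excerpt, but a hedged sketch, assuming the admin privilege set listed earlier in this article, looks like this (user name and password are placeholders):

   ```sql
   -- Create a user and grant the admin privilege set on all schemas (*.*),
   -- including GRANT OPTION so the user can manage other users' privileges.
   CREATE USER 'new_master_user'@'%' IDENTIFIED BY '<strong password>';
   GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES,
       INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE,
       REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE,
       ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
       ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
   FLUSH PRIVILEGES;
   ```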
@@ -114,7 +113,8 @@ All Azure Database for MySQL servers are created with a user called "azure_super
## Next steps

Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-- [Create and manage firewall rules on Single Server](howto-manage-firewall-using-portal.md)
-- [ Create and manage firewall rules on Flexible Server](flexible-server/how-to-connect-tls-ssl.md)
+
+* [Create and manage firewall rules on Single Server](howto-manage-firewall-using-portal.md)
+* [Create and manage firewall rules on Flexible Server](flexible-server/how-to-connect-tls-ssl.md)
For more information about user account management, see the MySQL product documentation for [User account management](https://dev.mysql.com/doc/refman/5.7/en/access-control.html), [GRANT syntax](https://dev.mysql.com/doc/refman/5.7/en/grant.html), and [Privileges](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html).
mysql https://docs.microsoft.com/en-us/azure/mysql/howto-data-in-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-data-in-replication.md
@@ -5,7 +5,7 @@ author: savjani
ms.author: pariks ms.service: mysql ms.topic: how-to
-ms.date: 9/29/2020
+ms.date: 01/13/2021
---

# How to configure Azure Database for MySQL Data-in Replication
@@ -15,7 +15,7 @@ This article describes how to set up [Data-in Replication](concepts-data-in-repl
> [!NOTE] > Bias-free communication >
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's currently the word that appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
+> Microsoft supports a diverse and inclusionary environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they're currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to be in alignment.
>

To create a replica in the Azure Database for MySQL service, [Data-in Replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in Replication is based on the binary log (binlog) file position-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
@@ -30,48 +30,56 @@ Review the [limitations and requirements](concepts-data-in-replication.md#limita
> [!IMPORTANT] > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers.
- >
+ >
-1. Create same user accounts and corresponding privileges
+2. Create the same user accounts and corresponding privileges
- User accounts are not replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to manually create all accounts and corresponding privileges on this newly created Azure Database for MySQL server.
+ User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL server.
-1. Add the source server's IP address to the replica's firewall rules.
+3. Add the source server's IP address to the replica's firewall rules.
Update firewall rules using the [Azure portal](howto-manage-firewall-using-portal.md) or [Azure CLI](howto-manage-firewall-using-cli.md).

## Configure the source server
-The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in Replication. This server is the "master" in Data-in replication.
+The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in Replication. This server is the "source" in Data-in replication.
-1. Review the [master server requirements](concepts-data-in-replication.md#requirements) before proceeding.
+1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
-2. Ensure the source server allows both inbound and outbound traffic on port 3306 and that the source server has a **public IP address**, the DNS is publicly accessible, or has a fully qualified domain name (FQDN).
-
- Test connectivity to the source server by attempting to connect from a tool such as the MySQL command-line hosted on another machine or from the [Azure Cloud Shell](../cloud-shell/overview.md) available in the Azure portal.
+2. Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, a publicly accessible DNS, or a fully qualified domain name (FQDN).
- If your organization has strict security policies and will not allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the below command to determine the IP address of your MySQL server.
+ Test connectivity to the source server by attempting to connect from a tool such as the MySQL command line hosted on another machine or from the [Azure Cloud Shell](../cloud-shell/overview.md) available in the Azure portal.
+
+ If your organization has strict security policies and won't allow all IP addresses on the source server to enable communication from Azure to your source server, you can potentially use the following commands to determine the IP address of your MySQL server.
+
+ 1. Sign in to your Azure Database for MySQL using a tool such as the MySQL command line.
- 1. Sign in to your Azure Database for MySQL using a tool like MySQL command-line.
   2. Execute the following query.
+
      ```bash
      mysql> SELECT @@global.redirect_server_host;
      ```
+
      Below is some sample output:
- ```bash
+
+ ```bash
      +-----------------------------------------------------------+
      | @@global.redirect_server_host                             |
      +-----------------------------------------------------------+
      | e299ae56f000.tr1830.westus1-a.worker.database.windows.net |
      +-----------------------------------------------------------+
      ```
- 3. Exit from the MySQL command-line.
- 4. Execute the below in the ping utility to get the IP address.
+
+ 3. Exit from the MySQL command line.
+ 4. Execute the following command in the ping utility to get the IP address.
+ ```bash ping <output of step 2b>
- ```
- For example:
- ```bash
+ ```
+
+ For example:
+
+ ```bash
      C:\Users\testuser> ping e299ae56f000.tr1830.westus1-a.worker.database.windows.net
      Pinging tr1830.westus1-a.worker.database.windows.net (**11.11.111.111**) 56(84) bytes of data.
      ```
@@ -80,8 +88,8 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
> [!NOTE] > This IP address may change due to maintenance/deployment operations. This method of connectivity is only for customers who cannot afford to allow all IP address on 3306 port.
-
-1. Turn on binary logging
+
+3. Turn on binary logging
Check to see if binary logging has been enabled on the source by running the following command:
@@ -89,29 +97,29 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
   SHOW VARIABLES LIKE 'log_bin';
   ```
- If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
+ If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
If `log_bin` is returned with the value "OFF", turn on binary logging by editing your my.cnf file so that `log_bin=ON` and restart your server for the change to take effect.
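
   A minimal my.cnf sketch for that edit (the file location varies by distribution, and the log file base name is a placeholder; a replication source also needs a unique server ID):

   ```bash
   # Excerpt from my.cnf (often /etc/my.cnf or /etc/mysql/my.cnf).
   [mysqld]
   log_bin   = mysql-bin   # enable binary logging
   server_id = 1           # must be unique across source and replicas
   ```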
-1. Source server settings
+4. Source server settings
- Data-in Replication requires parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
+ Data-in Replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL.
   ```sql
   SET GLOBAL lower_case_table_names = 1;
   ```
-1. Create a new replication role and set up permission
+5. Create a new replication role and set up permission
- Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool like MySQL Workbench. Consider whether you plan on replicating with SSL as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
+ Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool like MySQL Workbench. Consider whether you plan on replicating with SSL as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
- In the commands below, the new replication role created is able to access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
+ In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
   **SQL Command**

   *Replication with SSL*
- To require SSL for all user connections, use the following command to create a user:
+ To require SSL for all user connections, use the following command to create a user:
   ```sql
   CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
   ```
@@ -120,7 +128,7 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
*Replication without SSL*
- If SSL is not required for all connections, use the following command to create a user:
+ If SSL isn't required for all connections, use the following command to create a user:
   ```sql
   CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
   ```
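   The GRANT step that follows user creation falls outside the hunks shown; a hedged sketch (MySQL 5.7-style syntax; in MySQL 8.0, `REQUIRE SSL` moves to `CREATE USER`/`ALTER USER`) might look like this:

   ```sql
   -- Sketch: give the new role the replication privilege.
   -- SSL variant:
   GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
   -- Non-SSL variant:
   GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
   ```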
@@ -129,19 +137,19 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
**MySQL Workbench**
- To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel. Then click on **Add Account**.
-
+ To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**.
+ :::image type="content" source="./media/howto-data-in-replication/users_privileges.png" alt-text="Users and Privileges":::
- Type in the username into the **Login Name** field.
+ Type the username into the **Login Name** field.
:::image type="content" source="./media/howto-data-in-replication/syncuser.png" alt-text="Sync user":::
-
- Click on the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then click on **Apply** to create the replication role.
+
+ Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
:::image type="content" source="./media/howto-data-in-replication/replicationslave.png" alt-text="Replication Slave":::
-1. Set the source server to read-only mode
+6. Set the source server to read-only mode
Before you start to dump the database, the server needs to be placed in read-only mode. While in read-only mode, the source can't process any write transactions. Evaluate the impact on your business, and schedule the read-only window during off-peak hours if necessary.
@@ -150,41 +158,42 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
SET GLOBAL read_only = ON;
```
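The hunk truncates this step; consistent with the later `UNLOCK TABLES` command, a common pattern (a sketch, not necessarily the article's exact text) also takes a global read lock so the dump is consistent:

```sql
-- Sketch: block writes and hold a global read lock for a consistent dump.
FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;
```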
-1. Get binary log file name and offset
+7. Get binary log file name and offset
Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
-
+
   ```sql
   show master status;
   ```
- The results should be like following. Make sure to note the binary file name as it will be used in later steps.
+
+ The results should appear similar to the following. Make sure to note the binary file name, as it will be used in later steps.
:::image type="content" source="./media/howto-data-in-replication/masterstatus.png" alt-text="Master Status Results":::
-
+
## Dump and restore source server

1. Determine which databases and tables you want to replicate into Azure Database for MySQL and perform the dump from the source server.
-
- You can use mysqldump to dump databases from your master. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It is unnecessary to dump MySQL library and test library.
-1. Set source server to read/write mode
+ You can use mysqldump to dump databases from your master. For details, refer to [Dump & Restore](concepts-migrate-dump-restore.md). It isn't necessary to dump the MySQL library and test library.
- Once the database has been dumped, change the source MySQL server back to read/write mode.
+2. Set source server to read/write mode.
+
+ After the database has been dumped, change the source MySQL server back to read/write mode.
   ```sql
   SET GLOBAL read_only = OFF;
   UNLOCK TABLES;
   ```
-1. Restore dump file to new server
+3. Restore dump file to new server.
   Restore the dump file to the server created in the Azure Database for MySQL service. Refer to [Dump & Restore](concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL server from the virtual machine.

## Link source and replica servers to start Data-in Replication
-1. Set source server
+1. Set source server.
- All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
+ All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
To link two servers and start replication, sign in to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. You do this by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
@@ -198,61 +207,63 @@ The following steps prepare and configure the MySQL server hosted on-premises, i
   - master_log_file: binary log file name from running `show master status`
   - master_log_pos: binary log position from running `show master status`
   - master_ssl_ca: CA certificate's context. If not using SSL, pass in an empty string.
- - It is recommended to pass this parameter in as a variable. See the following examples for more information.
+
+ It's recommended to pass this parameter in as a variable. For more information, see the following examples.
> [!NOTE]
- > If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. Refer to [manage firewall rules using portal](howto-manage-firewall-using-portal.md) for more information.
-
+ > If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](howto-manage-firewall-using-portal.md).
+ **Examples**
-
+ *Replication with SSL*
-
- The variable `@cert` is created by running the following MySQL commands:
-
+
+ The variable `@cert` is created by running the following MySQL commands:
   ```sql
   SET @cert = '-----BEGIN CERTIFICATE-----
   PLACE YOUR PUBLIC KEY CERTIFICATE CONTEXT HERE
   -----END CERTIFICATE-----'
   ```
-
- Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
-
+
+ Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
   ```sql
   CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
   ```
+
   *Replication without SSL*
-
+ Replication without SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL. This stored procedure is run on the replica.
-
+
   ```sql
   CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
   ```
-1. Filtering
-
+2. Filtering.
+ If you want to skip replicating some tables from your master, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list.
- Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
-
+ Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
+ To update the parameter, you can use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
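   For illustration (a sketch; the pattern values are hypothetical), after setting the parameter you can verify it on the replica:

   ```sql
   -- Example value: skip db1.salaries and every table in db2 (patterns allow % wildcards):
   --   db1.salaries,db2.%
   SHOW VARIABLES LIKE 'replicate_wild_ignore_table';
   ```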
-1. Start replication
+3. Start replication.
- Call the `mysql.az_replication_start` stored procedure to initiate replication.
+ Call the `mysql.az_replication_start` stored procedure to start replication.
   ```sql
   CALL mysql.az_replication_start;
   ```
-1. Check replication status
+4. Check replication status.
Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
-
+
   ```sql
   show slave status;
   ```
- If the state of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value is not "0", it means that the replica is processing updates.
+ If the state of `Slave_IO_Running` and `Slave_SQL_Running` are "yes" and the value of `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates.
## Other stored procedures
@@ -274,11 +285,12 @@ CALL mysql.az_replication_remove_master;
### Skip replication error
-To skip a replication error and allow replication to proceed, use the following stored procedure:
-
+To skip a replication error and allow replication to continue, use the following stored procedure:
```sql
CALL mysql.az_replication_skip_counter;
```

## Next steps
-- Learn more about [Data-in Replication](concepts-data-in-replication.md) for Azure Database for MySQL.
\ No newline at end of file
+
+- Learn more about [Data-in Replication](concepts-data-in-replication.md) for Azure Database for MySQL.
mysql https://docs.microsoft.com/en-us/azure/mysql/howto-redirection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-redirection.md
@@ -17,6 +17,9 @@ Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Databas
For details, refer to how to create an Azure Database for MySQL server using the [Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
+> [!IMPORTANT]
+> Redirection is currently not supported with [Private Link for Azure Database for MySQL](concepts-data-access-security-private-link.md).
+
## Enable redirection

On your Azure Database for MySQL server, configure the `redirect_enabled` parameter to `ON` to allow connections with redirection mode. To update this server parameter, use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
mysql https://docs.microsoft.com/en-us/azure/mysql/howto-troubleshoot-replication-latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-troubleshoot-replication-latency.md
@@ -6,27 +6,30 @@ author: savjani
ms.author: pariks
ms.service: mysql
ms.topic: troubleshooting
-ms.date: 10/25/2020
+ms.date: 01/13/2021
---

# Troubleshoot replication latency in Azure Database for MySQL

[!INCLUDE[applies-to-single-flexible-server](./includes/applies-to-single-flexible-server.md)]
-The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
+The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
-Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-The replication lag on the secondary read replicas depends several factors. These factors include but aren't limited to:
+The replication lag on the secondary read replicas depends on several factors. These factors include but aren't limited to:
- Network latency.
- Transaction volume on the source server.
- Compute tier of the source server and secondary read replica server.
-- Queries running on the source server and secondary server.
+- Queries running on the source server and secondary server.
In this article, you'll learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also understand some common causes of increased replication latency on replica servers.

> [!NOTE]
-> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> Bias-free communication
+>
+> Microsoft supports a diverse and inclusive environment. This article contains references to the words _master_ and _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes these as exclusionary words. The words are used in this article for consistency because they're currently the words that appear in the software. When the software is updated to remove the words, this article will be updated to align.
+>
## Replication concepts
@@ -41,7 +44,7 @@ Azure Database for MySQL provides the metric for replication lag in seconds in [
To understand the cause of increased replication latency, connect to the replica server by using [MySQL Workbench](connect-workbench.md) or [Azure Cloud Shell](https://shell.azure.com). Then run the following command.
->[!NOTE]
+>[!NOTE]
> In your code, replace the example values with your replica server name and admin username. The admin username requires `@\<servername>` for Azure Database for MySQL.

```azurecli-interactive
@@ -86,7 +89,6 @@ Here's a typical output:
>[!div class="mx-imgBorder"]
> :::image type="content" source="./media/howto-troubleshoot-replication-latency/show-status.png" alt-text="Monitoring replication latency":::
-
The output contains a lot of information. Normally, you need to focus on only the rows that the following table describes.

|Metric|Description|
@@ -116,7 +118,7 @@ The following sections address scenarios in which high replication latency is co
### Network latency or high CPU consumption on the source server
-If you see the following values, then replication latency is likely caused by high network latency or high CPU consumption on the source server.
+If you see the following values, then replication latency is likely caused by high network latency or high CPU consumption on the source server.
```
Slave_IO_State: Waiting for master to send event
@@ -126,7 +128,7 @@ Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. m
In this case, the IO thread is running and is waiting on the source server. The source server has already written to binary log file number 20. The replica has received only up to file number 10. The primary factor for high replication latency in this scenario is either network speed or high CPU utilization on the source server.
-In Azure, network latency within a region can typically be measured milliseconds. Across regions, latency ranges from milliseconds to seconds.
+In Azure, network latency within a region can typically be measured in milliseconds. Across regions, latency ranges from milliseconds to seconds.
In most cases, the connection delay between IO threads and the source server is caused by high CPU utilization on the source server. The IO threads are processed slowly. You can detect this problem by using Azure Monitor to check CPU utilization and the number of concurrent connections on the source server.
@@ -142,18 +144,17 @@ Master_Log_File: the binary file sequence is larger then Relay_Master_Log_File,
Relay_Master_Log_File: the file sequence is smaller than Master_Log_File, e.g. mysql-bin.00010
```
-The output shows that the replica can retrieve the binary log behind the source server. But the replica IO thread indicates that the relay log space is full already.
+The output shows that the replica can retrieve the binary log from the source server but is lagging behind. The replica IO thread indicates that the relay log space is already full.
-Network speed isn't causing the delay. The replica is trying to catch up. But the updated binary log size exceeds the upper limit of the relay log space.
+Network speed isn't causing the delay. The replica is trying to catch up. But the updated binary log size exceeds the upper limit of the relay log space.
To troubleshoot this issue, enable the [slow query log](concepts-server-logs.md) on the source server. Use slow query logs to identify long-running transactions on the source server. Then tune the identified queries to reduce the latency on the server.

Replication latency of this sort is commonly caused by the data load on the source server. When source servers have weekly or monthly data loads, replication latency is unfortunately unavoidable. The replica servers eventually catch up after the data load on the source server finishes.
-
### Slowness on the replica server
-If you observe the following values, then the problem might be on the replica server.
+If you observe the following values, then the problem might be on the replica server.
```
Slave_IO_State: Waiting for master to send event
@@ -166,7 +167,7 @@ Exec_Master_Log_Pos: The position of slave reads from master binary log file is
Seconds_Behind_Master: There is latency and the value here is greater than 0
```
-In this scenario, the output shows that both the IO thread and the SQL thread are running well. The replica reads the same binary log file that the source server writes. However, some latency on the replica server reflects the same transaction from the source server.
+In this scenario, the output shows that both the IO thread and the SQL thread are running well. The replica reads the same binary log file that the source server writes. However, there's some latency on the replica server in applying the same transactions from the source server.
The following sections describe common causes of this kind of latency.
@@ -174,13 +175,13 @@ The following sections describe common causes of this kind of latency.
Azure Database for MySQL uses row-based replication. The source server writes events to the binary log, recording changes in individual table rows. The SQL thread then replicates those changes to the corresponding table rows on the replica server. When a table lacks a primary key or unique key, the SQL thread scans all rows in the target table to apply the changes. This scan can cause replication latency.
-In MySQL, the primary key is an associated index that ensures fast query performance because it can't include NULL values. If you use the InnoDB storage engine, the table data is physically organized to do ultra-fast lookups and sorts based on the primary key.
+In MySQL, the primary key is an associated index that ensures fast query performance because it can't include NULL values. If you use the InnoDB storage engine, the table data is physically organized to do ultra-fast lookups and sorts based on the primary key.
We recommend that you add a primary key to tables on the source server before you create the replica server. If replicas already exist, add the primary keys on the source server and then re-create the read replicas to help improve replication latency. Use the following query to find out which tables are missing a primary key on the source server:
-```sql
+```sql
select tab.table_schema as database_name, tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
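The hunk truncates the rest of the query; a self-contained sketch of the same idea (list base tables that have no PRIMARY KEY constraint, excluding the system schemas) might look like this:

```sql
select tab.table_schema as database_name, tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
  on tab.table_schema = tco.table_schema
  and tab.table_name = tco.table_name
  and tco.constraint_type = 'PRIMARY KEY'
where tco.constraint_type is null
  and tab.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
  and tab.table_type = 'BASE TABLE'
order by tab.table_schema, tab.table_name;
```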
@@ -196,19 +197,19 @@ order by tab.table_schema, tab.table_name;
#### Long-running queries on the replica server
-The workload on the replica server can make the SQL thread lag behind the IO thread. Long-running queries on the replica server are one of the common causes of high replication latency. To troubleshoot this problem, enable the [slow query log](concepts-server-logs.md) on the replica server.
+The workload on the replica server can make the SQL thread lag behind the IO thread. Long-running queries on the replica server are one of the common causes of high replication latency. To troubleshoot this problem, enable the [slow query log](concepts-server-logs.md) on the replica server.
Slow queries can increase resource consumption or slow down the server so that the replica can't catch up with the source server. In this scenario, tune the slow queries. Faster queries prevent blockage of the SQL thread and improve replication latency significantly.
-
#### DDL queries on the source server
+
On the source server, a data definition language (DDL) command like [`ALTER TABLE`](https://dev.mysql.com/doc/refman/5.7/en/alter-table.html) can take a long time. While the DDL command is running, thousands of other queries might be running in parallel on the source server. When the DDL is replicated, to ensure database consistency, the MySQL engine runs the DDL in a single replication thread. During this task, all other replicated queries are blocked and must wait until the DDL operation finishes on the replica server. Even online DDL operations cause this delay. DDL operations increase replication latency.
-If you enabled the [slow query log](concepts-server-logs.md) on the source server, you can detect this latency problem by checking for a DDL command that ran on the source server. Through index dropping, renaming, and creating, you can use the INPLACE algorithm for the ALTER TABLE. You might need to copy the table data and rebuild the table.
+If you enabled the [slow query log](concepts-server-logs.md) on the source server, you can detect this latency problem by checking for a DDL command that ran on the source server. For operations like dropping, renaming, and creating indexes, ALTER TABLE can use the INPLACE algorithm; other operations might need to copy the table data and rebuild the table.
-Typically, concurrent DML is supported for the INPLACE algorithm. But you can briefly take an exclusive metadata lock on the table when you prepare and run the operation. So for the CREATE INDEX statement, you can use the clauses ALGORITHM and LOCK to influence the method for table copying and the level of concurrency for reading and writing. You can still prevent DML operations by adding a FULLTEXT index or SPATIAL index.
+Typically, concurrent DML is supported for the INPLACE algorithm, but an exclusive metadata lock on the table might be taken briefly while the operation is prepared and run. So for the CREATE INDEX statement, you can use the ALGORITHM and LOCK clauses to influence the method for table copying and the level of concurrency for reading and writing. Keep in mind that adding a FULLTEXT or SPATIAL index still blocks DML operations.
The following example creates an index by using ALGORITHM and LOCK clauses.
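The example itself falls outside the diff hunk; a minimal sketch (with a hypothetical table and column) of an online index build might look like this:

```sql
-- Sketch: build the index in place and keep the table available for reads and writes.
ALTER TABLE mydb.mytable ADD INDEX idx_col1 (col1), ALGORITHM=INPLACE, LOCK=NONE;
```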
@@ -220,24 +221,25 @@ Unfortunately, for a DDL statement that requires a lock, you can't avoid replica
#### Downgraded replica server
-In Azure Database for MySQL, read replicas use the same server configuration as the source server. You can change the replica server configuration after it has been created.
+In Azure Database for MySQL, read replicas use the same server configuration as the source server. You can change the replica server configuration after it has been created.
-If the replica server is downgraded, the workload can consume more resources, which in turn can lead to replication latency. To detect this problem, use Azure Monitor to check the CPU and memory consumption of the replica server.
+If the replica server is downgraded, the workload can consume more resources, which in turn can lead to replication latency. To detect this problem, use Azure Monitor to check the CPU and memory consumption of the replica server.
In this scenario, we recommend that you keep the replica server's configuration at values equal to or greater than the values of the source server. This configuration allows the replica to keep up with the source server.

#### Improving replication latency by tuning the source server parameters
-In Azure Database for MySQL, by default, replication is optimized to run with parallel threads on replicas. When high-concurrency workloads on the source server cause the replica server to fall behind, you can improve the replication latency by configuring the parameter binlog_group_commit_sync_delay on the source server.
+In Azure Database for MySQL, by default, replication is optimized to run with parallel threads on replicas. When high-concurrency workloads on the source server cause the replica server to fall behind, you can improve the replication latency by configuring the parameter binlog_group_commit_sync_delay on the source server.
-The binlog_group_commit_sync_delay parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit of this parameter is that instead of immediately applying every committed transaction, the source server sends the binary log updates in bulk. This delay reduces IO on the replica and helps improve performance.
+The binlog_group_commit_sync_delay parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit of this parameter is that instead of immediately applying every committed transaction, the source server sends the binary log updates in bulk. This delay reduces IO on the replica and helps improve performance.
-It might be useful to set the binlog_group_commit_sync_delay parameter to 1000 or so. Then monitor the replication latency. Set this parameter cautiously, and use it only for high-concurrency workloads.
+It might be useful to set the binlog_group_commit_sync_delay parameter to 1000 or so. Then monitor the replication latency. Set this parameter cautiously, and use it only for high-concurrency workloads.
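For example (a sketch for a self-managed source server; on Azure Database for MySQL, change server parameters through the portal or CLI instead):

```sql
-- Sketch: wait up to 1000 microseconds before syncing the binary log,
-- so that commits are grouped and shipped to the replica in bulk.
SET GLOBAL binlog_group_commit_sync_delay = 1000;
```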
-> [!IMPORTANT]
+> [!IMPORTANT]
> On the replica server, we recommend setting the binlog_group_commit_sync_delay parameter to 0. Unlike the source server, the replica server won't have high concurrency, and increasing the value of binlog_group_commit_sync_delay on the replica server could inadvertently cause replication lag to increase.
-For low-concurrency workloads that include many singleton transactions, the binlog_group_commit_sync_delay setting can increase latency. Latency can increase because the IO thread waits for bulk binary log updates even if only a few transactions are committed.
+For low-concurrency workloads that include many singleton transactions, the binlog_group_commit_sync_delay setting can increase latency. Latency can increase because the IO thread waits for bulk binary log updates even if only a few transactions are committed.
## Next steps
+
Check out the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
mysql https://docs.microsoft.com/en-us/azure/mysql/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/security-baseline.md
@@ -366,7 +366,7 @@ Separately, control plane access for MySQL is available via REST API and support
**Guidance**: Use privileged access workstations (PAWs) with Multi-Factor Authentication (MFA) configured to log into and configure Azure resources.

-- [Learn about Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+- [Learn about Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
- [How to enable MFA in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/security-baseline.md
@@ -291,7 +291,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you m
**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) enabled to log into and configure your Azure Sentinel-related resources.
-* [Privileged Access Workstations](/windows-server/identity/securing-privileged-access/privileged-access-workstations)
+* [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
* [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../active-directory/authentication/howto-mfa-getstarted.md)
notification-hubs https://docs.microsoft.com/en-us/azure/notification-hubs/xamarin-notification-hubs-ios-push-notification-apns-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/xamarin-notification-hubs-ios-push-notification-apns-get-started.md
@@ -13,7 +13,7 @@ ms.tgt_pltfrm: mobile-xamarin-ios
ms.devlang: dotnet
ms.topic: tutorial
ms.custom: "mvc, devx-track-csharp"
-ms.date: 07/07/2020
+ms.date: 01/12/2021
ms.author: sethm
ms.reviewer: thsomasu
ms.lastreviewed: 05/23/2019
@@ -84,120 +84,53 @@ Completing this tutorial is a prerequisite for all other Notification Hubs tutor
7. In `AppDelegate.cs`, add the following using statement:

    ```csharp
- using WindowsAzure.Messaging;
+ using WindowsAzure.Messaging.NotificationHubs;
    using UserNotifications;
    ```
-8. Declare an instance of `SBNotificationHub`:
+8. Create an implementation of `MSNotificationHubDelegate` in `AppDelegate.cs`:
```csharp
- private SBNotificationHub Hub { get; set; }
- ```
-
-9. In `AppDelegate.cs`, update `FinishedLaunching()` to match the following code:
-
- ```csharp
- public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
+ public class AzureNotificationHubListener : MSNotificationHubDelegate
{
- if (UIDevice.CurrentDevice.CheckSystemVersion(10, 0))
+ public override void DidReceivePushNotification(MSNotificationHub notificationHub, MSNotificationHubMessage message)
{
- UNUserNotificationCenter.Current.RequestAuthorization(UNAuthorizationOptions.Alert | UNAuthorizationOptions.Badge | UNAuthorizationOptions.Sound,
- (granted, error) => InvokeOnMainThread(UIApplication.SharedApplication.RegisterForRemoteNotifications));
- }
- else if (UIDevice.CurrentDevice.CheckSystemVersion(8, 0))
- {
- var pushSettings = UIUserNotificationSettings.GetSettingsForTypes(
- UIUserNotificationType.Alert | UIUserNotificationType.Badge | UIUserNotificationType.Sound,
- new NSSet());
- UIApplication.SharedApplication.RegisterUserNotificationSettings(pushSettings);
- UIApplication.SharedApplication.RegisterForRemoteNotifications();
- }
- else
- {
- UIRemoteNotificationType notificationTypes = UIRemoteNotificationType.Alert | UIRemoteNotificationType.Badge | UIRemoteNotificationType.Sound;
- UIApplication.SharedApplication.RegisterForRemoteNotificationTypes(notificationTypes);
        }
-
- return true;
    }
    ```
-10. In `AppDelegate.cs`, override the `RegisteredForRemoteNotifications()` method:
+9. In `AppDelegate.cs`, update `FinishedLaunching()` to match the following code:
```csharp
- public override void RegisteredForRemoteNotifications(UIApplication application, NSData deviceToken)
+ public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
{
- Hub = new SBNotificationHub(Constants.ListenConnectionString, Constants.NotificationHubName);
-
- Hub.UnregisterAll (deviceToken, (error) => {
- if (error != null)
- {
- System.Diagnostics.Debug.WriteLine("Error calling Unregister: {0}", error.ToString());
- return;
- }
-
- NSSet tags = null; // create tags if you want
- Hub.RegisterNative(deviceToken, tags, (errorCallback) => {
- if (errorCallback != null)
- System.Diagnostics.Debug.WriteLine("RegisterNative error: " + errorCallback.ToString());
- });
- });
- }
- ```
-
-11. In `AppDelegate.cs`, override the `ReceivedRemoteNotification()` method:
+ // Set the Message listener
+ MSNotificationHub.SetDelegate(new AzureNotificationHubListener());
+
+ // Start the SDK
+ MSNotificationHub.Start(ListenConnectionString, NotificationHubName);
- ```csharp
- public override void ReceivedRemoteNotification(UIApplication application, NSDictionary userInfo)
- {
- ProcessNotification(userInfo, false);
+ return true;
    }
    ```
-12. In `AppDelegate.cs`, create the `ProcessNotification()` method:
+10. In `AppDelegate.cs`, implement the `DidReceivePushNotification` method for the `AzureNotificationHubListener` class:
```csharp
- void ProcessNotification(NSDictionary options, bool fromFinishedLaunching)
+ public override void DidReceivePushNotification(MSNotificationHub notificationHub, MSNotificationHubMessage message)
{
- // Check to see if the dictionary has the aps key. This is the notification payload you would have sent
- if (null != options && options.ContainsKey(new NSString("aps")))
- {
- //Get the aps dictionary
- NSDictionary aps = options.ObjectForKey(new NSString("aps")) as NSDictionary;
-
- string alert = string.Empty;
-
- //Extract the alert text
- // NOTE: If you're using the simple alert by just specifying
- // " aps:{alert:"alert msg here"} ", this will work fine.
- // But if you're using a complex alert with Localization keys, etc.,
- // your "alert" object from the aps dictionary will be another NSDictionary.
- // Basically the JSON gets dumped right into a NSDictionary,
- // so keep that in mind.
- if (aps.ContainsKey(new NSString("alert")))
- alert = (aps [new NSString("alert")] as NSString).ToString();
-
- //If this came from the ReceivedRemoteNotification while the app was running,
- // we of course need to manually process things like the sound, badge, and alert.
- if (!fromFinishedLaunching)
- {
- //Manually show an alert
- if (!string.IsNullOrEmpty(alert))
- {
- var myAlert = UIAlertController.Create("Notification", alert, UIAlertControllerStyle.Alert);
- myAlert.AddAction(UIAlertAction.Create("OK", UIAlertActionStyle.Default, null));
- UIApplication.SharedApplication.KeyWindow.RootViewController.PresentViewController(myAlert, true, null);
- }
- }
- }