Updates from: 03/29/2022 01:14:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
The following diagram illustrates Azure Front Door integration:
When using custom domains, consider the following: -- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits) for Azure Front Door.
+- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits) for Azure Front Door.
- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
+
+ Title: Configure Azure Active Directory B2C with eID-Me
+
+description: Learn how to integrate Azure AD B2C authentication with eID-Me for identity verification
+Last updated: 1/30/2022
+zone_pivot_groups: b2c-policy-type
++
+# Configure eID-Me with Azure Active Directory B2C for identity verification
+++
+In this sample article, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [eID-Me](https://bluink.ca). eID-Me is an identity verification and decentralized digital identity solution for Canadian citizens. With eID-Me, Azure AD B2C tenants can strongly verify the identity of their users, obtain verified identity claims during sign-up and sign-in, and support multifactor authentication (MFA) and password-free sign-in using a secure digital identity. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. This solution provides users with a secure sign-up and sign-in experience while reducing fraud.
+++
+## Prerequisites
+
+To get started, you'll need:
+
+- [A Relying Party account with eID-Me](https://bluink.ca/eid-me/solutions/id-verification#contact-form).
+
+- An Azure subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free).
+
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+
+- A [trial or production version](https://bluink.ca/eid-me/download) of eID-Me smartphone apps for users.
+
+- Complete the steps in the article [get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
++
+## Scenario description
+
+eID-Me integrates with Azure AD B2C as an OpenID Connect (OIDC) identity provider. The following components comprise the eID-Me solution with Azure AD B2C:
++
+- **An Azure AD B2C tenant**: Your Azure AD B2C tenant needs to be configured as a Relying Party in eID-Me. This allows the eID-Me identity provider to trust your Azure AD B2C tenant for sign-up and sign-in.
++
+- **An Azure AD B2C tenant application**: Although not strictly required, it's assumed that tenants have an Azure AD B2C tenant application. The application receives the identity claims that Azure AD B2C obtains during an eID-Me transaction.
++
+- **eID-Me smartphone apps**: Users of your Azure AD B2C tenant need to have the eID-Me smartphone app for iOS or Android.
++
+- **Issued eID-Me digital identities**: Before using eID-Me, users need to successfully go through the eID-Me identity proofing process. They need to have been issued a digital identity to the digital wallet within the app. This process is done from home and usually takes minutes provided the users have valid identity documents.
++
+The eID-Me apps also provide strong authentication of the user during any transaction. X.509 public key authentication, using a private signing key contained within the eID-Me digital identity, provides passwordless MFA.
+
+The following diagram shows the identity proofing process, which occurs outside of Azure AD B2C flows.
+
+![Screenshot shows the architecture of an identity proofing process flow in eID-Me](./media/partner-eid-me/partner-eid-me-identity-proofing.png)
+
+| Steps | Description |
+| :- | :-- |
+| 1. | User uploads a selfie capture into the eID-Me smartphone application. |
+| 2. | User scans and uploads a government-issued identification document, such as a passport or driver's license, into the eID-Me smartphone application. |
+| 3. | The eID-Me smartphone application submits this data to eID-Me identity service for verification. |
+| 4. | A digital identity is issued to the user and saved in the application. |
+
+The following architecture diagram shows the implementation.
+
+![Screenshot shows the architecture of an Azure AD B2C integration with eID-Me](./media/partner-eid-me/partner-eid-me-architecture-diagram.png)
+
+| Steps | Description |
+| :- | :-- |
+| 1. | User opens Azure AD B2C's sign-in page, and then signs in or signs up by entering their username. |
+| 2. | User is forwarded to Azure AD B2C's combined sign-in and sign-up policy. |
+| 3. | Azure AD B2C redirects the user to the eID-Me identity router using the OIDC authorization code flow. |
+| 4. | The eID-Me router sends a push notification to the user's mobile app, including all context details of the authentication and authorization request. |
+| 5. | The user reviews the authentication challenge; if accepted, the user is prompted for identity claims, proving the user's identity. |
+| 6. | The challenge response is returned to the eID-Me router. |
+| 7. | The eID-Me router then replies to Azure AD B2C with the authentication result. |
+| 8. | Response from Azure AD B2C is sent as an ID token to the application. |
+| 9. | Based on the authentication result, the user is granted or denied access. |
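The redirect in step 3 is a standard OIDC authorization code flow request. Azure AD B2C constructs this request internally; purely as an illustrative sketch (the endpoint, client ID, and state values below are placeholders, not actual eID-Me values), it amounts to:

```python
from urllib.parse import urlencode

def build_authorization_url(authorize_endpoint, client_id, redirect_uri, state):
    """Builds an OIDC authorization code flow request URL.

    Sketch only: Azure AD B2C builds this request itself from the
    identity provider settings configured later in this article.
    """
    params = {
        "response_type": "code",       # authorization code flow
        "response_mode": "form_post",  # matches the B2C provider settings
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid email profile",
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_authorization_url(
    "https://idp.example.com/authorize",  # placeholder endpoint
    "my-client-id",
    "https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp",
    "abc123",
)
```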
++
+## Onboard with eID-Me
+
+[Contact eID-Me](https://bluink.ca/contact) and configure a test or production environment to set up Azure AD B2C tenants as a Relying Party. Tenants must determine what identity claims they'll need from their consumers as they sign up using eID-Me.
+
+## Integrate eID-Me with Azure AD B2C
+
+### Step 1 - Configure an application in eID-Me
+
+To configure your tenant application as a Relying Party in eID-Me, supply the following information to eID-Me:
+
+| Property | Description |
+| :--- | :--- |
+| Name | Azure AD B2C/your desired application name |
+| Domain | name.onmicrosoft.com |
+| Redirect URIs | https://jwt.ms |
+| Redirect URLs | https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp<br>For example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
+| URL for application home page | Will be displayed to the end user |
+| URL for application privacy policy | Will be displayed to the end user |
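The redirect URL pattern in the table can be derived from the tenant name. As an illustration (the helper function is hypothetical, not part of any eID-Me or Azure tooling):

```python
def oauth_redirect_url(tenant_name, custom_domain=None):
    """Returns the OAuth reply URL to register with eID-Me.

    tenant_name is the B2C tenant's short name (e.g. "fabrikam");
    custom_domain, if provided, replaces <tenant>.b2clogin.com.
    """
    host = custom_domain or f"{tenant_name}.b2clogin.com"
    return f"https://{host}/{tenant_name}.onmicrosoft.com/oauth2/authresp"

# Matches the example in the table above.
assert oauth_redirect_url("fabrikam") == (
    "https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp"
)
```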
+
+eID-Me will provide a Client ID and a Client Secret once the Relying Party has been configured with eID-Me.
+
+>[!NOTE]
+>You'll need the Client ID and Client Secret later to configure the identity provider in Azure AD B2C.
++
+### Step 2 - Add a new Identity provider in Azure AD B2C
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
+
+3. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
+
+4. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**.
+
+5. Select **New OpenID Connect Provider**.
+
+6. Select **Add**.
+
+### Step 3 - Configure an Identity provider
+
+To configure an identity provider, follow these steps:
+
+1. Select **Identity provider type** > **OpenID Connect**
+
+2. Fill out the form to set up the Identity provider:
+
+ | Property | Value |
+  | :--- | :--- |
+ | Name | Enter eID-Me Passwordless/a name of your choice |
+ | Client ID | Provided by eID-Me |
+ | Client Secret | Provided by eID-Me |
+ | Scope | openid email profile |
+ | Response type | code |
+ | Response mode | form post |
+
+3. Select **OK**.
+
+4. Select **Map this identity provider's claims**.
+
+5. Fill out the form to map the Identity provider:
+
+ | Property | Value |
+ | :-- | :- |
+ | User ID | sub |
+ | Display name | name |
+ | Given name | given_name |
+ | Surname | family_name |
+ | Email | email |
+
+6. Select **Save** to complete the setup for your new OIDC Identity provider.
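Azure AD B2C performs the claim mapping above internally. Purely as an illustration (the dictionary and function are hypothetical), the mapping amounts to projecting the provider's OIDC claims onto B2C claim names:

```python
# B2C claim name -> OIDC claim returned by the identity provider,
# mirroring the portal mapping above. Illustration only.
CLAIM_MAPPING = {
    "userId": "sub",
    "displayName": "name",
    "givenName": "given_name",
    "surname": "family_name",
    "email": "email",
}

def map_claims(id_token_claims: dict) -> dict:
    """Projects ID token claims into B2C claim names (sketch)."""
    return {b2c: id_token_claims.get(oidc)
            for b2c, oidc in CLAIM_MAPPING.items()}
```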
+
+### Step 4 - Configure multi-factor authentication
+
+eID-Me is a decentralized digital identity with strong two-factor user authentication built in. Since eID-Me is already a multi-factor authenticator, you don't need to configure any multi-factor authentication settings in your user flows when using eID-Me. eID-Me offers a fast and simple user experience, which also eliminates the need for any additional passwords.
+
+### Step 5 - Create a user flow policy
+
+You should now see eID-Me as a new OIDC Identity provider listed within your B2C identity providers.
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+
+2. Select **New user flow**.
+
+3. Select **Sign up and sign in** > **Version** > **Create**.
+
+4. Enter a **Name** for your policy.
+
+5. In the Identity providers section, select your newly created eID-Me Identity provider.
+
+6. Select **None** for Local Accounts to disable email and password-based authentication.
+
+7. Select **Run user flow**.
+
+8. In the form, enter the Reply URL, such as `https://jwt.ms`.
+
+9. The browser is redirected to the eID-Me sign-in page. Enter the account name registered during user registration. The user receives a push notification on the mobile device where the eID-Me application is installed; upon opening the notification, the user is presented with an authentication challenge.
+
+10. Once the authentication challenge is accepted, the browser redirects the user to the reply URL.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [eID-Me and Azure AD B2C integration guide](https://bluink.ca/eid-me/azure-b2c-integration-guide)
+
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+++
+>[!NOTE]
+>In Azure AD B2C, [**custom policies**](./custom-policy-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
+
+### Step 2 - Create a policy key
+
+Store the client secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+5. On the Overview page, select **Identity Experience Framework**.
+
+6. Select **Policy Keys** and then select **Add**.
+
+7. For **Options**, choose `Manual`.
+
+8. Enter a **Name** for the policy key. For example, `eIDMeClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+
+9. In **Secret**, enter your client secret that you previously recorded.
+
+10. For **Key usage**, select `Signature`.
+
+11. Select **Create**.
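The portal steps above can also be automated through the Microsoft Graph beta `trustFramework/keySets` API. A minimal sketch, showing request construction only (authentication and error handling are omitted; the key-set name assumes the `B2C_1A_` prefix noted in step 8):

```python
import json

GRAPH_BETA = "https://graph.microsoft.com/beta"

def build_upload_secret_request(keyset_id: str, secret: str):
    """Builds the Graph beta request that uploads a client secret into a
    B2C policy key set -- the API equivalent of the portal steps above.

    Returns (url, json_body). Sending the request requires an access
    token with TrustFrameworkKeySet.ReadWrite.All permission.
    """
    url = f"{GRAPH_BETA}/trustFramework/keySets/{keyset_id}/uploadSecret"
    body = json.dumps({"use": "sig", "k": secret})  # "sig" = signature key usage
    return url, body

url, body = build_upload_secret_request("B2C_1A_eIDMeClientSecret", "<client-secret>")
```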
+
+### Step 3 - Configure eID-Me as an Identity provider
+
+To enable users to sign in using their eID-Me decentralized identity, you need to define eID-Me as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that Azure AD B2C uses to verify that a specific user has authenticated using a digital ID available on their device, proving the user's identity.
+
+You can define eID-Me as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy.
+
+1. Open the `TrustFrameworkExtensions.xml`.
+
+2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+
+3. Add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <ClaimsProvider>
+ <Domain>eID-Me</Domain>
+ <DisplayName>eID-Me</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="eID-Me-OIDC">
+ <!-- The text in the following DisplayName element is shown to the user on the claims provider
+ selection screen. -->
+ <DisplayName>eID-Me for Sign In</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <Metadata>
+ <Item Key="ProviderName">https://eid-me.bluink.ca</Item>
+ <Item Key="METADATA">https://demoeid.bluink.ca/.well-known/openid-configuration</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid email profile</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="token_endpoint_auth_method">client_secret_post</Item>
+ <Item Key="client_id">eid_me_rp_client_id</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_eIDMeClientSecret" />
+ </CryptographicKeys>
+ <InputClaims />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="IAL" PartnerClaimType="identity_assurance_level_achieved" DefaultValue="unknown IAL" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="locality" PartnerClaimType="locality" DefaultValue="unknown locality" />
+ <OutputClaim ClaimTypeReferenceId="region" PartnerClaimType="region" DefaultValue="unknown region" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+4. Set **eid_me_rp_client_id** with your eID-Me Relying Party Client ID.
+
+5. Save the file.
+
+eID-Me supports additional identity claims, which can be added as follows.
+
+1. Open the `TrustFrameworkExtensions.xml` file.
+
+2. Find the `BuildingBlocks` element. This is where additional identity claims that eID-Me supports can be added. A full list of supported eID-Me identity claims, with descriptions, is available at [http://www.oid-info.com/get/1.3.6.1.4.1.50715](http://www.oid-info.com/get/1.3.6.1.4.1.50715); the corresponding OIDC identifiers are listed at [https://eid-me.bluink.ca/.well-known/openid-configuration](https://eid-me.bluink.ca/.well-known/openid-configuration).
+
+ ```xml
+ <BuildingBlocks>
+ <ClaimsSchema>
+ <ClaimType Id="IAL">
+ <DisplayName>Identity Assurance Level</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="identity_assurance_level_achieved" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The Identity Assurance Level Achieved during proofing of the digital identity.</AdminHelpText>
+ <UserHelpText>The Identity Assurance Level Achieved during proofing of the digital identity.</UserHelpText>
+ <UserInputType>Readonly</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="picture">
+ <DisplayName>Portrait Photo</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="thumbnail_portrait" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The portrait photo of the user.</AdminHelpText>
+ <UserHelpText>Your portrait photo.</UserHelpText>
+ <UserInputType>Readonly</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="middle_name">
+      <DisplayName>Middle Name</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="middle_name" />
+ </DefaultPartnerClaimTypes>
+ <UserHelpText>Your middle name.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="birthdate">
+ <DisplayName>Date of Birth</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="birthdate" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's date of birth.</AdminHelpText>
+ <UserHelpText>Your date of birth.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="gender">
+ <DisplayName>Gender</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="gender" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's gender.</AdminHelpText>
+ <UserHelpText>Your gender.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="street_address">
+      <DisplayName>Street Address</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="street_address" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's full street address, which MAY include house number, street name, post office box.</AdminHelpText>
+ <UserHelpText>Your street address of residence.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="locality">
+ <DisplayName>Locality/City</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="locality" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's current city or locality of residence.</AdminHelpText>
+ <UserHelpText>Your current city or locality of residence.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="region">
+ <DisplayName>Province or Territory</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="region" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's current province or territory of residence.</AdminHelpText>
+ <UserHelpText>Your current province or territory of residence.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="country">
+ <DisplayName>Country</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="country" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's current country of residence.</AdminHelpText>
+ <UserHelpText>Your current country of residence.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="dl_number">
+ <DisplayName>Driver's Licence Number</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="dl_number" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's driver's licence number.</AdminHelpText>
+ <UserHelpText>Your driver's licence number.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+
+ <ClaimType Id="dl_class">
+ <DisplayName>Driver's Licence Class</DisplayName>
+ <DataType>string</DataType>
+ <DefaultPartnerClaimTypes>
+ <Protocol Name="OpenIdConnect" PartnerClaimType="dl_class" />
+ </DefaultPartnerClaimTypes>
+ <AdminHelpText>The user's driver's licence class.</AdminHelpText>
+ <UserHelpText>Your driver's licence class.</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+      </ClaimsSchema>
+    </BuildingBlocks>
+    ```
+
+### Step 4 - Add a user journey
+
+At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey; otherwise, continue to the next step.
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+
+3. Open the `TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one.
+
+4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
+
+5. Rename the ID of the user journey. For example, ID=`CustomSignUpSignIn`
+
+### Step 5 - Add the identity provider to a user journey
+
+Now that you have a user journey, add the new identity provider to the user journey.
+
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name.
+
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+
+    The following XML demonstrates the seven orchestration steps of a user journey with the identity provider:
+
+ ```xml
+ <UserJourney Id="eIDME-SignUpOrSignIn">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection TargetClaimsExchangeId="eIDMeExchange" />
+ </ClaimsProviderSelections>
+ </OrchestrationStep>
+ <!-- Check if the user has selected to sign in using one of the social providers -->
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="eIDMeExchange" TechnicalProfileReferenceId="eID-Me-OIDC" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- For social IDP authentication, attempt to find the user account in the directory. -->
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Value>authenticationSource</Value>
+ <Value>localAccountAuthentication</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadUsingAlternativeSecurityId" TechnicalProfileReferenceId="AAD-UserReadUsingAlternativeSecurityId-NoError" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- Show self-asserted page only if the directory does not have the user account already (i.e. we do not have an objectId). -->
+ <OrchestrationStep Order="4" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- This step reads any user attributes that we may not have received when authenticating using ESTS so they can be sent in the token. -->
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+ <Value>authenticationSource</Value>
+ <Value>socialIdpAuthentication</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <!-- The previous step (SelfAsserted-Social) could have been skipped if there were no attributes to collect
+ from the user. So, in that case, create the user in the directory if one does not already exist
+         (verified using objectId, which would be set from the last step if the account was created in the directory). -->
+ <OrchestrationStep Order="6" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+ </OrchestrationSteps>
+ <ClientDefinition ReferenceId="DefaultWeb" />
+ </UserJourney>
+
+ ```
+
+### Step 6 - Configure the relying party policy
+
+The relying party policy specifies the user journey that Azure AD B2C will execute. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **eID-Me-OIDC-Signup** TechnicalProfile element. In this sample, the application receives the user's postal code, locality, region, IAL, portrait, middle name, and birth date. It also receives the boolean **signupConditionsSatisfied** claim, which indicates whether an account has been created:
+
+ ```xml
+ <RelyingParty>
+     <DefaultUserJourney ReferenceId="eIDME-SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="postalCode" PartnerClaimType="postal_code" DefaultValue="unknown postal_code" />
+ <OutputClaim ClaimTypeReferenceId="locality" PartnerClaimType="locality" DefaultValue="unknown locality" />
+ <OutputClaim ClaimTypeReferenceId="region" PartnerClaimType="region" DefaultValue="unknown region" />
+ <OutputClaim ClaimTypeReferenceId="IAL" PartnerClaimType="identity_assurance_level_achieved" DefaultValue="unknown IAL" />
+ <OutputClaim ClaimTypeReferenceId="picture" PartnerClaimType="thumbnail_portrait" DefaultValue="unknown portrait" />
+ <OutputClaim ClaimTypeReferenceId="middle_name" PartnerClaimType="middle_name" DefaultValue="unknown middle name" />
+ <OutputClaim ClaimTypeReferenceId="birthdate" PartnerClaimType="birthdate" DefaultValue="unknown DOB" />
+ <OutputClaim ClaimTypeReferenceId="newUser" PartnerClaimType="signupConditionsSatisfied" DefaultValue="false" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+
+ ```
+
+### Step 7 - Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+5. Under Policies, select **Identity Experience Framework**.
+6. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUp.xml`.
+
+### Step 8 - Test your custom policy
+
+1. Select your relying party policy, for example `B2C_1A_signup`.
+
+2. For **Application**, select a web application that you [previously registered](./tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
+
+3. Select the **Run now** button.
+
+4. The sign-up policy should invoke eID-Me immediately. For sign-in, select eID-Me as the identity provider.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
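jwt.ms decodes the token for display without validating it. A minimal sketch of that decoding step (for debugging only; production code must verify the token's signature against the issuer's published keys):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decodes the (unverified) payload of a JWT, as jwt.ms does for display.

    Debugging aid only -- this does NOT validate the signature.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

For example, calling `decode_jwt_payload` on the token returned to `https://jwt.ms` shows the output claims configured in the relying party policy.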
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
+- [Sample code to integrate Azure AD B2C with eID-Me](https://github.com/bluink-stephen/eID-Me_Azure_AD_B2C)
+
+- [eID-Me and Azure AD B2C integration guide](https://bluink.ca/eid-me/azure-b2c-integration-guide)
+
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for identity verification and proofing.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
-|![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
-|![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
-|![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
+| ![Screenshot of an eID-Me logo.](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
+| ![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
+| ![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
+| ![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
| ![Screenshot of a LexisNexis logo.](./medi) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on the user's device. | | ![Screenshot of an Onfido logo.](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
The table below lists the [user resource type](/graph/api/resources/user) attrib
|immutableId |String|An identifier that is typically used for users migrated from on-premises Active Directory.|No|No|Persisted, Output| |legalAgeGroupClassification|String|Legal age group classification. Read-only and calculated based on ageGroup and consentProvidedForMinor properties. Allowed values: null, minorWithOutParentalConsent, minorWithParentalConsent, minorNoParentalConsentRequired, notAdult, and adult.|Yes|No|Persisted, Output| |legalCountry<sup>1</sup> |String|Country/Region for legal purposes.|No|No|Persisted, Output|
-|mail |String|Email address for the user. Example: "bob@contoso.com". NOTE: Accent characters are not allowed.|Yes|No|Persisted, Output|
|mailNickName |String|The mail alias for the user. Max length 64.|No|No|Persisted, Output| |mobile (mobilePhone) |String|The primary cellular telephone number for the user. Max length 64.|Yes|No|Persisted, Output| |netId |String|Net ID.|No|No|Persisted, Output|
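
The attribute constraints in this table (no accented characters in `mail`, a 64-character cap on the mail alias) can be enforced before calling Microsoft Graph to create a B2C local account. A minimal sketch, assuming a hypothetical tenant name `contoso.onmicrosoft.com` and omitting the required password profile for brevity (note Graph spells the alias property `mailNickname`):

```python
import json

GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"  # Microsoft Graph user resource

def build_b2c_user(display_name: str, email: str, mobile: str) -> dict:
    """Build a request body for creating an Azure AD B2C local account.

    Enforces two constraints from the attribute table: the sign-in email must
    not contain accent (non-ASCII) characters, and the mail alias is capped
    at 64 characters. Password profile is omitted here for brevity.
    """
    if any(ord(ch) > 127 for ch in email):
        raise ValueError("Accent characters are not allowed in mail")
    nickname = email.split("@")[0][:64]  # mail alias, max length 64
    return {
        "displayName": display_name,
        "mailNickname": nickname,
        "mobilePhone": mobile,
        "identities": [{
            "signInType": "emailAddress",
            "issuer": "contoso.onmicrosoft.com",  # assumption: your B2C tenant
            "issuerAssignedId": email,
        }],
    }

body = build_b2c_user("Bob", "bob@contoso.com", "+1 425 555 0100")
print(json.dumps(body, indent=2))
```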
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Using an authentication broker such as WAM has numerous benefits.
- Enhanced security (your app does not have to manage the powerful refresh token) - Better support for Windows Hello, Conditional Access and FIDO keys - Integration with Windows' "Email and Accounts" view-- Better Single Sing-On (users don't have to reenter passwords)
+- Better Single Sign-On (users don't have to reenter passwords)
- Most bug fixes and enhancements will be shipped with Windows ## WAM limitations
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indica
| Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baseline sign-in behavior for workload identities in your tenant between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. | | Unusual addition of credentials to an OAuth app | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-addition-of-credentials-to-an-oauth-app). This detection identifies the suspicious addition of privileged credentials to an OAuth app. This can indicate that an attacker has compromised the app, and is using it for malicious activity. | | Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using the riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
+| Leaked Credentials (public preview) | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks the credentials into a public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
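
When triaging detections like these programmatically, you typically page through the Graph riskyServicePrincipals results and keep only identities still at risk. An offline sketch of that filtering step, using a fabricated sample payload (the `riskLevel`/`riskState` property names follow the beta riskyServicePrincipal resource; verify them against your Graph version):

```python
import json

# Fabricated sample of a riskyServicePrincipals response page.
sample_response = json.loads("""
{
  "value": [
    {"displayName": "billing-automation", "riskLevel": "high", "riskState": "atRisk"},
    {"displayName": "report-generator", "riskLevel": "low", "riskState": "atRisk"},
    {"displayName": "legacy-sync", "riskLevel": "none", "riskState": "dismissed"}
  ]
}
""")

def at_risk(response: dict, min_levels=("medium", "high")) -> list:
    """Return workload identities still at risk, at or above the given levels."""
    return [sp["displayName"]
            for sp in response["value"]
            if sp["riskState"] == "atRisk" and sp["riskLevel"] in min_levels]

print(at_risk(sample_response))  # ['billing-automation']
```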
## Identify risky workload identities
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
Select options according to what you're looking for:
1. Under **Application Status**, choose **Any**, **Disabled**, or **Enabled**. The **Any** option includes both disabled and enabled applications. 1. Under **Application Visibility**, choose **Any**, or **Hidden**. The **Hidden** option shows applications that are in the tenant, but aren't visible to users. 1. After choosing the options you want, select **Apply**.
-1. Select **Add filters** to add more options for filtering the search results. The other that exist are:
+1. Select **Add filters** to add more options for filtering the search results. The other available options are:
- **Application ID** - **Created on** - **Assignment required**
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-configure.md
Privileged Identity Management provides time-based and approval-based role activ
- Get **notifications** when privileged roles are activated - Conduct **access reviews** to ensure users still need roles - Download **audit history** for internal or external audit-- Prevents removal of the **last active Global Administrator** role assignment
+- Prevents removal of the **last active Global Administrator** and **Privileged Role Administrator** role assignments
## What can I do with it?
active-directory 15Five Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/15five-provisioning-tutorial.md
Add 15Five from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to 15Five, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
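
Conceptually, an attribute-based scoping filter evaluates a set of rules against each user's attributes before provisioning. A minimal sketch of that evaluation, assuming a hypothetical (attribute, operator, value) rule shape with AND semantics (the real Azure AD provisioning service defines its own rule schema and operators):

```python
def in_scope(user: dict, rules: list) -> bool:
    """Return True if the user satisfies every scoping rule (AND semantics).

    Each rule is a hypothetical (attribute, operator, value) triple used for
    illustration; the actual service supports a richer operator set.
    """
    for attr, op, value in rules:
        actual = user.get(attr, "")
        if op == "EQUALS" and actual != value:
            return False
        if op == "NOT_EQUALS" and actual == value:
            return False
    return True

rules = [("department", "EQUALS", "Sales"), ("userType", "NOT_EQUALS", "Guest")]
print(in_scope({"department": "Sales", "userType": "Member"}, rules))  # True
print(in_scope({"department": "HR", "userType": "Member"}, rules))     # False
```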
+ ## Step 5. Configure automatic user provisioning to 15Five This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in 15Five based on user and/or group assignments in Azure AD.
active-directory 8X8 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/8x8-provisioning-tutorial.md
The Azure AD provisioning service allows you to scope who will be provisioned ba
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to 8x8, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to 8x8
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
Add Adobe Identity Management from the Azure AD application gallery to start man
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Adobe Identity Management, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Adobe Identity Management
active-directory Alertmedia Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alertmedia-provisioning-tutorial.md
Add AlertMedia from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to AlertMedia, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to AlertMedia
active-directory Alexishr Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alexishr-provisioning-tutorial.md
Add AlexisHR from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to AlexisHR, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to AlexisHR
active-directory Appaegis Isolation Access Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md
Add Appaegis Isolation Access Cloud from the Azure AD application gallery to sta
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Appaegis Isolation Access Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Appaegis Isolation Access Cloud
active-directory Apple Business Manager Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-business-manager-provision-tutorial.md
Add Apple Business Manager from the Azure AD application gallery to start managi
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Apple Business Manager, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Apple Business Manager
active-directory Apple School Manager Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-school-manager-provision-tutorial.md
Add Apple School Manager from the Azure AD application gallery to start managing
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Apple School Manager, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Apple School Manager
active-directory Asana Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-provisioning-tutorial.md
Add Asana from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Asana, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Asana
active-directory Askspoke Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/askspoke-provisioning-tutorial.md
Add askSpoke from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -- When assigning users and groups to askSpoke, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-- Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to askSpoke
active-directory Atea Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
Add Atea from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Atea, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Atea
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
Add Atlassian Cloud from the Azure AD application gallery to start managing prov
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Atlassian Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configuring automatic user provisioning to Atlassian Cloud
active-directory Auditboard Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/auditboard-provisioning-tutorial.md
Add AuditBoard from the Azure AD application gallery to start managing provisioning to AuditBoard.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to AuditBoard, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-
-* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to AuditBoard
active-directory Autodesk Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/autodesk-sso-provisioning-tutorial.md
Add Autodesk SSO from the Azure AD application gallery to start managing provisioning to Autodesk SSO.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Autodesk SSO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Autodesk SSO
active-directory Aws Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md
Add AWS Single Sign-On from the Azure AD application gallery to start managing provisioning to AWS Single Sign-On.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to AWS Single Sign-On, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to AWS Single Sign-On
active-directory Benq Iam Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benq-iam-provisioning-tutorial.md
Add BenQ IAM from the Azure AD application gallery to start managing provisioning to BenQ IAM.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to BenQ IAM, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to BenQ IAM
active-directory Bentley Automatic User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bentley-automatic-user-provisioning-tutorial.md
Add Bentley - Automatic User Provisioning from the Azure AD application gallery to start managing provisioning to Bentley - Automatic User Provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Bentley - Automatic User Provisioning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Bentley - Automatic User Provisioning
active-directory Bic Cloud Design Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bic-cloud-design-provisioning-tutorial.md
Add BIC Cloud Design from the Azure AD application gallery to start managing provisioning to BIC Cloud Design.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to BIC Cloud Design, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to BIC Cloud Design
active-directory Bizagi Studio For Digital Process Automation Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bizagi-studio-for-digital-process-automation-provisioning-tutorial.md
With the Azure AD provisioning service, you can scope who is provisioned based on assignment to the application, or based on attributes of the user or group.
Note the following points about scoping:
-* When you're assigning users and groups to Bizagi Studio for Digital Process Automation, you must select a role other than **Default Access**. Users with the default access role are excluded from provisioning, and are marked in the provisioning logs as will be marked as not effectively entitled. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-
-* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
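The attribute-based scoping filter these notes link to is configured in the portal, but its matching logic can be sketched. A minimal illustration, assuming a single filter group whose clauses are ANDed together; the `in_scope` helper, the operator subset, and the attribute names are hypothetical, not the provisioning service's API.

```python
def in_scope(user: dict, clauses: list[dict]) -> bool:
    """Return True if the user satisfies every scoping-filter clause.

    Sketch of one scoping-filter group: all clauses in a group must match
    (logical AND) for the user to be provisioned.
    """
    ops = {
        "EQUALS": lambda actual, target: actual == target,
        "NOT_EQUALS": lambda actual, target: actual != target,
        "IS_NOT_NULL": lambda actual, target: actual is not None,
    }
    return all(
        ops[c["operator"]](user.get(c["attribute"]), c.get("value"))
        for c in clauses
    )

# Example: only provision users in the Engineering department.
clauses = [{"attribute": "department", "operator": "EQUALS", "value": "Engineering"}]
print(in_scope({"department": "Engineering"}, clauses))  # True
print(in_scope({"department": "Sales"}, clauses))        # False
```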
## Configure automatic user provisioning
active-directory Bldng App Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bldng-app-provisioning-tutorial.md
Add BLDNG APP from the Azure AD application gallery to start managing provisioning to BLDNG APP.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to BLDNG APP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to BLDNG APP
active-directory Blogin Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blogin-provisioning-tutorial.md
Add BlogIn from the Azure AD application gallery to start managing provisioning to BlogIn.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to BlogIn, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to BlogIn
active-directory Bluejeans Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bluejeans-provisioning-tutorial.md
Add BlueJeans from the Azure AD application gallery to start managing provisioning to BlueJeans.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to BlueJeans, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Boxcryptor Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boxcryptor-provisioning-tutorial.md
Add Boxcryptor from the Azure AD application gallery to start managing provisioning to Boxcryptor.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Boxcryptor, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Boxcryptor
active-directory Bpanda Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bpanda-provisioning-tutorial.md
Add Bpanda from the Azure AD application gallery to start managing provisioning to Bpanda.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Bpanda, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Bpanda
active-directory Britive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/britive-provisioning-tutorial.md
Add Britive from the Azure AD application gallery to start managing provisioning to Britive.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Britive, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Britive
active-directory Browserstack Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/browserstack-single-sign-on-provisioning-tutorial.md
Add BrowserStack Single Sign-on from the Azure AD application gallery to start managing provisioning to BrowserStack Single Sign-on.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to BrowserStack Single Sign-on, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
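The "start small" bullets above repeatedly point to attribute-based scoping filters as the control when scope is set to all users. As a rough sketch of what one filter clause expresses, the dictionary below loosely follows the `filterGroup`/`filterClause` shape of the Microsoft Graph synchronization schema; the attribute `department` and value `Sales` are invented examples, not taken from any tutorial here:

```python
# Hypothetical attribute-based scoping filter: provision only users
# whose department attribute equals "Sales".
scoping_filter = {
    "name": "Provision Sales department only",
    "clauses": [
        {
            "operatorName": "EQUALS",
            "sourceOperandName": "department",
            "targetOperand": {"values": ["Sales"]},
        }
    ],
}

# Clauses within one filter are ANDed; separate filters are ORed.
clause = scoping_filter["clauses"][0]
print(f'{clause["sourceOperandName"]} {clause["operatorName"]} '
      f'{clause["targetOperand"]["values"]}')
```

In practice these filters are usually built in the portal's attribute-mapping blade rather than written by hand, but the clause structure is the same.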
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
Add BullseyeTDP from the Azure AD application gallery to start managing provisioning to BullseyeTDP.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to BullseyeTDP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to BullseyeTDP
active-directory Cato Networks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cato-networks-provisioning-tutorial.md
Add Cato Networks from the Azure AD application gallery to start managing provisioning to Cato Networks.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cato Networks, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Cato Networks
active-directory Chaos Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chaos-provisioning-tutorial.md
Add Chaos from the Azure AD application gallery to start managing provisioning to Chaos.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Chaos, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Chaos
active-directory Chatwork Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chatwork-provisioning-tutorial.md
Add Chatwork from the Azure AD application gallery to start managing provisioning to Chatwork.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Chatwork, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Chatwork
active-directory Checkproof Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/checkproof-provisioning-tutorial.md
Add CheckProof from the Azure AD application gallery to start managing provisioning to CheckProof.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to CheckProof, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to CheckProof
active-directory Cinode Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cinode-provisioning-tutorial.md
Add Cinode from the Azure AD application gallery to start managing provisioning to Cinode.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cinode, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Cinode
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
Add Cisco Umbrella User Management from the Azure AD application gallery to start managing provisioning to Cisco Umbrella User Management.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cisco Umbrella User Management, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 6. Configure automatic user provisioning to Cisco Umbrella User Management
active-directory Clebex Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clebex-provisioning-tutorial.md
Add Clebex from the Azure AD application gallery to start managing provisioning to Clebex.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Clebex, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Clebex
active-directory Cloud Academy Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-academy-sso-provisioning-tutorial.md
Add Cloud Academy - SSO from the Azure AD application gallery to start managing provisioning to Cloud Academy - SSO.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cloud Academy - SSO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Cloud Academy - SSO
active-directory Coda Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coda-provisioning-tutorial.md
Add Coda from the Azure AD application gallery to start managing provisioning to Coda.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Coda, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Code42 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md
Add Code42 from the Azure AD application gallery to start managing provisioning to Code42.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Code42, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+ ## Step 5. Configure automatic user provisioning to Code42
active-directory Cofense Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cofense-provision-tutorial.md
Add Cofense Recipient Sync from the Azure AD application gallery to start managing provisioning to Cofense Recipient Sync.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Cofense Recipient Sync, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Cofense Recipient Sync
active-directory Cybsafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cybsafe-provisioning-tutorial.md
Add CybSafe from the Azure AD application gallery to start managing provisioning to CybSafe.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to CybSafe, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to CybSafe
active-directory Directprint Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directprint-io-provisioning-tutorial.md
Add directprint.io from the Azure AD application gallery to start managing provisioning to directprint.io.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to directprint.io, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
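The bullets above mention updating the application manifest to add roles beyond **Default Access**. As a rough sketch of what such an entry looks like (the role name, description, and `value` below are placeholders, not values from this tutorial), each role added to the manifest's `appRoles` collection is an object of this shape:

```python
import json
import uuid

def build_app_role(display_name: str, description: str, value: str) -> dict:
    """Build one appRole entry for an Azure AD application manifest.

    Field names follow the appRoles schema in the application manifest;
    the concrete role values passed in are illustrative only.
    """
    return {
        "allowedMemberTypes": ["User"],  # assignable to users and groups
        "description": description,
        "displayName": display_name,
        "id": str(uuid.uuid4()),         # each role needs its own unique GUID
        "isEnabled": True,
        "value": value,                  # emitted in the token/provisioning data
    }

role = build_app_role("Standard User", "Provisioned standard user", "User")
print(json.dumps(role, indent=2))
```

You would paste an entry like this into the `appRoles` array of the manifest (or send it via a Graph API update); the linked article walks through the supported editing paths.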
## Step 5. Configure automatic user provisioning to directprint.io
active-directory Documo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/documo-provisioning-tutorial.md
Add Documo from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Documo, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Documo
active-directory Eletive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eletive-provisioning-tutorial.md
Add Eletive from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Eletive, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control the scope by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Eletive
active-directory Envoy Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envoy-provisioning-tutorial.md
Add Envoy from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Envoy, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Envoy
active-directory Evercate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evercate-provisioning-tutorial.md
Add Evercate from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Evercate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Evercate
active-directory Exium Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exium-provisioning-tutorial.md
Add Exium from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Exium, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add extra roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Exium
active-directory Facebook Work Accounts Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md
Add Facebook Work Accounts from the Azure AD application gallery to start managi
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Facebook Work Accounts, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 4. Configure automatic user provisioning to Facebook Work Accounts
active-directory Fortes Change Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortes-change-cloud-provisioning-tutorial.md
Add Fortes Change Cloud from the Azure AD application gallery to start managing
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Fortes Change Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Fortes Change Cloud
active-directory Fortisase Sia Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortisase-sia-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with FortiSASE SIA | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and FortiSASE SIA.
+ Title: 'Tutorial: Azure AD SSO integration with FortiSASE'
+description: Learn how to configure single sign-on between Azure Active Directory and FortiSASE.
Previously updated: 02/19/2021. Last updated: 03/25/2022.
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with FortiSASE SIA
+# Tutorial: Azure AD SSO integration with FortiSASE
-In this tutorial, you'll learn how to integrate FortiSASE SIA with Azure Active Directory (Azure AD). When you integrate FortiSASE SIA with Azure AD, you can:
+In this tutorial, you'll learn how to integrate FortiSASE with Azure Active Directory (Azure AD). When you integrate FortiSASE with Azure AD, you can:
-* Control in Azure AD who has access to FortiSASE SIA.
-* Enable your users to be automatically signed-in to FortiSASE SIA with their Azure AD accounts.
+* Control in Azure AD who has access to FortiSASE.
+* Enable your users to be automatically signed-in to FortiSASE with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate FortiSASE SIA with Azure Active
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* FortiSASE SIA single sign-on (SSO) enabled subscription.
+* FortiSASE single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* FortiSASE SIA supports **SP** initiated SSO
+* FortiSASE supports **SP** initiated SSO.
-* FortiSASE SIA supports **Just In Time** user provisioning
+* FortiSASE supports **Just In Time** user provisioning.
+## Add FortiSASE from the gallery
-## Adding FortiSASE SIA from the gallery
-
-To configure the integration of FortiSASE SIA into Azure AD, you need to add FortiSASE SIA from the gallery to your list of managed SaaS apps.
+To configure the integration of FortiSASE into Azure AD, you need to add FortiSASE from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **FortiSASE SIA** in the search box.
-1. Select **FortiSASE SIA** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
+1. In the **Add from the gallery** section, type **FortiSASE** in the search box.
+1. Select **FortiSASE** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for FortiSASE SIA
+## Configure and test Azure AD SSO for FortiSASE
-Configure and test Azure AD SSO with FortiSASE SIA using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FortiSASE SIA.
+Configure and test Azure AD SSO with FortiSASE using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FortiSASE.
-To configure and test Azure AD SSO with FortiSASE SIA, perform the following steps:
+To configure and test Azure AD SSO with FortiSASE, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure FortiSASE SIA SSO](#configure-fortisase-sia-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create FortiSASE SIA test user](#create-fortisase-sia-test-user)** - to have a counterpart of B.Simon in FortiSASE SIA that is linked to the Azure AD representation of user.
+1. **[Configure FortiSASE SSO](#configure-fortisase-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create FortiSASE test user](#create-fortisase-test-user)** - to have a counterpart of B.Simon in FortiSASE that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **FortiSASE SIA** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **FortiSASE** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.

![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/metadata`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/login`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [FortiSASE SIA Client support team](mailto:fgc@fortinet.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [FortiSASE Client support team](mailto:fgc@fortinet.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
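The two URL patterns above derive from a single tenant hostname. As a minimal sketch (the helper name and the `contoso` hostname are illustrative; use the actual hostname supplied by the FortiSASE support team), the values to enter can be built like this:

```python
def fortisase_saml_urls(tenant_hostname: str) -> dict:
    """Build the FortiSASE SAML URLs from the tutorial's patterns.

    `tenant_hostname` replaces the <TENANTHOSTNAME> placeholder shown
    in the Basic SAML Configuration section.
    """
    base = f"https://{tenant_hostname}.edge.prod.fortisase.com"
    return {
        "identifier": f"{base}/remote/saml/metadata",  # Identifier (Entity ID)
        "sign_on_url": f"{base}/remote/login",         # Sign on URL
    }

urls = fortisase_saml_urls("contoso")
print(urls["identifier"])
print(urls["sign_on_url"])
```

This is only a convenience for deriving the values; the actual Identifier, Reply URL, and Sign on URL must still come from the FortiSASE support team as the note states.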
-1. FortiSASE SIA application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The FortiSASE application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, FortiSASE SIA application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the FortiSASE application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them per your requirements.
| Name | Source Attribute |
| ---- | ---- |
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up FortiSASE SIA** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up FortiSASE** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)

### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FortiSASE SIA.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FortiSASE.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **FortiSASE SIA**.
+1. In the applications list, select **FortiSASE**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure FortiSASE SIA SSO
+## Configure FortiSASE SSO
-To configure single sign-on on **FortiSASE SIA** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [FortiSASE SIA support team](mailto:fgc@fortinet.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **FortiSASE** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [FortiSASE support team](mailto:fgc@fortinet.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create FortiSASE SIA test user
+### Create FortiSASE test user
-In this section, a user called Britta Simon is created in FortiSASE SIA. FortiSASE SIA supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in FortiSASE SIA, a new one is created after authentication.
+In this section, a user called Britta Simon is created in FortiSASE. FortiSASE supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in FortiSASE, a new one is created after authentication.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to FortiSASE SIA Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to the FortiSASE Sign-on URL, where you can initiate the login flow.
-* Go to FortiSASE SIA Sign-on URL directly and initiate the login flow from there.
+* Go to FortiSASE Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the FortiSASE SIA tile in the My Apps, this will redirect to FortiSASE SIA Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the FortiSASE tile in My Apps, you'll be redirected to the FortiSASE Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure FortiSASE SIA you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure FortiSASE, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Frankli Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frankli-io-provisioning-tutorial.md
Add frankli from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to frankli, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to frankli
active-directory Freshservice Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshservice-provisioning-tutorial.md
Add Freshservice Provisioning from the Azure AD application gallery to start man
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Freshservice Provisioning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Fuze Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuze-provisioning-tutorial.md
Add Fuze from the Azure AD application gallery to start managing provisioning to Fuze.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Fuze, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-
-* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configuring automatic user provisioning to Fuze

This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Fuze based on user and/or group assignments in Azure AD.
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
Add G Suite from the Azure AD application gallery to start managing provisioning to G Suite.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to G Suite, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to G Suite
active-directory Github Ae Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-ae-provisioning-tutorial.md
Add GitHub AE from the Azure AD application gallery to start managing provisioning to GitHub AE.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and/or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and/or groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user and/or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GitHub AE, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-
-* Start small. Test with a small set of users and/or groups before rolling out to everyone. When scope for provisioning is set to assigned users and/or groups, you can control this by assigning one or two users and/or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to GitHub AE
active-directory Github Enterprise Managed User Oidc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md
Add GitHub Enterprise Managed User (OIDC) from the Azure AD application gallery to start managing provisioning to GitHub Enterprise Managed User (OIDC).
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GitHub Enterprise Managed User (OIDC), you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs.
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to GitHub Enterprise Managed User (OIDC)
active-directory Github Enterprise Managed User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md
Add GitHub Enterprise Managed User from the Azure AD application gallery to start managing provisioning to GitHub Enterprise Managed User.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GitHub Enterprise Managed User, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs.
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to GitHub Enterprise Managed User
active-directory Global Relay Identity Sync Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/global-relay-identity-sync-provisioning-tutorial.md
Add Global Relay Identity Sync from the Azure AD application gallery to start managing provisioning to Global Relay Identity Sync.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Global Relay Identity Sync
active-directory Golinks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/golinks-provisioning-tutorial.md
Add GoLinks from the Azure AD application gallery to start managing provisioning to GoLinks.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GoLinks, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to GoLinks
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
Add Gong from the Azure AD application gallery to start managing provisioning to Gong.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Gong, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Gong
active-directory Grouptalk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grouptalk-provisioning-tutorial.md
Learn more about adding an application from the gallery [here](../manage-apps/ad
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to GroupTalk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to GroupTalk
active-directory Gtmhub Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gtmhub-provisioning-tutorial.md
Add Gtmhub from the Azure AD application gallery to start managing provisioning to Gtmhub.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Gtmhub, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Gtmhub
active-directory H5mag Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/h5mag-provisioning-tutorial.md
Add H5mag from the Azure AD application gallery to start managing provisioning to H5mag.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to H5mag, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to H5mag
active-directory Helloid Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/helloid-provisioning-tutorial.md
Add HelloID from the Azure AD application gallery to start managing provisioning to HelloID.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to HelloID, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to HelloID
active-directory Holmes Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/holmes-cloud-provisioning-tutorial.md
Add Holmes Cloud from the Azure AD application gallery to start managing provisioning to Holmes Cloud.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Holmes Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Holmes Cloud
active-directory Hootsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hootsuite-provisioning-tutorial.md
Add Hootsuite from the Azure AD application gallery to start managing provisioning to Hootsuite.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Hootsuite, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Hootsuite
active-directory Hoxhunt Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hoxhunt-provisioning-tutorial.md
Add Hoxhunt from the Azure AD application gallery to start managing provisioning to Hoxhunt.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Hoxhunt, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
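Assigning a user or group to the app (the assignment-based scoping option above) can also be scripted against Microsoft Graph by POSTing an `appRoleAssignment` to `/servicePrincipals/{id}/appRoleAssignedTo`. A minimal sketch that only builds the request body — the GUIDs are placeholders, and actually sending the request requires an authenticated Graph client:

```python
import json

def build_app_role_assignment(principal_id: str,
                              service_principal_id: str,
                              app_role_id: str) -> str:
    """Return the JSON body for POST /servicePrincipals/{id}/appRoleAssignedTo.

    principal_id:         object ID of the user or group being assigned
    service_principal_id: object ID of the app's service principal (resourceId)
    app_role_id:          ID of an appRole other than Default Access
    """
    payload = {
        "principalId": principal_id,
        "resourceId": service_principal_id,
        "appRoleId": app_role_id,
    }
    return json.dumps(payload)

# Placeholder GUIDs for illustration only.
body = build_app_role_assignment(
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
    "33333333-3333-3333-3333-333333333333",
)
```

The same three fields are what the portal sets on your behalf when you assign a user or group and pick a role.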
## Step 5. Configure automatic user provisioning to Hoxhunt
active-directory Ideo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideo-provisioning-tutorial.md
Add IDEO from the Azure AD application gallery to start managing provisioning to IDEO.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to IDEO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
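An attribute-based scoping filter is configured in the portal as groups of clauses (target attribute, operator, value), and only users matching the filter are provisioned. A conceptual sketch of how a single equality clause narrows the in-scope set — this illustrates the semantics only, not the provisioning service's implementation, and the attribute names and users are made up:

```python
def in_scope(user: dict, attribute: str, operator: str, value: str) -> bool:
    """Evaluate one scoping-filter clause against a user object."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT_EQUALS":
        return actual != value
    raise ValueError(f"unsupported operator: {operator}")

users = [
    {"displayName": "Avery", "department": "Sales"},
    {"displayName": "Blake", "department": "Engineering"},
]

# Clause: department EQUALS "Sales" — only matching users are provisioned.
provisioned = [u for u in users if in_scope(u, "department", "EQUALS", "Sales")]
```

In the real service, multiple clauses within a group are ANDed together and multiple groups are ORed, so a user is in scope when any clause group matches.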
## Step 5. Configure automatic user provisioning to IDEO
active-directory Insight4grc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insight4grc-provisioning-tutorial.md
Add Insight4GRC from the Azure AD application gallery to start managing provisioning to Insight4GRC.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Insight4GRC, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Insight4GRC
active-directory Insite Lms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md
Add Insite LMS from the Azure AD application gallery to start managing provisioning to Insite LMS.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Insite LMS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Insite LMS
active-directory Introdus Pre And Onboarding Platform Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md
Add introDus Pre and Onboarding Platform from the Azure AD application gallery to start managing provisioning to introDus Pre and Onboarding Platform.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to introDus Pre and Onboarding Platform, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+ ## Step 5. Configure automatic user provisioning to introDus Pre and Onboarding Platform
active-directory Invision Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invision-provisioning-tutorial.md
Add InVision from the Azure AD application gallery to start managing provisioning to InVision.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to InVision, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to InVision
active-directory Invitedesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invitedesk-provisioning-tutorial.md
Add InviteDesk from the Azure AD application gallery to start managing provisioning to InviteDesk.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to InviteDesk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to InviteDesk
active-directory Iprova Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iprova-provisioning-tutorial.md
Add iProva from the Azure AD application gallery to start managing provisioning to iProva.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to iProva, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to iProva
active-directory Iris Intranet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iris-intranet-provisioning-tutorial.md
Add Iris Intranet from the Azure AD application gallery to start managing provisioning to Iris Intranet.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Iris Intranet, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Iris Intranet
active-directory Jostle Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jostle-provisioning-tutorial.md
Add Jostle from the Azure AD application gallery to start managing provisioning to Jostle.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Jostle, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Jostle
active-directory Joyn Fsm Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joyn-fsm-provisioning-tutorial.md
Add Joyn FSM from the Azure AD application gallery to start managing provisioning to Joyn FSM.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Joyn FSM, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Joyn FSM
active-directory Juno Journey Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/juno-journey-provisioning-tutorial.md
Add Juno Journey from the Azure AD application gallery to start managing provisioning to Juno Journey.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Juno Journey, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Juno Journey
active-directory Kisi Physical Security Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kisi-physical-security-provisioning-tutorial.md
Add Kisi Physical Security from the Azure AD application gallery to start managing provisioning to Kisi Physical Security.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Kisi Physical Security, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Kisi Physical Security
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
Add Klaxoon from the Azure AD application gallery to start managing provisioning to Klaxoon.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Klaxoon, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Klaxoon
active-directory Klaxoon Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-provisioning-tutorial.md
Add Klaxoon SAML from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Klaxoon SAML, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Klaxoon
active-directory Kpifire Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpifire-provisioning-tutorial.md
Add kpifire from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to kpifire, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
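The assignment step linked throughout these entries is done in the portal, but the same operation can be scripted. The sketch below follows the documented Microsoft Graph `appRoleAssignments` shape; the token and all IDs are placeholders, and the helper names are made up for this example:

```python
# Sketch: assign a user to a specific app role (not Default Access) via
# Microsoft Graph. The endpoint and payload shape follow the documented
# appRoleAssignments API; the token and IDs below are placeholders.
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_app_role_assignment(principal_id: str, resource_id: str,
                              app_role_id: str) -> dict:
    """Request body for POST /users/{id}/appRoleAssignments."""
    return {
        "principalId": principal_id,  # object ID of the user being assigned
        "resourceId": resource_id,    # object ID of the app's service principal
        "appRoleId": app_role_id,     # a role ID other than Default Access
    }

def assign_user_to_app(token: str, user_id: str, body: dict):
    """POST the assignment; raises urllib.error.HTTPError on failure."""
    req = urllib.request.Request(
        f"{GRAPH}/users/{user_id}/appRoleAssignments",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Passing the ID of a non-default role in `appRoleId` is what keeps the assigned user inside provisioning scope, per the note about Default Access above.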
## Step 5. Configure automatic user provisioning to kpifire
active-directory Kpn Grip Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpn-grip-provisioning-tutorial.md
Add KPN Grip from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to KPN Grip, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
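The "Start small" guidance above leans on attribute-based scoping filters when scope is set to all users and groups. Those filters are configured in the portal rather than in code, but their effect can be sketched in plain Python; the attribute names, match values, and clause styles here are hypothetical and chosen only to illustrate the idea:

```python
import re

# Illustrative only: emulate an attribute-based scoping filter that keeps a
# user in provisioning scope when every clause in the filter group matches.
# The "department" and "mail" clauses below are made-up examples.
def in_provisioning_scope(user: dict) -> bool:
    dept_ok = user.get("department") == "Sales"  # EQUALS-style clause
    mail_ok = bool(re.match(r".+@contoso\.com$", user.get("mail", "")))  # REGEX-style clause
    return dept_ok and mail_ok  # all clauses in a group must match

users = [
    {"mail": "ada@contoso.com", "department": "Sales"},
    {"mail": "bob@fabrikam.com", "department": "Sales"},
]
scoped = [u["mail"] for u in users if in_provisioning_scope(u)]
```

Only the first user survives both clauses, which mirrors how a narrow filter lets you pilot provisioning with a handful of users before widening scope.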
## Step 5. Configure automatic user provisioning to KPN Grip
active-directory Lanschool Air Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lanschool-air-provisioning-tutorial.md
Add LanSchool Air from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LanSchool Air, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to LanSchool Air
active-directory Limblecmms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/limblecmms-provisioning-tutorial.md
Add LimbleCMMS from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LimbleCMMS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to LimbleCMMS
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
Add LinkedIn Learning from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LinkedIn Learning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to LinkedIn Learning
active-directory Logicgate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logicgate-provisioning-tutorial.md
Add LogicGate from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LogicGate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to LogicGate
active-directory Logmein Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logmein-provisioning-tutorial.md
Add LogMeIn from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to LogMeIn, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to LogMeIn
active-directory Lucidchart Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucidchart-provisioning-tutorial.md
Add Lucidchart from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Lucidchart, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Lucidchart
active-directory Maptician Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maptician-provisioning-tutorial.md
Add Maptician from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Maptician, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Maptician
active-directory Mediusflow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mediusflow-provisioning-tutorial.md
Add MediusFlow from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to MediusFlow, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to MediusFlow
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
Add Meta Networks Connector from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Meta Networks Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
active-directory Mixpanel Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mixpanel-provisioning-tutorial.md
Add Mixpanel from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Mixpanel, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Mixpanel
active-directory Mondaycom Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mondaycom-provisioning-tutorial.md
Add monday.com from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to monday.com, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
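The attribute-based scoping option above can be pictured with a small sketch. This is illustrative only: the `EQUALS` rule handling and the sample user records below are simplifications for the example, not Azure AD's actual scoping-filter syntax.

```python
# Illustrative only: mimics how an attribute-based scoping filter
# narrows provisioning to users matching a rule. The rule handling
# below is a simplification, not Azure AD's real filter format.
def in_scope(user, attribute, operator, value):
    """Return True if the user matches a single scoping clause."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT_EQUALS":
        return actual != value
    raise ValueError(f"unsupported operator: {operator}")

users = [
    {"userPrincipalName": "b.simon@contoso.com", "department": "Sales"},
    {"userPrincipalName": "j.doe@contoso.com", "department": "HR"},
]

# Only users whose department equals "Sales" are in scope.
scoped = [u for u in users if in_scope(u, "department", "EQUALS", "Sales")]
```

Testing with one or two assigned users first (as recommended) keeps a misconfigured rule from provisioning or deprovisioning the whole directory.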
## Step 5. Configure automatic user provisioning to monday.com
active-directory Mural Identity Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-provisioning-tutorial.md
Add MURAL Identity from the Azure AD application gallery to start managing provi
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to MURAL Identity, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to MURAL Identity
active-directory Mx3 Diagnostics Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md
Add MX3 Diagnostics Connector from the Azure AD application gallery to start man
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to MX3 Diagnostics Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to MX3 Diagnostics Connector
active-directory Myday Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myday-provision-tutorial.md
Add myday from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to myday, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to myday
active-directory Netpresenter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md
Add Netpresenter Next from the Azure AD application gallery to start managing pr
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Netpresenter Next, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add another roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Netpresenter Next
active-directory New Relic By Organization Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-by-organization-provisioning-tutorial.md
Add New Relic by Organization from the Azure AD application gallery to start man
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to New Relic by Organization, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to New Relic by Organization
active-directory Notion Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/notion-tutorial.md
On the same settings page, under **Email domains** click **Contact support** to
After your email domains are approved and added, enable SAML SSO using the **Enable SAML** toggle.
-After successful testing, you may enforce SAML SSO using the **Enforce SAML** toggle. Please note that your Notion workspace administrastrators retain the ability to log in with email, but all other members will have to use SAML SSO to log in to Notion.
+After successful testing, you may enforce SAML SSO using the **Enforce SAML** toggle. Please note that your Notion workspace administrators retain the ability to log in with email, but all other members will have to use SAML SSO to log in to Notion.
### Create Notion test user
active-directory Olfeo Saas Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md
Add Olfeo SAAS from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Olfeo SAAS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Olfeo SAAS
active-directory Open Text Directory Services Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/open-text-directory-services-provisioning-tutorial.md
Add OpenText Directory Services from the Azure AD application gallery to start m
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to OpenText Directory Services, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to OpenText Directory Services
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
Add Oracle Cloud Infrastructure Console from the Azure AD application gallery to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
active-directory Palo Alto Networks Scim Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-scim-connector-provisioning-tutorial.md
Add Palo Alto Networks SCIM Connector from the Azure AD application gallery to s
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Palo Alto Networks SCIM Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Palo Alto Networks SCIM Connector
active-directory Papercut Cloud Print Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/papercut-cloud-print-management-provisioning-tutorial.md
Add PaperCut Cloud Print Management from the Azure AD application gallery to sta
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to PaperCut Cloud Print Management, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to PaperCut Cloud Print Management
active-directory Parkhere Corporate Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parkhere-corporate-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with ParkHere Corporate'
+description: Learn how to configure single sign-on between Azure Active Directory and ParkHere Corporate.
+ Last updated : 03/25/2022
+# Tutorial: Azure AD SSO integration with ParkHere Corporate
+
+In this tutorial, you'll learn how to integrate ParkHere Corporate with Azure Active Directory (Azure AD). When you integrate ParkHere Corporate with Azure AD, you can:
+
+* Control in Azure AD who has access to ParkHere Corporate.
+* Enable your users to be automatically signed-in to ParkHere Corporate with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ParkHere Corporate single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ParkHere Corporate supports **IDP** initiated SSO.
+
+## Add ParkHere Corporate from the gallery
+
+To configure the integration of ParkHere Corporate into Azure AD, you need to add ParkHere Corporate from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ParkHere Corporate** in the search box.
+1. Select **ParkHere Corporate** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ParkHere Corporate
+
+Configure and test Azure AD SSO with ParkHere Corporate using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ParkHere Corporate.
+
+To configure and test Azure AD SSO with ParkHere Corporate, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ParkHere Corporate SSO](#configure-parkhere-corporate-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ParkHere Corporate test user](#create-parkhere-corporate-test-user)** - to have a counterpart of B.Simon in ParkHere Corporate that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ParkHere Corporate** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. Save the configuration by clicking the **Save** button.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
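For context on what the partner does with that URL: the App Federation Metadata Url serves standard SAML 2.0 metadata XML, from which a service provider extracts values such as the token-signing certificate. A minimal sketch, using an abbreviated made-up metadata sample rather than real tenant output:

```python
# Illustrative sketch: the App Federation Metadata Url returns SAML 2.0
# metadata XML; a service provider pulls the signing certificate out of
# it. The sample below is abbreviated placeholder data, not real output.
import xml.etree.ElementTree as ET

SAMPLE_METADATA = """<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    entityID="https://sts.windows.net/placeholder-tenant-id/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <ds:KeyInfo><ds:X509Data>
        <ds:X509Certificate>MIICplaceholderbase64cert</ds:X509Certificate>
      </ds:X509Data></ds:KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

# Map the SAML metadata and XML-signature namespaces for ElementTree.
NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.fromstring(SAMPLE_METADATA)
cert_el = root.find(".//ds:X509Certificate", NS)
signing_cert = cert_el.text.strip()  # base64 DER of the signing cert
```

In practice the partner consumes the full metadata document; this only shows the shape of the data behind the URL.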
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ParkHere Corporate.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ParkHere Corporate**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ParkHere Corporate SSO
+
+To configure single sign-on on the **ParkHere Corporate** side, you need to send the **App Federation Metadata Url** to the [ParkHere Corporate support team](mailto:support@park-here.eu). They use it to set up the SAML SSO connection properly on both sides.
+
+### Create ParkHere Corporate test user
+
+In this section, you create a user called Britta Simon in ParkHere Corporate. Work with the [ParkHere Corporate support team](mailto:support@park-here.eu) to add the users to the ParkHere Corporate platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the ParkHere Corporate application for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the ParkHere Corporate tile in My Apps, you should be automatically signed in to the ParkHere Corporate application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure ParkHere Corporate, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Parsable Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parsable-provisioning-tutorial.md
Add Parsable from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Parsable, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
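App roles such as the non-default roles mentioned above are defined in the `appRoles` collection of the application manifest. As a rough sketch only (the GUID, names, and description below are placeholders, not values from any real app), a manifest entry for an extra role might look like:

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "User" ],
      "description": "Users in this role are provisioned to the application.",
      "displayName": "Provisioned User",
      "id": "00000000-0000-0000-0000-000000000001",
      "isEnabled": true,
      "value": "ProvisionedUser"
    }
  ]
}
```

Each role's `id` must be a GUID that is unique within the manifest, and `value` is the string sent in the role claim; replace the placeholder values with ones appropriate to your application.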
## Step 5. Configure automatic user provisioning to Parsable
active-directory Peripass Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peripass-provisioning-tutorial.md
Add Peripass from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Peripass, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
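Assigning a user or group to the application, as described above, can also be done programmatically through Microsoft Graph by creating an app role assignment on the application's service principal. This is a sketch with placeholder IDs (the bracketed values are assumptions you would substitute):

```http
POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
Content-Type: application/json

{
  "principalId": "{user-or-group-object-id}",
  "resourceId": "{servicePrincipal-id}",
  "appRoleId": "{appRole-id}"
}
```

Here `appRoleId` is the `id` of a role defined in the application manifest; using the ID of a role other than the default access role keeps the assigned user in scope for provisioning.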
## Step 5. Configure automatic user provisioning to Peripass
active-directory Plandisc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/plandisc-provisioning-tutorial.md
Add Plandisc from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Plandisc, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Plandisc
active-directory Preciate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/preciate-provisioning-tutorial.md
Add Preciate from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Preciate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Preciate
active-directory Printer Logic Saas Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/printer-logic-saas-provisioning-tutorial.md
Add PrinterLogic SaaS from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to PrinterLogic SaaS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to PrinterLogic SaaS
active-directory Prodpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md
Add ProdPad from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to ProdPad, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to ProdPad
active-directory Proware Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proware-provisioning-tutorial.md
Add Proware from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Proware, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Proware
active-directory Purecloud By Genesys Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/purecloud-by-genesys-provisioning-tutorial.md
Add Genesys Cloud for Azure from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Genesys Cloud for Azure, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Genesys Cloud for Azure
active-directory Real Links Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/real-links-provisioning-tutorial.md
Add Real Links from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Real Links, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Real Links
active-directory Ringcentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ringcentral-provisioning-tutorial.md
Add RingCentral from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to RingCentral, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to RingCentral
active-directory Rollbar Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rollbar-provisioning-tutorial.md
Add Rollbar from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Rollbar, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Rollbar
active-directory Rouse Sales Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rouse-sales-provisioning-tutorial.md
Add Rouse Sales from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Rouse Sales, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Rouse Sales
active-directory Samanage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samanage-provisioning-tutorial.md
Add SolarWinds Service Desk from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SolarWinds Service Desk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to SolarWinds Service Desk
active-directory Sap Analytics Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md
Add SAP Analytics Cloud from the Azure AD application gallery to start managing provisioning.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SAP Analytics Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to SAP Analytics Cloud
active-directory Schoolstream Asa Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/schoolstream-asa-provisioning-tutorial.md
If you have previously set up SchoolStream ASA for SSO, you can use the same application.
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SchoolStream ASA, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
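The attribute-based scoping filter mentioned above can be pictured as a set of clauses that must all match for a user to be provisioned. The sketch below is illustrative only (the real evaluation happens inside the Azure AD provisioning service); the operator names mirror the ones shown in the portal UI, and the attribute values are made up.

```python
# Illustrative sketch, NOT the provisioning service's implementation:
# how an attribute-based scoping filter conceptually decides whether
# a user is in scope for provisioning.
def in_scope(user: dict, clauses: list) -> bool:
    """A user is in scope only if every clause matches (logical AND)."""
    ops = {
        "EQUALS": lambda actual, target: actual == target,
        "NOT EQUALS": lambda actual, target: actual != target,
        "IS NOT NULL": lambda actual, target: actual is not None,
    }
    return all(ops[op](user.get(attr), value) for attr, op, value in clauses)

# Hypothetical filter: provision only Sales users that have a mail address.
sales_filter = [("department", "EQUALS", "Sales"), ("mail", "IS NOT NULL", None)]
print(in_scope({"department": "Sales", "mail": "amy@contoso.com"}, sales_filter))  # True
print(in_scope({"department": "HR", "mail": "bob@contoso.com"}, sales_filter))     # False
```

This is why "start small" works well with scoping: a narrow first clause (one department, one group) limits the blast radius of an initial provisioning cycle before the filter is widened.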
## Step 5. Configure automatic user provisioning to SchoolStream ASA
active-directory Secure Deliver Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/secure-deliver-provisioning-tutorial.md
Add SECURE DELIVER from the Azure AD application gallery to start managing provi
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SECURE DELIVER, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to SECURE DELIVER
active-directory Secure Login Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/secure-login-provisioning-tutorial.md
Add SecureLogin from the Azure AD application gallery to start managing provisio
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SecureLogin, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to SecureLogin
active-directory Segment Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/segment-provisioning-tutorial.md
Add Segment from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Segment, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Segment
active-directory Sentry Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sentry-provisioning-tutorial.md
Add Sentry from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Sentry, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Sentry
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Keep these tips in mind:
* When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
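Several of the entries above note that, if an application only exposes the Default Access role, you can update the application manifest to add more roles. A minimal sketch of doing that through the Microsoft Graph API follows; it is not taken from the tutorials, and the object id, token, role name, and `value` are placeholders you would replace with your own.

```python
# Illustrative sketch: adding an app role (other than "Default Access") to an
# Azure AD application object via Microsoft Graph. Assumes you already hold a
# bearer token with Application.ReadWrite.All; acquire it with MSAL or
# azure-identity in practice. All identifiers below are placeholders.
import json
import uuid
import urllib.request

APP_OBJECT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder object id
TOKEN = "<access-token>"  # placeholder

new_role = {
    "allowedMemberTypes": ["User"],
    "description": "Users provisioned to the app",
    "displayName": "Provisioned User",
    "id": str(uuid.uuid4()),   # each appRole needs its own unique GUID
    "isEnabled": True,
    "value": "provisioned_user",
}

# Note: PATCHing appRoles replaces the whole collection, so in a real script
# you would first GET the application and append to its existing appRoles.
req = urllib.request.Request(
    f"https://graph.microsoft.com/v1.0/applications/{APP_OBJECT_ID}",
    data=json.dumps({"appRoles": [new_role]}).encode(),
    method="PATCH",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually apply the change
```

After the role exists, assign users or groups to it instead of Default Access so that they are picked up by the provisioning service.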
## Step 5: Configure automatic user provisioning to ServiceNow
active-directory Shopify Plus Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shopify-plus-provisioning-tutorial.md
Add Shopify Plus from the Azure AD application gallery to start managing provisi
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Shopify Plus, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Shopify Plus
active-directory Sigma Computing Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sigma-computing-provisioning-tutorial.md
Add Sigma Computing from the Azure AD application gallery to start managing prov
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Sigma Computing, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Sigma Computing
active-directory Slack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-provisioning-tutorial.md
Add Slack from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Slack, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+ ## Step 4. Configure automatic user provisioning to Slack
This section guides you through connecting your Azure AD to Slack's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Slack based on user and group assignment in Azure AD.
active-directory Smallstep Ssh Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smallstep-ssh-provisioning-tutorial.md
Add Smallstep SSH from the Azure AD application gallery to start managing provis
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Smallstep SSH, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Smallstep SSH
active-directory Smartsheet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
Add Smartsheet from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Smartsheet, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* To ensure parity in user role assignments between Smartsheet and Azure AD, it's recommended to use the same role assignments populated in the full Smartsheet user list. To retrieve this user list from Smartsheet, navigate to **Account Admin > User Management > More Actions > Download User List (csv)**.
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If a user has multiple roles assigned in Smartsheet, you **MUST** ensure that these role assignments are replicated in Azure AD to avoid a scenario where users could permanently lose access to Smartsheet objects. Each unique role in Smartsheet **MUST** be assigned to a different group in Azure AD. The user **MUST** then be added to each of the groups corresponding to the desired roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Smartsheet
active-directory Snowflake Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
Keep these tips in mind:
* When you're assigning users and groups to Snowflake, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5: Configure automatic user provisioning to Snowflake
active-directory Sosafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sosafe-provisioning-tutorial.md
Add SoSafe from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SoSafe, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to SoSafe
active-directory Splashtop Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splashtop-provisioning-tutorial.md
Add Splashtop from the Azure AD application gallery to start managing provisioni
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Splashtop, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Splashtop
active-directory Swit Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/swit-provisioning-tutorial.md
Add Swit from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Swit, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
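The attribute-based scoping filter mentioned in these change summaries narrows which users the provisioning service acts on. As a minimal sketch of the idea, the evaluator below mimics the scoping-filter operators (EQUALS, IS TRUE) with invented user data; it is illustrative only and not the service's actual implementation.

```python
# Hypothetical sketch: how an attribute-based scoping filter decides which
# users are in scope for provisioning. Operator names mirror the Azure AD
# scoping-filter operators, but the evaluator and data are invented.

def matches_clause(user: dict, attribute: str, operator: str, value) -> bool:
    """Evaluate one scoping-filter clause against a user object."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT EQUALS":
        return actual != value
    if operator == "IS TRUE":
        return actual is True
    raise ValueError(f"unsupported operator: {operator}")

def in_scope(user: dict, clauses: list) -> bool:
    """All clauses in a scoping filter must match (logical AND)."""
    return all(matches_clause(user, *clause) for clause in clauses)

# Example filter: active users in the Sales department.
scoping_filter = [
    ("department", "EQUALS", "Sales"),
    ("accountEnabled", "IS TRUE", None),
]

users = [
    {"userPrincipalName": "ada@contoso.com", "department": "Sales", "accountEnabled": True},
    {"userPrincipalName": "bob@contoso.com", "department": "Legal", "accountEnabled": True},
]

provisioned = [u["userPrincipalName"] for u in users if in_scope(u, scoping_filter)]
print(provisioned)  # ['ada@contoso.com']
```

The same AND-of-clauses shape is what you configure in the portal when scope is set to all users and groups.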
## Step 5. Configure automatic user provisioning to Swit
active-directory Talentech Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talentech-provisioning-tutorial.md
Add Talentech from the Azure AD application gallery to start managing provisioni
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Talentech, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add extra roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Talentech
active-directory Tap App Security Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tap-app-security-provisioning-tutorial.md
Add TAP App Security from the Azure AD application gallery to start managing pro
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to TAP App Security, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TAP App Security
active-directory Taskize Connect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/taskize-connect-provisioning-tutorial.md
Add Taskize Connect from the Azure AD application gallery to start managing prov
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Taskize Connect, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
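Several of these diffs point at updating the application manifest to add roles beyond **Default Access**. As a hedged sketch, the helper below builds the kind of `appRoles` entry that goes into the manifest; the field names follow the manifest's appRole shape, but the role name and `value` here are invented for illustration.

```python
import uuid

# Illustrative sketch of an appRoles entry for the application manifest,
# giving the app a role other than Default Access. The displayName and
# value are hypothetical examples, not values any tutorial prescribes.

def make_app_role(display_name: str, value: str, description: str) -> dict:
    return {
        "allowedMemberTypes": ["User"],   # assignable to users and groups
        "description": description,
        "displayName": display_name,
        "id": str(uuid.uuid4()),          # each role needs its own unique GUID
        "isEnabled": True,
        "value": value,                    # surfaced in the roles claim for the app
    }

role = make_app_role(
    "Provisioned User",
    "provisioned.user",
    "Users in this role are in scope for provisioning to the app.",
)
print(role["displayName"])  # Provisioned User
```

Users assigned this role (rather than Default Access) are then treated as effectively entitled by the provisioning service.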
## Step 5. Configure automatic user provisioning to Taskize Connect
active-directory Teamgo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamgo-provisioning-tutorial.md
Add Teamgo from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Teamgo, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Teamgo
active-directory Teamviewer Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamviewer-provisioning-tutorial.md
Add TeamViewer from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to TeamViewer, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TeamViewer
active-directory Terratrue Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terratrue-provisioning-tutorial.md
Add TerraTrue from the Azure AD application gallery to start managing provisioni
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to TerraTrue, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TerraTrue
active-directory Thrive Lxp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md
Add Thrive LXP from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Thrive LXP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Thrive LXP
active-directory Timeclock 365 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-provisioning-tutorial.md
Add TimeClock 365 from the Azure AD application gallery to start managing provis
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to TimeClock 365, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TimeClock 365
active-directory Timeclock 365 Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-saml-provisioning-tutorial.md
Add TimeClock 365 SAML from the Azure AD application gallery to start managing p
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to TimeClock 365 SAML, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TimeClock 365 SAML
active-directory Travelperk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/travelperk-provisioning-tutorial.md
Add TravelPerk from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-- When assigning users to TravelPerk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-- Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to TravelPerk
active-directory Tribeloo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tribeloo-provisioning-tutorial.md
Add Tribeloo from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Tribeloo, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Tribeloo
active-directory Twingate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/twingate-provisioning-tutorial.md
Add Twingate from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Twingate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Twingate
active-directory Unifi Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/unifi-provisioning-tutorial.md
Add UNIFI from the Azure AD application gallery to start managing provisioning t
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to UNIFI, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to UNIFI
active-directory Visibly Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visibly-provisioning-tutorial.md
Add Visibly from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Visibly, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
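The manifest update mentioned above can also be scripted. The sketch below is a hypothetical example, not part of this tutorial: the role `id` GUID, role name, and file name are placeholders you would replace with your own values.

```shell
# Hypothetical app role definition for the application manifest.
# The "id" GUID and "value" are placeholders -- substitute your own.
cat > approles.json <<'EOF'
[
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Users provisioned to the application",
    "displayName": "Provisioned User",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "value": "ProvisionedUser"
  }
]
EOF

# Apply the roles to the app registration (requires the app's ID; left
# commented here because the ID is environment specific).
# az ad app update --id <application-id> --app-roles @approles.json
```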
## Step 5. Configure automatic user provisioning to Visibly
active-directory Vonage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vonage-provisioning-tutorial.md
Add Vonage from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Vonage, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Vonage
active-directory Webroot Security Awareness Training Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webroot-security-awareness-training-provisioning-tutorial.md
Add Webroot Security Awareness Training from the Azure AD application gallery to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Webroot Security Awareness Training, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Webroot Security Awareness Training
active-directory Wedo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wedo-provisioning-tutorial.md
Add WEDO from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to WEDO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to WEDO
active-directory Workplace By Facebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md
Add Workplace by Facebook from the Azure AD application gallery to start managin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users to Workplace by Facebook, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-
-* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Workplace by Facebook This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Workplace by Facebook App based on user assignments in Azure AD.
active-directory Zapier Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zapier-provisioning-tutorial.md
Add Zapier from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Zapier, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Zapier
active-directory Zero Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zero-provisioning-tutorial.md
Add Zero from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Zero, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Zero
active-directory Zip Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zip-provisioning-tutorial.md
Add Zip from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Zip, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+ ## Step 5. Configure automatic user provisioning to Zip
active-directory Zoom Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-provisioning-tutorial.md
Add Zoom from the Azure AD application gallery to start managing provisioning to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Zoom, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5. Configure automatic user provisioning to Zoom
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
By default, your cluster has AKS_managed pod disruption budgets (such as `coredn
To delete the existing node pool, use the Azure portal or the [az aks delete][az-aks-delete] command:
-```bash
-kubectl delete nodepool /
- --resource-group myResourceGroup /
- --cluster-name myAKSCluster /
+> [!IMPORTANT]
+> When you delete a node pool, AKS doesn't perform cordon and drain. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting.
+
+```azurecli-interactive
+az aks nodepool delete \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
    --name nodepool1
```
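Following the note above, a cordon and drain pass over the pool's nodes might look like this sketch. It assumes `kubectl` access to the cluster and that the nodes carry the `agentpool=nodepool1` label (an AKS labeling convention; verify the label on your cluster):

```shell
# Cordon and drain every node in the pool before deleting it.
# Assumes kubectl is pointed at the cluster; the agentpool label value
# matches the pool name used in the delete command above.
for node in $(kubectl get nodes -l agentpool=nodepool1 -o name); do
  kubectl cordon "$node"   # stop new pods from being scheduled here
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Once the pods have been rescheduled, delete the pool:
# az aks nodepool delete --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1
```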
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
AKS offers a separate feature to automatically scale node pools with a feature c
If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks node pool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps: > [!CAUTION]
-> There are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications are unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster.
+> When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting. For more details, see [cordon and drain node pools][cordon-and-drain].
```azurecli-interactive
az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait
```
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[node-image-upgrade]: node-image-upgrade.md [fips]: /azure/compliance/offerings/offering-fips-140-2 [use-tags]: use-tags.md
-[use-labels]: use-labels.md
+[use-labels]: use-labels.md
+[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
API developers face challenges when working with Resource Manager templates:
* API developers often work with the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification) and might not be familiar with Resource Manager schemas. Authoring templates manually might be error-prone.
- A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/APIM_ARMTemplate/README.md#creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
+ A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/APIM_ARMTemplate/README.md#creator) in the resource kit can help generate templates by extracting configurations from their API Management instances.
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/app-gateway-with-service-endpoints.md
There are two parts to this configuration besides creating the App Service and t
With Azure portal, you follow four steps to provision and configure the setup. If you have existing resources, you can skip the first steps. 1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.NET Core Quickstart](../quickstart-dotnetcore.md) 2. Create an Application Gateway using the [portal Quickstart](../../application-gateway/quick-create-portal.md), but skip the Add backend targets section.
-3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app-portal.md), but skip the Restrict access section.
+3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app.md), but skip the Restrict access section.
4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule). You can now access the App Service through Application Gateway, but if you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
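Step 4 can also be done from the Azure CLI. The sketch below uses hypothetical resource names (`myWebApp`, `myVNet`, `myAGSubnet`), which you would replace with your own; it adds an Allow rule for the Application Gateway's subnet:

```shell
# Allow only traffic arriving through the Application Gateway's subnet
# via a service endpoint based access restriction rule.
# All resource names here are illustrative placeholders.
az webapp config access-restriction add \
  --resource-group myResourceGroup \
  --name myWebApp \
  --rule-name AppGatewaySubnet \
  --action Allow \
  --vnet-name myVNet \
  --subnet myAGSubnet \
  --priority 200
```

After the rule is in place, requests sent directly to the app's default hostname should return the 403 response described above.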
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
By default, only private traffic (also known as [RFC1918](https://datatracker.ie
Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled. > [!NOTE]
-> * Only traffic configured in applicaiton or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
+> * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
> * When **Route All** is enabled, outbound traffic from your app is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere. > * Regional virtual network integration can't use port 25.
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Now that we've created an SSL profile with a listener-specific SSL policy, we ne
![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png) ### Limitations
-There is a limitation right now on Application Gateway where different listeners using the same port cannot have the same custom SSL policy configured. To ensure that the custom protocols configured as part of the custom SSL policy are applied to a listener, make sure that different listeners are running on different ports or configure the same custom SSL policy with the same custom protocols across all listeners running on the same port.
+There is a limitation right now on Application Gateway where different listeners using the same port cannot have SSL policies (predefined or custom) with different TLS protocol versions. Choosing the same TLS version for different listeners will work for configuring cipher suite preference for each listener. However, to use different TLS protocol versions for separate listeners, you will need to use distinct ports for each.
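As a sketch of the profile-per-listener association this section describes, the steps might be scripted with the Azure CLI as below. The gateway, listener, and profile names plus the cipher list are assumptions, not values from this article:

```shell
# Create an SSL profile with a custom, listener-specific policy...
az network application-gateway ssl-profile add \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name mySslProfile \
  --policy-type Custom \
  --min-protocol-version TLSv1_2 \
  --cipher-suites TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

# ...then attach it to a listener.
az network application-gateway http-listener update \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name myListener \
  --ssl-profile mySslProfile
```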
## Next steps
application-gateway Application Gateway Web App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-web-app-overview.md
- Title: Multi-tenant back ends-
-description: This page provides an overview of the Application Gateway support for multi-tenant back ends.
---- Previously updated : 06/09/2020----
-# Application Gateway support for multi-tenant back ends such as App service
-
-In multi-tenant architectural designs in web servers, multiple websites are running on the same web server instance. Hostnames are used to differentiate between the different applications which are hosted. By default, application gateway does not change the incoming HTTP host header from the client and sends the header unaltered to the back end. This works well for backend pool members such as NICs, virtual machine scale sets, public IP addresses, internal IP addresses and FQDN as these do not rely on a specific host header or SNI extension to resolve to the correct endpoint. However, there are many services such as Azure App service web apps and Azure API management that are multi-tenant in nature and rely on a specific host header or SNI extension to resolve to the correct endpoint. Usually, the DNS name of the application, which in turn is the DNS name associated with the application gateway, is different from the domain name of the backend service. Therefore, the host header in the original request received by the application gateway is not the same as the host name of the backend service. Because of this, unless the host header in the request from the application gateway to the backend is changed to the host name of the backend service, the multi-tenant backends are not able to resolve the request to the correct endpoint.
-
-Application gateway provides a capability which allows users to override the HTTP host header in the request based on the host name of the back-end. This capability enables support for multi-tenant back ends such as Azure App service web apps and API management. This capability is available for both the v1 and v2 standard and WAF SKUs.
-
-![host override](./media/application-gateway-web-app-overview/host-override.png)
-
-> [!NOTE]
-> This is not applicable to Azure App service environment (ASE) since ASE is a dedicated resource unlike Azure App service which is a multi-tenant resource.
-
-## Override host header in the request
-
-The ability to specify a host override is defined in the [HTTP settings](./configuration-overview.md#http-settings) and can be applied to any back-end pool during rule creation. The following two ways of overriding the host header and SNI extension for multi-tenant back ends are supported:
-- The ability to set the host name to a fixed value explicitly entered in the HTTP settings. This capability ensures that the host header is overridden to this value for all traffic to the back-end pool where the particular HTTP settings are applied. When using end to end TLS, this overridden host name is used in the SNI extension. This capability enables scenarios where a back-end pool farm expects a host header that is different from the incoming customer host header.
-
-- The ability to derive the host name from the IP or FQDN of the back-end pool members. HTTP settings also provide an option to dynamically pick the host name from a back-end pool member's FQDN if configured with the option to derive host name from an individual back-end pool member. When using end to end TLS, this host name is derived from the FQDN and is used in the SNI extension. This capability enables scenarios where a back-end pool can have two or more multi-tenant PaaS services like Azure web apps and the request's host header to each member contains the host name derived from its FQDN. For implementing this scenario, we use a switch in the HTTP Settings called [Pick hostname from backend address](./configuration-http-settings.md#pick-host-name-from-back-end-address) which will dynamically override the host header in the original request to the one mentioned in the backend pool. For example, if your backend pool FQDN contains "contoso11.azurewebsites.net" and "contoso22.azurewebsites.net", the original request's host header, which is contoso.com, will be overridden to contoso11.azurewebsites.net or contoso22.azurewebsites.net when the request is sent to the appropriate backend server.
-
- ![web app scenario](./media/application-gateway-web-app-overview/scenario.png)
-
-With this capability, customers specify the options in the HTTP settings and custom probes to the appropriate configuration. This setting is then tied to a listener and a back-end pool by using a rule.
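For illustration, the backend-address-derived override can be toggled with the Azure CLI; the gateway and HTTP settings names below are placeholders:

```shell
# Derive the Host header (and SNI) from each backend pool member's FQDN.
# Resource names are illustrative placeholders.
az network application-gateway http-settings update \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name appGatewayBackendHttpSettings \
  --host-name-from-backend-pool true
```

To pin a fixed host instead of deriving it, the same command accepts `--host-name` with an explicit value.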
-
-## Special considerations
-
-### TLS termination and end to end TLS with multi-tenant services
-
-Both TLS termination and end to end TLS encryption are supported with multi-tenant services. For TLS termination at the application gateway, a TLS certificate still has to be added to the application gateway listener. However, in the case of end to end TLS, trusted Azure services such as Azure App service web apps don't require the backends to be allow-listed in the application gateway, so there is no need to add any authentication certificates.
-
-![end to end TLS](./media/application-gateway-web-app-overview/end-to-end-ssl.png)
-
-Notice that in the above image, there is no requirement to add authentication certificates when App service is selected as backend.
-
-### Health probe
-
-Overriding the host header in the **HTTP settings** only affects the request and its routing. It doesn't affect the health probe behavior. For end to end functionality to work, both the probe and the HTTP settings must be modified to reflect the correct configuration. In addition to providing the ability to specify a host header in the probe configuration, custom probes also support the ability to derive the host header from the currently configured HTTP settings. This configuration can be specified by using the `PickHostNameFromBackendHttpSettings` parameter in the probe configuration.
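
As an illustrative sketch (resource names are hypothetical), a custom probe that derives its host header from the associated HTTP settings can be created in PowerShell with the `-PickHostNameFromBackendHttpSettings` switch:

```powershell
# Hypothetical names; assumes an existing gateway with matching HTTP settings.
$gw = Get-AzApplicationGateway -Name "myAppGw" -ResourceGroupName "myRG"

# The probe derives its Host header from the HTTP settings it's associated with:
Add-AzApplicationGatewayProbeConfig -Name "appSvcProbe" -ApplicationGateway $gw `
    -Protocol Https -Path "/" -Interval 30 -Timeout 120 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings

# Persist the change on the gateway:
Set-AzApplicationGateway -ApplicationGateway $gw
```

The probe only takes effect once it's referenced from the HTTP settings that are bound to a listener by a rule.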
-
-### Redirection to App Service's URL scenario
-
-There can be scenarios where the hostname in the response from the App service may direct the end-user browser to the *.azurewebsites.net hostname instead of the domain associated with the Application Gateway. This issue may happen when:
-
-- You have redirection configured on your App Service. Redirection can be as simple as adding a trailing slash to the request.
-- You have Azure AD authentication which causes the redirection.
-
-To resolve such cases, see [Troubleshoot redirection to App Service's URL issue](./troubleshoot-app-service-redirection-app-service-url.md).
-
-## Next steps
-
-Learn how to set up an application gateway with a multi-tenant app such as an Azure App Service web app as a back-end pool member by visiting [Configure App Service web apps with Application Gateway](./configure-web-app-portal.md).
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
Previously updated : 09/09/2020- Last updated : 02/17/2022+ # Application Gateway HTTP settings configuration
This feature is useful when you want to keep a user session on the same server a
> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
-The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://tools.ietf.org/id/draft-ietf-httpbis-rfc6265bis-03.html#rfc.section.5.3.7) attribute has to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in a HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
+The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://tools.ietf.org/id/draft-ietf-httpbis-rfc6265bis-03.html#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
-To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky session are maintained even for cross-origin requests.
+To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests.
Note that the default affinity cookie name is *ApplicationGatewayAffinity* and you can change it. In case you're using a custom affinity cookie name, an additional cookie is added with CORS as suffix. For example, *CustomCookieNameCORS*.
This setting lets you configure an optional custom forwarding path to use when t
| /pathrule/home/secondhome/ | /pathrule/home* | /override/ | /override/secondhome/ | | /pathrule/ | /pathrule/ | /override/ | /override/ |
-## Use for app service
-
-This is a UI-only shortcut that selects the two required settings for the Azure App Service back end. It enables **pick host name from back-end address**, and it creates a new custom probe if you don't have one already. (For more information, see the [Pick host name from back-end address](#pick-host-name-from-back-end-address) setting section of this article.) A new probe is created, and the probe header is picked from the back-end member's address.
## Use custom probe
This setting associates a [custom probe](application-gateway-probe-overview.md#c
> [!NOTE] > The custom probe doesn't monitor the health of the back-end pool unless the corresponding HTTP setting is explicitly associated with a listener.
+## Configuring the host name
+
+Application Gateway allows the connection established to the backend to use a *different* hostname than the one the client used to connect to Application Gateway. While this configuration can be useful in some cases, overriding the hostname so that it differs between the client-to-application-gateway and application-gateway-to-backend connections should be done with care.
+
+In production, we recommend keeping the hostname used by the client towards the application gateway the same as the hostname used by the application gateway towards the backend target. This avoids potential issues with absolute URLs, redirect URLs, and host-bound cookies.
+
+Before setting up an Application Gateway configuration that deviates from this recommendation, review the implications of such a configuration as discussed in the Architecture Center article: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation).
+
+There are two aspects of an HTTP setting that influence the [`Host`](https://datatracker.ietf.org/doc/html/rfc2616#section-14.23) HTTP header that is used by Application Gateway to connect to the backend:
+- "Pick host name from backend-address"
+- "Host name override"
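
As a sketch (resource and hostname values are hypothetical), each of the two aspects above corresponds to a parameter on the backend HTTP settings cmdlet in PowerShell:

```powershell
# Hypothetical names; assumes an existing Application Gateway.
$gw = Get-AzApplicationGateway -Name "myAppGw" -ResourceGroupName "myRG"

# Option 1: derive the Host header from the backend pool member's FQDN:
Add-AzApplicationGatewayBackendHttpSettings -Name "pickFromBackend" -ApplicationGateway $gw `
    -Protocol Https -Port 443 -CookieBasedAffinity Disabled -RequestTimeout 30 `
    -PickHostNameFromBackendAddress

# Option 2: override the Host header with an explicit, fixed value:
Add-AzApplicationGatewayBackendHttpSettings -Name "fixedHostName" -ApplicationGateway $gw `
    -Protocol Https -Port 443 -CookieBasedAffinity Disabled -RequestTimeout 30 `
    -HostName "app.contoso.com"

# Persist the changes on the gateway:
Set-AzApplicationGateway -ApplicationGateway $gw
```

If neither parameter is set, Application Gateway passes the incoming client Host header through unchanged.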
+
+## Pick host name from back-end address
+
+This capability dynamically sets the *host* header in the request to the host name of the back-end pool. It uses an IP address or FQDN.
This feature helps when the domain name of the back end is different from the DN
An example case is multi-tenant services as the back end. An app service is a multi-tenant service that uses a shared space with a single IP address. So, an app service can only be accessed through the hostnames that are configured in the custom domain settings.
-By default, the custom domain name is *example.azurewebsites.net*. To access your app service by using an application gateway through a hostname that's not explicitly registered in the app service or through the application gateway's FQDN, you override the hostname in the original request to the app service's hostname. To do this, enable the **pick host name from backend address** setting.
+By default, the custom domain name is *example.azurewebsites.net*. To access your app service by using an application gateway through a hostname that's not explicitly registered in the app service or through the application gateway's FQDN, you can override the hostname in the original request to the app service's hostname. To do this, enable the **pick host name from backend address** setting.
-For a custom domain whose existing custom DNS name is mapped to the app service, you don't have to enable this setting.
+For a custom domain whose existing custom DNS name is mapped to the app service, the recommended configuration is not to enable the **pick host name from backend address**.
> [!NOTE] > This setting is not required for App Service Environment, which is a dedicated deployment.
application-gateway Configure Web App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-web-app-portal.md
- Title: Manage traffic to multi-tenant apps using the portal-
-description: This article provides guidance on how to configure Azure App service web apps as members in backend pool on an existing or new application gateway.
---- Previously updated : 01/02/2021---
-# Configure App Service with Application Gateway
-
-Since App Service is a multi-tenant service instead of a dedicated deployment, it uses the host header in the incoming request to resolve the request to the correct App Service endpoint. Usually, the DNS name of the application, which in turn is the DNS name associated with the application gateway fronting the App Service, is different from the domain name of the backend App Service. Therefore, the host header in the original request received by the application gateway is not the same as the host name of the backend service. Because of this, unless the host header in the request from the application gateway to the backend is changed to the host name of the backend service, the multi-tenant backends are not able to resolve the request to the correct endpoint.
-
-Application Gateway provides a switch called `Pick host name from backend target` which overrides the host header in the request with the host name of the back-end when the request is routed from the Application Gateway to the backend. This capability enables support for multi-tenant back ends such as Azure app service and API management.
-
-In this article, you learn how to:
-
-- Edit a backend pool and add an App Service to it
-- Edit HTTP Settings with 'Pick Hostname' switch enabled
-
-## Prerequisites
-
-- Application gateway: Create an application gateway without a backend pool target. For more information, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md)
-
-- App service: If you don't have an existing App service, see [App service documentation](../app-service/index.yml).
-
-## Add App service as backend pool
-
-1. In the Azure portal, select your application gateway.
-
-2. Under **Backend pools**, select the backend pool.
-
-3. Under **Target type**, select **App Services**.
-
-4. Under **Target** select your App Service.
-
- :::image type="content" source="./media/configure-web-app-portal/backend-pool.png" alt-text="App service backend":::
-
- > [!NOTE]
- > The dropdown only populates app services that are in the same subscription as your Application Gateway. If you want to use an app service in a different subscription than your Application Gateway, then instead of choosing **App Services** in the **Targets** dropdown, choose the **IP address or hostname** option and enter the hostname (example.azurewebsites.net) of the app service.
-5. Select **Save**.
-
-## Edit HTTP settings for App Service
-
-1. Under **HTTP Settings**, select the existing HTTP setting.
-
-2. Under **Override with new host name**, select **Yes**.
-3. Under **Host name override**, select **Pick host name from backend target**.
-4. Select **Save**.
-
- :::image type="content" source="./media/configure-web-app-portal/http-settings.png" alt-text="Pick host name from backend http settings":::
-
-## Additional configuration in case of redirection to app service's relative path
-
-When the app service sends a redirection response to the client to redirect to its relative path (For example, a redirect from `contoso.azurewebsites.net/path1` to `contoso.azurewebsites.net/path2`), it uses the same hostname in the location header of its response as the one in the request it received from the application gateway. So the client will make the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
-
-If, in your use case, there are scenarios where the App Service needs to send a redirection response to the client, perform the [additional steps to rewrite the location header](./troubleshoot-app-service-redirection-app-service-url.md#sample-configuration).
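
As a hedged illustration of what such a rewrite looks like (the linked article contains the authoritative sample; names here are hypothetical), a rewrite rule that replaces the App Service hostname in the `Location` response header can be composed as follows:

```powershell
# Replace "*.azurewebsites.net" in the Location response header with the
# gateway's own domain, preserving the path captured by the condition pattern:
$headerCfg = New-AzApplicationGatewayRewriteRuleHeaderConfiguration `
    -HeaderName "Location" -HeaderValue "https://contoso.com{http_resp_Location_2}"

$actionSet = New-AzApplicationGatewayRewriteRuleActionSet `
    -ResponseHeaderConfiguration $headerCfg

# Only rewrite when the Location header points at azurewebsites.net:
$condition = New-AzApplicationGatewayRewriteRuleCondition `
    -Variable "http_resp_Location" -Pattern "(https?):\/\/.*azurewebsites\.net(.*)$"

$rewriteRule = New-AzApplicationGatewayRewriteRule -Name "LocationHeaderRewrite" `
    -ActionSet $actionSet -Condition $condition
```

The resulting rule still needs to be placed in a rewrite rule set and attached to a routing rule before it takes effect.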
-
-## Restrict access
-
-The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions.
-
-One way you can restrict access to your web apps is to use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
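
As a sketch with hypothetical values, such a restriction can be added with the Az.Websites module:

```powershell
# Allow only the Application Gateway public VIP to reach the web app
# (resource names and the IP address are hypothetical):
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "myRG" -WebAppName "contoso" `
    -Name "AppGwOnly" -Priority 100 -Action Allow -IpAddress "203.0.113.10/32"
```

With an Allow rule in place, App Service implicitly denies all other public traffic to the app.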
-
-## Next steps
-
-To learn more about the App service and other multi-tenant support with application gateway, see [multi-tenant service support with application gateway](./application-gateway-web-app-overview.md).
application-gateway Configure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-web-app.md
+
+ Title: Manage traffic to App Service
+
+description: This article provides guidance on how to configure Application Gateway with Azure App Service
++++ Last updated : 02/17/2022+++
+<!-- markdownlint-disable MD044 -->
+
+# Configure App Service with Application Gateway
+
+Application Gateway allows you to have an App Service app or other multi-tenant service as a back-end pool member. In this article, you learn how to configure an App Service app with Application Gateway. The configuration for Application Gateway differs depending on how App Service will be accessed:
+- The first option makes use of a **custom domain** on both Application Gateway and the App Service in the backend.
+- The second option is to have Application Gateway access App Service using its **default domain**, suffixed as ".azurewebsites.net".
+
+## [Custom domain (recommended)](#tab/customdomain)
+
+This configuration is recommended for production-grade scenarios and follows the practice of not changing the host name in the request flow. You must have a custom domain (and associated certificate) available to avoid having to rely on the default ".azurewebsites.net" domain.
+
+By associating the same domain name to both Application Gateway and App Service in the backend pool, the request flow does not need to override the host name. The backend web application will see the original host as was used by the client.
++
+## [Default domain](#tab/defaultdomain)
+
+This configuration is the easiest and does not require a custom domain. As such, it allows for a quick, convenient setup.
+
+> [!WARNING]
+> This configuration comes with limitations. We recommend reviewing the implications of using different host names between the client and Application Gateway and between Application Gateway and App Service in the backend. For more information, review the Architecture Center article: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation).
+
+When App Service does not have a custom domain associated with it, the host header on the incoming request at the web application needs to be set to the default domain, suffixed with ".azurewebsites.net", or else the platform will not be able to properly route the request.
+
+The host header in the original request received by the Application Gateway will be different from the host name of the backend App Service.
++++
+In this article you'll learn how to:
+- Configure DNS
+- Add App Service as backend pool to the Application Gateway
+- Configure HTTP Settings for the connection to App Service
+- Configure an HTTP Listener
+- Configure a Request Routing Rule
+
+## Prerequisites
+
+### [Custom domain (recommended)](#tab/customdomain)
+
+- Application Gateway: Create an application gateway without a backend pool target. For more information, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md)
+
+- App Service: If you don't have an existing App Service, see [App Service documentation](../app-service/index.yml).
+
+- A custom domain name and associated certificate (signed by a well known authority), stored in Key Vault. For more information on how to store certificates in Key Vault, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md)
+
+### [Default domain](#tab/defaultdomain)
+
+- Application Gateway: Create an application gateway without a backend pool target. For more information, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md)
+
+- App Service: If you don't have an existing App Service, see [App Service documentation](../app-service/index.yml).
+++
+## Configuring DNS
+
+In the context of this scenario, DNS is relevant in two places:
+- The DNS name, which the user or client is using towards Application Gateway and what is shown in a browser
+- The DNS name, which Application Gateway is internally using to access the App Service in the backend
+
+### [Custom domain (recommended)](#tab/customdomain)
+
+Route the user or client to Application Gateway using the custom domain. Set up DNS using a CNAME alias pointed to the DNS for Application Gateway. The Application Gateway DNS address is shown on the overview page of the associated Public IP address. Alternatively, create an A record pointing to the IP address directly. (For Application Gateway V1, the VIP can change if you stop and start the service, which makes this option undesirable.)
+
+App Service should be configured so it accepts traffic from Application Gateway using the custom domain name as the incoming host. For more information on how to map a custom domain to the App Service, see [Tutorial: Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md). To verify the domain, App Service only requires adding a TXT record. No change is required on CNAME or A records. The DNS configuration for the custom domain will remain directed towards Application Gateway.
+
+To accept connections to App Service over HTTPS, configure its TLS binding. For more information, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). Configure App Service to pull the certificate for the custom domain from Azure Key Vault.
+
+### [Default domain](#tab/defaultdomain)
+
+When no custom domain is available, the user or client can access Application Gateway using either the IP address of the gateway or its DNS address. The Application Gateway DNS address can be found on the overview page of the associated Public IP address. Not having a custom domain available implies that no publicly signed certificate will be available for TLS on Application Gateway. Clients are restricted to using HTTP or HTTPS with a self-signed certificate, both of which are undesirable.
+
+To connect to App Service, Application Gateway uses the default domain as provided by App Service (suffixed "azurewebsites.net").
+++
+## Add App service as backend pool
+
+### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, select your Application Gateway.
+
+2. Under **Backend pools**, select the backend pool.
+
+3. Under **Target type**, select **App Services**.
+
+4. Under **Target** select your App Service.
+
+ :::image type="content" source="./media/configure-web-app/backend-pool.png" alt-text="App service backend":::
+
+ > [!NOTE]
+ > The dropdown only populates app services that are in the same subscription as your Application Gateway. If you want to use an app service in a different subscription than your Application Gateway, then instead of choosing **App Services** in the **Targets** dropdown, choose the **IP address or hostname** option and enter the hostname (example.azurewebsites.net) of the app service.
+
+5. Select **Save**.
+
+### [PowerShell](#tab/azure-powershell)
+
+```powershell
+# Fully qualified default domain name of the web app:
+$webAppFQDN = "<nameofwebapp>.azurewebsites.net"
+
+# For Application Gateway: its name, resource group, and the name of the backend pool to create:
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$appGwBackendPoolNameForAppSvc = "<name for backend pool to be added>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Add a new Backend Pool with App Service in there:
+Add-AzApplicationGatewayBackendAddressPool -Name $appGwBackendPoolNameForAppSvc -ApplicationGateway $gw -BackendFqdns $webAppFQDN
+
+# Update Application Gateway with the new added Backend Pool:
+Set-AzApplicationGateway -ApplicationGateway $gw
+```
+++
+## Edit HTTP settings for App Service
+
+### [Azure portal](#tab/azure-portal/customdomain)
+
+An HTTP Setting is required that instructs Application Gateway to access the App Service backend using the **custom domain name**. The HTTP Setting will by default use the [default health probe](./application-gateway-probe-overview.md#default-health-probe) which relies on the hostname as is configured in the Backend Pool (suffixed "azurewebsites.net"). For this reason, it is good to first configure a [custom health probe](./application-gateway-probe-overview.md#custom-health-probe) that is configured with the correct custom domain name as its host name.
+
+We will connect to the backend using HTTPS.
+
+1. Under **HTTP Settings**, select an existing HTTP setting or add a new one.
+2. When creating a new HTTP Setting, give it a name
+3. Select HTTPS as the desired backend protocol using port 443
+4. If the certificate is signed by a well known authority, select "Yes" for "Use well known CA certificate". Alternatively, [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+5. Make sure to set "Override with new host name" to "No"
+6. Select the custom HTTPS health probe in the dropdown for "Custom probe".
+ > [!Note]
+ > It will work with the default probe, but for correctness we recommend using a custom probe with the correct domain name.
++
+### [Azure portal](#tab/azure-portal/defaultdomain)
+
+An HTTP Setting is required that instructs Application Gateway to access the App Service backend using the **default ("azurewebsites.net") domain name**. To do so, the HTTP Setting will explicitly override the host name.
+
+1. Under **HTTP Settings**, select an existing HTTP setting or add a new one.
+2. When creating a new HTTP Setting, give it a name
+3. Select HTTPS as the desired backend protocol using port 443
+4. If the certificate is signed by a well known authority, select "Yes" for "Use well known CA certificate". Alternatively, [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+5. Make sure to set "Override with new host name" to "Yes"
+6. Under "Host name override", select "Pick host name from backend target". This setting will cause the request towards App Service to use the "azurewebsites.net" host name, as is configured in the Backend Pool.
++
+### [PowerShell](#tab/azure-powershell/customdomain)
+
+```powershell
+# Configure Application Gateway to connect to App Service using the incoming hostname
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$customProbeName = "<name for custom health probe>"
+$customDomainName = "<FQDN for custom domain associated with App Service>"
+$httpSettingsName = "<name for http settings to be created>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Add custom health probe using custom domain name:
+Add-AzApplicationGatewayProbeConfig -Name $customProbeName -ApplicationGateway $gw -Protocol Https -HostName $customDomainName -Path "/" -Interval 30 -Timeout 120 -UnhealthyThreshold 3
+$probe = Get-AzApplicationGatewayProbeConfig -Name $customProbeName -ApplicationGateway $gw
+
+# Add HTTP Settings to use towards App Service:
+Add-AzApplicationGatewayBackendHttpSettings -Name $httpSettingsName -ApplicationGateway $gw -Protocol Https -Port 443 -Probe $probe -CookieBasedAffinity Disabled -RequestTimeout 30
+
+# Update Application Gateway with the new added HTTP settings and probe:
+Set-AzApplicationGateway -ApplicationGateway $gw
+```
+
+### [PowerShell](#tab/azure-powershell/defaultdomain)
+
+```powershell
+# Configure Application Gateway to connect to backend using default App Service hostname
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$httpSettingsName = "<name for http settings to be created>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Add HTTP Settings to use towards App Service:
+Add-AzApplicationGatewayBackendHttpSettings -Name $httpSettingsName -ApplicationGateway $gw -Protocol Https -Port 443 -PickHostNameFromBackendAddress -CookieBasedAffinity Disabled -RequestTimeout 30
+
+# Update Application Gateway with the new added HTTP settings and probe:
+Set-AzApplicationGateway -ApplicationGateway $gw
+```
+++
+## Configure an HTTP listener
+
+To accept traffic, we need to configure a listener. For more information, see [Application Gateway listener configuration](configuration-listeners.md).
+
+### [Azure portal](#tab/azure-portal/customdomain)
+
+1. Open the "Listeners" section and choose "Add listener" or click an existing one to edit
+1. For a new listener: give it a name
+1. Under "Frontend IP", select the IP address to listen on
+1. Under "Port", select 443
+1. Under "Protocol", select "HTTPS"
+1. Under "Choose a certificate", select "Choose a certificate from Key Vault". For more information, see [Using Key Vault](key-vault-certs.md) where you find more information on how to assign a managed identity and provide it with rights to your Key Vault.
+ 1. Give the certificate a name
+ 1. Select the Managed Identity
+ 1. Select the Key Vault from where to get the certificate
+ 1. Select the certificate
+1. Under "Listener Type", select "Basic"
+1. Click "Add" to add the listener
++
+### [Azure portal](#tab/azure-portal/defaultdomain)
+
+Assuming there's no custom domain available or associated certificate, we'll configure Application Gateway to listen for HTTP traffic on port 80. Alternatively, see the instructions on how to [Create a self-signed certificate](tutorial-ssl-powershell.md#create-a-self-signed-certificate)
+
+1. Open the "Listeners" section and choose "Add listener" or click an existing one to edit
+1. For a new listener: give it a name
+1. Under "Frontend IP", select the IP address to listen on
+1. Under "Port", select 80
+1. Under "Protocol", select "HTTP"
++
+### [PowerShell](#tab/azure-powershell/customdomain)
+
+```powershell
+# This script assumes that:
+# - a certificate was imported in Azure Key Vault already
+# - a managed identity was assigned to Application Gateway with access to the certificate
+# - there is no HTTP listener defined yet for HTTPS on port 443
+
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$appGwSSLCertificateName = "<name for ssl cert to be created within Application Gateway>"
+$appGwSSLCertificateKeyVaultSecretId = "<key vault secret id for the SSL certificate to use>"
+$httpListenerName = "<name for the listener to add>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Create SSL certificate object for Application Gateway:
+Add-AzApplicationGatewaySslCertificate -Name $appGwSSLCertificateName -ApplicationGateway $gw -KeyVaultSecretId $appGwSSLCertificateKeyVaultSecretId
+$sslCert = Get-AzApplicationGatewaySslCertificate -Name $appGwSSLCertificateName -ApplicationGateway $gw
+
+# Fetch public ip associated with Application Gateway:
+$ipAddressResourceId = $gw.FrontendIPConfigurations.PublicIPAddress.Id
+$ipAddressResource = Get-AzResource -ResourceId $ipAddressResourceId
+$publicIp = Get-AzPublicIpAddress -ResourceGroupName $ipAddressResource.ResourceGroupName -Name $ipAddressResource.Name
+
+$frontendIpConfig = $gw.FrontendIpConfigurations | Where-Object {$_.PublicIpAddress -ne $null}
+
+# Add frontend port 443 to the gateway, then retrieve it for the listener:
+Add-AzApplicationGatewayFrontendPort -Name "port_443" -ApplicationGateway $gw -Port 443
+$port = Get-AzApplicationGatewayFrontendPort -Name "port_443" -ApplicationGateway $gw
+Add-AzApplicationGatewayHttpListener -Name $httpListenerName -ApplicationGateway $gw -Protocol Https -FrontendIPConfiguration $frontendIpConfig -FrontendPort $port -SslCertificate $sslCert
+
+# Update Application Gateway with the new HTTPS listener:
+Set-AzApplicationGateway -ApplicationGateway $gw
+
+```
+
+### [PowerShell](#tab/azure-powershell/defaultdomain)
+
+In many cases a public listener for HTTP on port 80 will already exist. The script below creates one if it doesn't exist yet.
+
+```powershell
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$httpListenerName = "<name for the listener to add if not exists yet>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Check if an HTTP listener on port 80 already exists:
+$port = $gw.FrontendPorts | Where-Object {$_.Port -eq 80}
+$listener = $gw.HttpListeners | Where-Object {$_.Protocol.ToString().ToLower() -eq "http" -and $_.FrontendPort.Id -eq $port.Id}
+
+if ($null -eq $listener){
+    # Create frontend port 80 first if the gateway doesn't have one yet:
+    if ($null -eq $port){
+        Add-AzApplicationGatewayFrontendPort -Name "port_80" -ApplicationGateway $gw -Port 80
+        $port = Get-AzApplicationGatewayFrontendPort -Name "port_80" -ApplicationGateway $gw
+    }
+    $frontendIpConfig = $gw.FrontendIpConfigurations | Where-Object {$_.PublicIpAddress -ne $null}
+    Add-AzApplicationGatewayHttpListener -Name $httpListenerName -ApplicationGateway $gw -Protocol Http -FrontendIPConfiguration $frontendIpConfig -FrontendPort $port
+
+    # Update Application Gateway with the new HTTP listener:
+    Set-AzApplicationGateway -ApplicationGateway $gw
+}
+```
++
+## Configure request routing rule
+
+Provided with the earlier configured Backend Pool and the HTTP Settings, the request routing rule can be set up to take traffic from a listener and route it to the Backend Pool using the HTTP Settings. For this, make sure you have an HTTP or HTTPS listener available that is not already bound to an existing routing rule.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Under "Rules", click to add a new "Request routing rule"
+1. Provide the rule with a name
+1. Select an HTTP or HTTPS listener that is not bound yet to an existing routing rule
+1. Under "Backend targets", choose the Backend Pool in which App Service has been configured
+1. Configure the HTTP settings with which Application Gateway should connect to the App Service backend
+1. Select "Add" to save this configuration
++
+### [PowerShell](#tab/azure-powershell)
+
+```powershell
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+$httpListenerName = "<name for existing http listener (without rule) to route traffic from>"
+$httpSettingsName = "<name for http settings to use>"
+$appGwBackendPoolNameForAppSvc = "<name for backend pool to route to>"
+$reqRoutingRuleName = "<name for request routing rule to be added>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Get the listener, HTTP settings, and backend pool:
+$httpListener = Get-AzApplicationGatewayHttpListener -Name $httpListenerName -ApplicationGateway $gw
+$httpSettings = Get-AzApplicationGatewayBackendHttpSettings -Name $httpSettingsName -ApplicationGateway $gw
+$backendPool = Get-AzApplicationGatewayBackendAddressPool -Name $appGwBackendPoolNameForAppSvc -ApplicationGateway $gw
+
+# Add routing rule:
+Add-AzApplicationGatewayRequestRoutingRule -Name $reqRoutingRuleName -ApplicationGateway $gw -RuleType Basic -BackendHttpSettings $httpSettings -HttpListener $httpListener -BackendAddressPool $backendPool
+
+# Update Application Gateway with the new routing rule:
+Set-AzApplicationGateway -ApplicationGateway $gw
+```
+++
+## Testing
+
+Before testing the application, make sure that the backend health shows as healthy:
+
+### [Azure portal](#tab/azure-portal/defaultdomain)
+
+Open the "Backend health" section and ensure the "Status" column shows "Healthy" for the combination of HTTP Setting and Backend Pool.
++
+Now browse to the web application using either the Application Gateway IP address or the associated DNS name for the IP address. Both can be found on the Application Gateway "Overview" page as a property under "Essentials". Alternatively, the Public IP Address resource also shows the IP address and associated DNS name.
+
+Pay attention to the following non-exhaustive list of potential symptoms when testing the application:
+- redirections pointing to ".azurewebsites.net" directly instead of to Application Gateway
+  - this includes authentication redirects that try to access ".azurewebsites.net" directly
+- domain-bound cookies not being passed on to the backend
+  - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
+
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) indicate that your web application doesn't deal well with the host name being rewritten. This is very common. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
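+As a hypothetical illustration of the first symptom (the hostnames are examples), a redirect response that would send the browser around Application Gateway looks like this:
+
+```
+## Request sent through Application Gateway:
+Request URL: http://www.contoso.com/path
+Host: www.contoso.com
+
+## Response headers:
+Status Code: 301 Moved Permanently
+Location: http://contoso.azurewebsites.net/path/
+```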
+
+### [Azure portal](#tab/azure-portal/customdomain)
+
+Open the "Backend health" section and ensure the "Status" column shows "Healthy" for the combination of HTTP Setting and Backend Pool.
++
+Now browse to the web application using the custom domain which you associated with both Application Gateway and the App Service in the backend.
+
+### [PowerShell](#tab/azure-powershell/customdomain)
+
+Check that the backend health for the Backend Pool and HTTP Settings shows as "Healthy":
+
+```powershell
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Check health:
+Get-AzApplicationGatewayBackendHealth -ResourceGroupName $rgName -Name $appGwName
+```
+
+To test the configuration, we'll request content from the App Service through Application Gateway using the custom domain:
+
+```powershell
+$customDomainName = "<FQDN for custom domain pointing to Application Gateway>"
+Invoke-WebRequest "https://$customDomainName"
+```
+
+### [PowerShell](#tab/azure-powershell/defaultdomain)
+
+Check that the backend health for the Backend Pool and HTTP Settings shows as "Healthy":
+
+```powershell
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Check health:
+Get-AzApplicationGatewayBackendHealth -ResourceGroupName $rgName -Name $appGwName
+```
+
+To test the configuration, we'll request content from the App Service through Application Gateway using the IP address:
+
+```powershell
+$rgName = "<name of resource group for App Gateway>"
+$appGwName = "<name of the App Gateway>"
+
+# Get existing Application Gateway:
+$gw = Get-AzApplicationGateway -Name $appGwName -ResourceGroupName $rgName
+
+# Get public IP address:
+$ipAddressResourceId = $gw.FrontendIPConfigurations.PublicIPAddress.Id
+$ipAddressResource = Get-AzResource -ResourceId $ipAddressResourceId
+$publicIp = Get-AzPublicIpAddress -ResourceGroupName $ipAddressResource.ResourceGroupName -Name $ipAddressResource.Name
+Write-Host "Public ip address for Application Gateway is $($publicIp.IpAddress)"
+Invoke-WebRequest "http://$($publicIp.IpAddress)"
+```
+
+Pay attention to the following non-exhaustive list of potential symptoms when testing the application:
+- redirections pointing to ".azurewebsites.net" directly instead of to Application Gateway
+  - this includes [App Service Authentication](../app-service/configure-authentication-provider-aad.md) redirects that try to access ".azurewebsites.net" directly
+- domain-bound cookies not being passed on to the backend
+  - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
+
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) indicate that your web application doesn't deal well with the host name being rewritten. This is very common. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
+++
+## Restrict access
+
+The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options:
+
+- Configure [Access restriction rules based on service endpoints](../app-service/networking-features.md#access-restriction-rules-based-on-service-endpoints). This allows you to lock down inbound access to the app, making sure the source address is your Application Gateway.
+- Use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
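+As a sketch of the static IP restriction option (the resource names and IP address are hypothetical placeholders), the `Add-AzWebAppAccessRestrictionRule` cmdlet from the Az.Websites module can add such a rule:
+
+```powershell
+# Hypothetical values; replace with your own resource names and the
+# frontend public IP address of your Application Gateway:
+Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group>" `
+    -WebAppName "<web app name>" -Name "AppGwOnly" -Priority 100 `
+    -Action Allow -IpAddress "203.0.113.10/32"
+```
+
+With this rule in place, requests that don't originate from the listed address receive a 403 from App Service.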
application-gateway Create Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-web-app.md
- Title: Configure App Service using PowerShell-
-description: This article provides guidance on how to configure web apps as back end hosts on an existing or new application gateway.
---- Previously updated : 11/15/2019----
-# Configure App Service with Application Gateway using PowerShell
-
-Application gateway allows you to have an App Service app or other multi-tenant service as a back-end pool member. In this article, you learn to configure an App Service app with Application Gateway. The first example shows you how to configure an existing application gateway to use a web app as a back-end pool member. The second example shows you how to create a new application gateway with a web app as a back-end pool member.
--
-## Configure a web app behind an existing application gateway
-
-The following example adds a web app as a back-end pool member to an existing application gateway. Both the switch `-PickHostNameFromBackendHttpSettings` on the probe configuration and `-PickHostNameFromBackendAddress` on the back-end HTTP settings must be provided in order for web apps to work.
-
-```powershell
-# FQDN of the web app
-$webappFQDN = "<enter your webapp FQDN, e.g. mywebsite.azurewebsites.net>"
-
-# Retrieve the resource group
-$rg = Get-AzResourceGroup -Name 'your resource group name'
-
-# Retrieve an existing application gateway
-$gw = Get-AzApplicationGateway -Name 'your application gateway name' -ResourceGroupName $rg.ResourceGroupName
-
-# Define the status codes to match for the probe
-$match=New-AzApplicationGatewayProbeHealthResponseMatch -StatusCode 200-399
-
-# Add a new probe to the application gateway
-Add-AzApplicationGatewayProbeConfig -name webappprobe2 -ApplicationGateway $gw -Protocol Http -Path / -Interval 30 -Timeout 120 -UnhealthyThreshold 3 -PickHostNameFromBackendHttpSettings -Match $match
-
-# Retrieve the newly added probe
-$probe = Get-AzApplicationGatewayProbeConfig -name webappprobe2 -ApplicationGateway $gw
-
-# Configure an existing backend http settings
-Set-AzApplicationGatewayBackendHttpSettings -Name appGatewayBackendHttpSettings -ApplicationGateway $gw -PickHostNameFromBackendAddress -Port 80 -Protocol http -CookieBasedAffinity Disabled -RequestTimeout 30 -Probe $probe
-
-# Add the web app to the backend pool
-Set-AzApplicationGatewayBackendAddressPool -Name appGatewayBackendPool -ApplicationGateway $gw -BackendFqdns $webappFQDN
-
-# Update the application gateway
-Set-AzApplicationGateway -ApplicationGateway $gw
-```
-
-## Configure a web application behind a new application gateway
-
-This scenario deploys a web app with the ASP.NET getting started website and an application gateway.
-
-```powershell
-# Defines a variable for a dotnet get started web app repository location
-$gitrepo="https://github.com/Azure-Samples/app-service-web-dotnet-get-started.git"
-
-# Unique web app name
-$webappname="mywebapp$(Get-Random)"
-
-# Creates a resource group
-$rg = New-AzResourceGroup -Name ContosoRG -Location Eastus
-
-# Create an App Service plan in Free tier.
-New-AzAppServicePlan -Name $webappname -Location EastUs -ResourceGroupName $rg.ResourceGroupName -Tier Free
-
-# Creates a web app
-$webapp = New-AzWebApp -ResourceGroupName $rg.ResourceGroupName -Name $webappname -Location EastUs -AppServicePlan $webappname
-
-# Configure GitHub deployment from your GitHub repo and deploy once to web app.
-$PropertiesObject = @{
- repoUrl = "$gitrepo";
- branch = "master";
- isManualIntegration = "true";
-}
-Set-AzResource -PropertyObject $PropertiesObject -ResourceGroupName $rg.ResourceGroupName -ResourceType Microsoft.Web/sites/sourcecontrols -ResourceName $webappname/web -ApiVersion 2015-08-01 -Force
-
-# Creates a subnet for the application gateway
-$subnet = New-AzVirtualNetworkSubnetConfig -Name subnet01 -AddressPrefix 10.0.0.0/24
-
-# Creates a vnet for the application gateway
-$vnet = New-AzVirtualNetwork -Name appgwvnet -ResourceGroupName $rg.ResourceGroupName -Location EastUs -AddressPrefix 10.0.0.0/16 -Subnet $subnet
-
-# Retrieve the subnet object for use later
-$subnet=$vnet.Subnets[0]
-
-# Create a public IP address
-$publicip = New-AzPublicIpAddress -ResourceGroupName $rg.ResourceGroupName -name publicIP01 -location EastUs -AllocationMethod Dynamic
-
-# Create a new IP configuration
-$gipconfig = New-AzApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet
-
-# Create a backend pool with the hostname of the web app
-$pool = New-AzApplicationGatewayBackendAddressPool -Name appGatewayBackendPool -BackendFqdns $webapp.HostNames
-
-# Define the status codes to match for the probe
-$match = New-AzApplicationGatewayProbeHealthResponseMatch -StatusCode 200-399
-
-# Create a probe with the PickHostNameFromBackendHttpSettings switch for web apps
-$probeconfig = New-AzApplicationGatewayProbeConfig -name webappprobe -Protocol Http -Path / -Interval 30 -Timeout 120 -UnhealthyThreshold 3 -PickHostNameFromBackendHttpSettings -Match $match
-
-# Define the backend http settings
-$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name appGatewayBackendHttpSettings -Port 80 -Protocol Http -CookieBasedAffinity Disabled -RequestTimeout 120 -PickHostNameFromBackendAddress -Probe $probeconfig
-
-# Create a new front-end port
-$fp = New-AzApplicationGatewayFrontendPort -Name frontendport01 -Port 80
-
-# Create a new front end IP configuration
-$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name fipconfig01 -PublicIPAddress $publicip
-
-# Create a new listener using the front-end ip configuration and port created earlier
-$listener = New-AzApplicationGatewayHttpListener -Name listener01 -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp
-
-# Create a new rule
-$rule = New-AzApplicationGatewayRequestRoutingRule -Name rule01 -RuleType Basic -BackendHttpSettings $poolSetting -HttpListener $listener -BackendAddressPool $pool
-
-# Define the application gateway SKU to use
-$sku = New-AzApplicationGatewaySku -Name Standard_Small -Tier Standard -Capacity 2
-
-# Create the application gateway
-$appgw = New-AzApplicationGateway -Name ContosoAppGateway -ResourceGroupName $rg.ResourceGroupName -Location EastUs -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting -Probes $probeconfig -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku
-```
-
-## Get application gateway DNS name
-
-Once the gateway is created, the next step is to configure the front end for communication. When using a public IP, the application gateway receives a dynamically assigned DNS name, which is not friendly. To ensure end users can reach the application gateway, a CNAME record can be used to point to its public endpoint. To create the alias, retrieve the details of the application gateway and its associated IP/DNS name using the PublicIPAddress element attached to the application gateway. This can be done with Azure DNS or other DNS providers, by creating a CNAME record that points to the [public IP address](../dns/dns-custom-domain.md#public-ip-address). The use of A records is not recommended because the VIP may change when the application gateway restarts.
-
-```powershell
-Get-AzPublicIpAddress -ResourceGroupName ContosoRG -Name publicIP01
-```
-
-```
-Name : publicIP01
-ResourceGroupName : ContosoRG
-Location : eastus
-Id : /subscriptions/<subscription_id>/resourceGroups/ContosoRG/providers/Microsoft.Network/publicIPAddresses/publicIP01
-Etag : W/"00000d5b-54ed-4907-bae8-99bd5766d0e5"
-ResourceGuid : 00000000-0000-0000-0000-000000000000
-ProvisioningState : Succeeded
-Tags :
-PublicIpAllocationMethod : Dynamic
-IpAddress : xx.xx.xxx.xx
-PublicIpAddressVersion : IPv4
-IdleTimeoutInMinutes : 4
-IpConfiguration : {
- "Id": "/subscriptions/<subscription_id>/resourceGroups/ContosoRG/providers/Microsoft.Network/applicationGateways/ContosoAppGateway/frontendIP
- Configurations/frontend1"
- }
-DnsSettings : {
- "Fqdn": "00000000-0000-xxxx-xxxx-xxxxxxxxxxxx.cloudapp.net"
- }
-```
-
-## Restrict access
-
-The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions.
-
-One way you can restrict access to your web apps is to use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
-
-## Next steps
-
-Learn how to configure redirection by visiting: [Configure redirection on Application Gateway with PowerShell](redirect-overview.md).
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
Application Gateway inserts an X-Forwarded-For header into all requests before i
#### Modify a redirection URL
-When a back-end application sends a redirection response, you might want to redirect the client to a different URL than the one specified by the back-end application. For example, you might want to do this when an app service is hosted behind an application gateway and requires the client to do a redirection to its relative path. (For example, a redirect from contoso.azurewebsites.net/path1 to contoso.azurewebsites.net/path2.)
+Modification of a redirect URL can be useful under certain circumstances. For example, clients that were originally redirected to a path like "/blog" should now be sent to "/updates" because of a change in content structure.
-Because App Service is a multitenant service, it uses the host header in the request to route the request to the correct endpoint. App services have a default domain name of \*.azurewebsites.net (say contoso.azurewebsites.net) that's different from the application gateway's domain name (say contoso.com). Because the original request from the client has the application gateway's domain name (contoso.com) as the hostname, the application gateway changes the hostname to contoso.azurewebsites.net. It makes this change so that the app service can route the request to the correct endpoint.
+> [!WARNING]
+> The need to modify a redirection URL sometimes comes up in the context of a configuration whereby Application Gateway is configured to override the hostname towards the backend. The hostname as seen by the backend is in that case different from the hostname as seen by the browser. In this situation, the redirect would not use the correct hostname. This configuration is not recommended.
+>
+> The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its back-end web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the Location header on the response as described in the example below should be considered a workaround and does not address the root cause.
When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client will make the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
application-gateway Troubleshoot App Service Redirection App Service Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/troubleshoot-app-service-redirection-app-service-url.md
Learn how to diagnose and resolve issues you might encounter when Azure App Serv
## Overview
-In this article, you'll learn how to troubleshoot the following issues:
+In this article, you'll learn how to troubleshoot the following issues, which are described in more detail in [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation#potential-issues):
-* The app service URL is exposed in the browser when there's a redirection.
-* The app service ARRAffinity cookie domain is set to the app service host name, example.azurewebsites.net, instead of the original host.
+* [Incorrect absolute URLs](/azure/architecture/best-practices/host-name-preservation#incorrect-absolute-urls)
+* [Incorrect redirect URLs](/azure/architecture/best-practices/host-name-preservation#incorrect-redirect-urls)
+ * the app service URL is exposed in the browser when there's a redirection
+ * an example of this: an OIDC authentication flow is broken because of a redirect with the wrong hostname; this includes the use of [App Service Authentication and Authorization](../app-service/overview-authentication-authorization.md)
+* [Broken cookies](/azure/architecture/best-practices/host-name-preservation#broken-cookies)
+ * cookies are not propagated between the browser and the App Service
+ * an example of this: the App Service ARRAffinity cookie domain is set to the App Service host name and is tied to "example.azurewebsites.net" instead of the original host. As a result, session affinity is broken.
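+As a hypothetical example of the broken-cookie symptom (the values are illustrative), the browser requests "www.contoso.com" but receives a cookie scoped to the App Service default domain, so the browser won't send the cookie back:
+
+```
+## Request host as seen by the browser:
+Host: www.contoso.com
+
+## Response header set by App Service:
+Set-Cookie: ARRAffinity=<value>;Path=/;HttpOnly;Domain=example.azurewebsites.net
+```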
-When a back-end application sends a redirection response, you might want to redirect the client to a different URL than the one specified by the back-end application. You might want to do this when an app service is hosted behind an application gateway and requires the client to do a redirection to its relative path. An example is a redirect from contoso.azurewebsites.net/path1 to contoso.azurewebsites.net/path2.
+The root cause of the above symptoms is a setup that overrides the hostname used by Application Gateway towards App Service into a hostname different from the one seen by the browser. Often the hostname is overridden to the default App Service "azurewebsites.net" domain.
-When the app service sends a redirection response, it uses the same host name in the location header of its response as the one in the request it receives from the application gateway. For example, the client makes the request directly to contoso.azurewebsites.net/path2 instead of going through the application gateway contoso.com/path2. You don't want to bypass the application gateway.
-
-This issue might happen for the following main reasons:
-- You have redirection configured on your app service. Redirection can be as simple as adding a trailing slash to the request.
-- You have Azure Active Directory authentication, which causes the redirection.
-
-Also, when you use app services behind an application gateway, the domain name associated with the application gateway (example.com) is different from the domain name of the app service (say, example.azurewebsites.net). The domain value for the ARRAffinity cookie set by the app service carries the example.azurewebsites.net domain name, which isn't desirable. The original host name, example.com, should be the domain name value in the cookie.
## Sample configuration
-
-- HTTP listener: Basic or multi-site
-- Back-end address pool: App Service
-- HTTP settings: **Pick Hostname from Backend Address** enabled
-- Probe: **Pick Hostname from HTTP Settings** enabled
+If your configuration matches one of the two situations below, your setup is subject to the instructions in this article:
+- **Pick Hostname from Backend Address** is enabled in HTTP Settings
+- **Override with specific domain name** is set to a value different from what the browser request has
## Cause
-App Service is a multitenant service, so it uses the host header in the request to route the request to the correct endpoint. The default domain name of App Services, *.azurewebsites.net (say, contoso.azurewebsites.net), is different from the application gateway's domain name (say, contoso.com).
-
-The original request from the client has the application gateway's domain name, contoso.com, as the host name. You need to configure the application gateway to change the host name in the original request to the app service's host name when it routes the request to the app service back end. Use the switch **Pick Hostname from Backend Address** in the application gateway's HTTP setting configuration. Use the switch **Pick Hostname from Backend HTTP Settings** in the health probe configuration.
---
-![Application gateway changes host name](./media/troubleshoot-app-service-redirection-app-service-url/appservice-1.png)
-
-When the app service does a redirection, it uses the overridden host name contoso.azurewebsites.net in the location header instead of the original host name contoso.com, unless configured otherwise. Check the following example request and response headers.
-```
-## Request headers to Application Gateway:
-
-Request URL: http://www.contoso.com/path
+App Service is a multitenant service, so it uses the host header in the request to route the request to the correct endpoint. The default domain name of App Services, *.azurewebsites.net (say, contoso.azurewebsites.net), is different from the application gateway's domain name (say, contoso.com). The backend App Service is missing the required context to generate redirect URLs or cookies that align with the domain as seen by the browser.
-Request Method: GET
+## Solution
-Host: www.contoso.com
+The production-recommended solution is to configure Application Gateway and App Service to not override the hostname. Follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](./configure-web-app.md).
-## Response headers:
+Only consider applying another workaround (like a rewrite of the Location header as described below) after assessing the implications as described in the article [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation). These implications include the potential for domain-bound cookies, and for absolute URLs outside of the Location header, to remain broken.
-Status Code: 301 Moved Permanently
+## Workaround: rewrite the Location header
-Location: http://contoso.azurewebsites.net/path/
-
-Server: Microsoft-IIS/10.0
-
-Set-Cookie: ARRAffinity=b5b1b14066f35b3e4533a1974cacfbbd969bf1960b6518aa2c2e2619700e4010;Path=/;HttpOnly;Domain=contoso.azurewebsites.net
-
-X-Powered-By: ASP.NET
-```
-In the previous example, notice that the response header has a status code of 301 for redirection. The location header has the app service's host name instead of the original host name `www.contoso.com`.
-
-## Solution: Rewrite the location header
+> [!WARNING]
+> This configuration comes with limitations. We recommend reviewing the implications of using different host names between the client and Application Gateway and between Application Gateway and App Service in the backend. For more information, please review the article in Architecture Center: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation)
Set the host name in the location header to the application gateway's domain name. To do this, create a [rewrite rule](./rewrite-http-headers-url.md) with a condition that evaluates if the location header in the response contains azurewebsites.net. It must also perform an action to rewrite the location header to have the application gateway's host name. For more information, see instructions on [how to rewrite the location header](./rewrite-http-headers-url.md#modify-a-redirection-url).

> [!NOTE]
> The HTTP header rewrite support is only available for the [Standard_v2 and WAF_v2 SKU](./application-gateway-autoscaling-zone-redundant.md) of Application Gateway. We recommend [migrating to v2](./migrate-v1-v2.md) for Header Rewrite and other [advanced capabilities](./overview-v2.md#feature-comparison-between-v1-sku-and-v2-sku) that are available with v2 SKU.
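As a hedged sketch of such a rewrite rule in PowerShell (the names "LocationRewrite", "ResponseRewrites", and the "contoso.com" hostname are placeholders; verify the cmdlets against your Az.Network version), the condition matches azurewebsites.net in the Location response header and the action rebuilds the header with the application gateway's hostname:

```powershell
$gw = Get-AzApplicationGateway -Name "<App Gateway name>" -ResourceGroupName "<resource group>"

# Condition: the Location response header points to azurewebsites.net
$condition = New-AzApplicationGatewayRewriteRuleCondition -Variable "http_resp_Location" `
    -Pattern "(https?):\/\/.*azurewebsites\.net(.*)$" -IgnoreCase

# Action: re-insert the gateway's hostname, keeping the captured scheme and path
$headerConfig = New-AzApplicationGatewayRewriteRuleHeaderConfiguration -HeaderName "Location" `
    -HeaderValue "{http_resp_Location_1}://contoso.com{http_resp_Location_2}"
$actionSet = New-AzApplicationGatewayRewriteRuleActionSet -ResponseHeaderConfiguration $headerConfig

# Bundle into a rule set and update the gateway
$rule = New-AzApplicationGatewayRewriteRule -Name "LocationRewrite" -Condition $condition -ActionSet $actionSet
Add-AzApplicationGatewayRewriteRuleSet -ApplicationGateway $gw -Name "ResponseRewrites" -RewriteRule $rule
Set-AzApplicationGateway -ApplicationGateway $gw
```

Note that the rewrite rule set must also be associated with a request routing rule before it takes effect.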
-## Alternate solution: Use a custom domain name
-
-Using App Service's Custom Domain feature is another solution to always redirect the traffic to Application Gateway's domain name (`www.contoso.com` in our example). This configuration also serves as a solution for the ARR Affinity cookie problem. By default, the ARRAffinity cookie domain is set to the App Service's default host name (example.azurewebsites.net) instead of the Application Gateway's domain name. Therefore, the browser in such cases will reject the cookie due to the difference in the domain names of the request and the cookie.
-
-You can follow the given method for both the Redirection and ARRAffinity's cookie domain mismatch issues. This method will need you to have your custom domain's DNS zone access.
-
-**Step1**: Set a Custom Domain in App Service and verify the domain ownership by adding the [CNAME & TXT DNS records](../app-service/app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id).
-The records would look similar to
-- `www.contoso.com` IN CNAME `contoso.azurewebsites.net`
-- `asuid.www.contoso.com` IN TXT "`<verification id string>`"
-
-
-**Step2**: The CNAME record in the previous step was only needed for the domain verification. Ultimately, we need the traffic to route via Application Gateway. You can thus modify `www.contoso.com`'s CNAME now to point to Application Gateway's FQDN. To set a FQDN for your Application Gateway, navigate to its Public IP address resource and assign a "DNS Name label" for it. The updated CNAME record should now look as
-- `www.contoso.com` IN CNAME `contoso.eastus.cloudapp.azure.com`--
-**Step3**: Disable "Pick Hostname from Backend Address" for the associated HTTP Setting.
-
-In PowerShell, don't use the `-PickHostNameFromBackendAddress` switch in the `Set-AzApplicationGatewayBackendHttpSettings` command.
--
-**Step4**: For the probes to determine the backend as healthy and an operational traffic, set a custom Health Probe with Host field as custom or default domain of the App Service.
-
-In PowerShell, don't use the `-PickHostNameFromBackendHttpSettings` switch in the `Set-AzApplicationGatewayProbeConfig` command and use either the custom or default domain of the App Service in the -HostName switch of the probe.
-
-To implement the previous steps using PowerShell for an existing setup, use the sample PowerShell script that follows. Note how we haven't used the **-PickHostname** switches in the probe and HTTP settings configuration.
-
-```azurepowershell-interactive
-$gw=Get-AzApplicationGateway -Name AppGw1 -ResourceGroupName AppGwRG
-Set-AzApplicationGatewayProbeConfig -ApplicationGateway $gw -Name AppServiceProbe -Protocol Http -HostName "example.azurewebsites.net" -Path "/" -Interval 30 -Timeout 30 -UnhealthyThreshold 3
-$probe=Get-AzApplicationGatewayProbeConfig -Name AppServiceProbe -ApplicationGateway $gw
-Set-AzApplicationGatewayBackendHttpSettings -Name appgwhttpsettings -ApplicationGateway $gw -Port 80 -Protocol Http -CookieBasedAffinity Disabled -Probe $probe -RequestTimeout 30
-Set-AzApplicationGateway -ApplicationGateway $gw
-```
- ```
- ## Request headers to Application Gateway:
-
- Request URL: http://www.contoso.com/path
-
- Request Method: GET
-
- Host: www.contoso.com
-
- ## Response headers:
-
- Status Code: 301 Moved Permanently
-
- Location: http://www.contoso.com/path/
-
- Server: Microsoft-IIS/10.0
-
- Set-Cookie: ARRAffinity=b5b1b14066f35b3e4533a1974cacfbbd969bf1960b6518aa2c2e2619700e4010;Path=/;HttpOnly;Domain=www.contoso.com
- X-Powered-By: ASP.NET
- ```
- ## Next steps
+## Next steps
If the preceding steps didn't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Client applications can be designed to take advantage of TPM attestation by dele
Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-solutions-amd.md) and aims to improve VM security posture by removing trust in host, hypervisor and Cloud Service Provider (CSP). To achieve this, CVM offers VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
+### Trusted Launch attestation
+
+Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [Trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When Secure Boot and vTPM are enabled on the VM and the guest attestation extension is installed, vTPM measurements are submitted to Azure Attestation periodically to monitor boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through alerts and recommendations.
+ ## Azure Attestation can run in a TEE
+
+ Azure Attestation is critical to Confidential Computing scenarios, as it performs the following actions:
Clusters deployed in two regions will operate independently under normal circums
## Next steps
- Learn about [Azure Attestation basic concepts](basic-concepts.md)
- [How to author and sign an attestation policy](author-sign-policy.md)
-- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md
Title: Forward Azure Automation job data to Azure Monitor logs
description: This article tells how to send job status and runbook job streams to Azure Monitor logs. Previously updated : 09/02/2020 Last updated : 03/10/2022
-# Forward Azure Automation job data to Azure Monitor logs
+# Forward Azure Automation diagnostic logs to Azure Monitor
-Azure Automation can send runbook job status and job streams to your Log Analytics workspace. This process does not involve workspace linking and is completely independent. Job logs and job streams are visible in the Azure portal, or with PowerShell, for individual jobs and this allows you to perform simple investigations. Now with Azure Monitor logs you can:
+Azure Automation can send runbook job status and job streams to your Log Analytics workspace. This process doesn't involve workspace linking, is completely independent, and allows you to perform simple investigations. Job logs and job streams are visible in the Azure portal, or with PowerShell, for individual jobs. With Azure Monitor logs for your Automation account, you can:
-* Get insight into the status of your Automation jobs.
-* Trigger an email or alert based on your runbook job status (for example, failed or suspended).
-* Write advanced queries across your job streams.
-* Correlate jobs across Automation accounts.
-* Use custom views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics.
+ - Get insights into the status of your Automation jobs.
+ - Trigger an email or alert based on your runbook job status (for example, failed or suspended).
+ - Write advanced queries across your job streams.
+ - Correlate jobs across Automation accounts.
+ - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](/azure/azure-portal/azure-portal-dashboards).
+ - Get the audit logs related to Automation accounts, runbooks, and other asset create, modify, and delete operations.
-## Prerequisites
+Using Azure Monitor logs, you can consolidate logs from different resources in the same workspace, where they can be analyzed with [queries](/azure/azure-monitor/logs/log-query-overview) to quickly retrieve, consolidate, and analyze the collected data. You can create and test queries using [Log Analytics](/azure/azure-monitor/logs/log-query-overview) in the Azure portal, and then either directly analyze the data using these tools or save queries for use with [visualization](/azure/azure-monitor/best-practices-analysis) or [alert rules](/azure/azure-monitor/alerts/alerts-overview).
-To start sending your Automation logs to Azure Monitor logs, you need:
+Azure Monitor uses a version of the [Kusto query language (KQL)](/azure/kusto/query/) used by Azure Data Explorer that is suitable for simple log queries. It also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language using [multiple lessons](/azure/azure-monitor/logs/get-started-queries).
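+
+For example, a sketch of a simple KQL aggregation over Automation job logs (the table and column names follow the Automation diagnostics schema described later in this article) might look like:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs"
+| summarize JobCount = count() by ResultType, bin(TimeGenerated, 1h)
+| order by TimeGenerated desc
+```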
-* The latest release of [Azure PowerShell](/powershell/azure/).
-* A Log Analytics workspace and it's resource ID. For more information, see [Get started with Azure Monitor logs](../azure-monitor/overview.md).
+## Azure Automation diagnostic settings
-* The resource ID of your Azure Automation account.
+You can forward the following platform logs and metric data using Automation diagnostic settings support:
-## How to find resource IDs
+| Data types | Description |
+| | |
+| Job Logs | Status of the runbook job in the Automation account.|
+| Job Streams | Status of the job streams in the runbook defined in the Automation account.|
+| DSCNodeStatus | Status of the DSC node.|
+| AuditEvent | All resource logs that record customer interactions with data or the settings of the Azure Automation service.|
+| Metrics | Total jobs, total update deployment machine runs, total update deployment runs.|
-1. Use the following command to find the resource ID for your Azure Automation account:
- ```powershell-interactive
- # Find the ResourceId for the Automation account
- Get-AzResource -ResourceType "Microsoft.Automation/automationAccounts"
- ```
+## Configure diagnostic settings in Azure portal
-2. Copy the value for **ResourceID**.
+You can configure diagnostic settings in the Azure portal from the menu for the Automation account resource.
-3. Use the following command to find the resource ID of your Log Analytics workspace:
+1. In the Automation account menu, under **Monitoring**, select **Diagnostic settings**.
- ```powershell-interactive
- # Find the ResourceId for the Log Analytics workspace
- Get-AzResource -ResourceType "Microsoft.OperationalInsights/workspaces"
- ```
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/select-diagnostic-settings-inline.png" alt-text="Screenshot showing selection of diagnostic setting option." lightbox="media/automation-manage-send-joblogs-log-analytics/select-diagnostic-settings-expanded.png":::
+
+1. Click **Add diagnostic setting**.
+
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/select-add-diagnostic-setting-inline.png" alt-text="Screenshot showing selection of add diagnostic setting." lightbox="media/automation-manage-send-joblogs-log-analytics/select-add-diagnostic-setting-expanded.png":::
-4. Copy the value for **ResourceID**.
+1. Enter a name in **Diagnostic setting name** if the setting doesn't already have one.
+
+ You can also view all categories of logs and metrics.
-To return results from a specific resource group, include the `-ResourceGroupName` parameter. For more information, see [Get-AzResource](/powershell/module/az.resources/get-azresource).
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/view-diagnostic-setting.png" alt-text="Screenshot showing all categories of logs and metrics.":::
-If you have more than one Automation account or workspace in the output of the preceding commands, you can find the name and other related properties that are part of the full resource ID of your Automation account by performing the following:
+ - **Logs and metrics to route**: For logs, choose a category group or select the individual checkboxes for each category of data you want to send to the destinations specified. Choose **AllMetrics** if you want to store metrics in Azure Monitor logs.
+ - **Destination details**: Select the checkbox for each destination. As you select each box, options appear that allow you to add more information.
+
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/destination-details-options-inline.png" alt-text="Screenshot showing selections in destination details section." lightbox="media/automation-manage-send-joblogs-log-analytics/destination-details-options-expanded.png":::
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the Azure portal, select your Automation account from the **Automation Accounts** page.
-1. On the page of the selected Automation account, under **Account Settings**, select **Properties**.
-1. In the **Properties** page, note the details shown below.
+ - **Log Analytics**: Enter the subscription ID and workspace name. If you don't have a workspace, you must [create one before proceeding](/azure/azure-monitor/logs/quick-create-workspace).
+
+ - **Event Hubs**: Specify the following criteria:
+ - Subscription: The same subscription as that of the Event Hub.
+ - Event Hub namespace: [Create Event Hub](/azure/event-hubs/event-hubs-create) if you don't have one yet.
+ - Event Hub name (optional): If you don't specify a name, an event hub is created for each log category. If you are sending multiple categories, specify a name to limit the number of Event Hubs created. See [Azure Event Hubs quotas and limits](/azure/event-hubs/event-hubs-quotas) for details.
+ - Event Hub policy (optional): A policy defines the permissions that the streaming mechanism has. See [Event Hubs feature](/azure/event-hubs/event-hubs-features#publisher-policy).
+
+ - **Storage**: Choose the subscription, storage account, and retention policy.
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/storage-account-details-inline.png" alt-text="Screenshot showing the storage account." lightbox="media/automation-manage-send-joblogs-log-analytics/storage-account-details-expanded.png":::
- ![Automation account properties](media/automation-manage-send-joblogs-log-analytics/automation-account-properties.png).
+ - **Partner integration**: You must first install a partner integration into your subscription. Configuration options will vary by partner. For more information, see [Azure Monitor integration](/azure/partner-solutions/overview).
+
+1. Click **Save**.
-## Configure diagnostic settings
+After a few moments, the new setting appears in your list of settings for this resource, and logs are streamed to the specified destinations as new event data is generated. There can be up to a 15-minute delay between an event being emitted and its appearance in the [Log Analytics workspace](/azure/azure-monitor/logs/data-ingestion-time).
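+
+As a sketch, the same diagnostic setting can also be created with the Az.Monitor PowerShell module; the resource IDs below are placeholders that you must replace with your own values:
+
+```powershell-interactive
+# Placeholder resource IDs - replace with your own values
+$automationAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>"
+$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
+
+# Send job logs, job streams, and audit events to the Log Analytics workspace
+Set-AzDiagnosticSetting -Name "AutomationDiagnostics" -ResourceId $automationAccountId `
+    -WorkspaceId $workspaceId -Enabled $true -Category JobLogs,JobStreams,AuditEvent
+```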
-Automation diagnostic settings supports forwarding the following platform logs and metric data:
+## Query the logs
-* JobLogs
-* JobStreams
-* DSCNodeStatus
-* Metrics - Total Jobs, Total Update Deployment Machine Runs, Total Update Deployment Runs
+To query the generated logs:
+
+1. In your Automation account, under **Monitoring**, select **Logs**.
+1. Under **All Queries**, select **Automation Jobs**.
+
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/select-query-logs.png" alt-text="Screenshot showing how to navigate to select Automation jobs.":::
+
+1. Select one of the queries you want to execute and click **Run**.
+1. To execute a custom query, close the **Queries** window, paste your custom query in the new query window, and click **Run**.
+
+ The output of the query is displayed in the **Results** pane.
+
+1. Click **New alert rule** to configure an Azure Monitor alert for this query.
+
+ :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/custom-query-inline.png" alt-text="Screenshot showing how to query logs." lightbox="media/automation-manage-send-joblogs-log-analytics/custom-query-expanded.png":::
-To start sending your Automation logs to Azure Monitor logs, review [create diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) to understand the feature and methods available to configure diagnostic settings to send platform logs.
## Azure Monitor log records
-Azure Automation diagnostics create two types of records in Azure Monitor logs, tagged as `AzureDiagnostics`. The tables in the next sections are examples of records that Azure Automation generates and the data types that appear in log search results.
+Azure Automation diagnostics create the following types of records in Azure Monitor logs, tagged as `AzureDiagnostics`. The tables in the following sections are examples of records that Azure Automation generates and the data types that appear in log search results.
### Job logs

| Property | Description |
| | |
| TimeGenerated |Date and time when the runbook job executed. |
-| RunbookName_s |The name of the runbook. |
-| Caller_s |The caller that initiated the operation. Possible values are either an email address or system for scheduled jobs. |
-| Tenant_g | GUID that identifies the tenant for the caller. |
+| RunbookName_s |Name/names of the runbook. |
+| Caller_s |Caller that initiated the operation. Possible values are either an email address or system for scheduled jobs. |
+| Tenant_g | GUID (globally unique identifier) that identifies the tenant for the caller. |
| JobId_g |GUID that identifies the runbook job. |
-| ResultType |The status of the runbook job. Possible values are:<br>- New<br>- Created<br>- Started<br>- Stopped<br>- Suspended<br>- Failed<br>- Completed |
+| ResultType |Status of the runbook job. Possible values are:<br>- New<br>- Created<br>- Started<br>- Stopped<br>- Suspended<br>- Failed<br>- Completed |
| Category | Classification of the type of data. For Automation, the value is JobLogs. |
-| OperationName | The type of operation performed in Azure. For Automation, the value is Job. |
-| Resource | The name of the Automation account |
+| OperationName | Type of operation performed in Azure. For Automation, the value is Job. |
+| Resource | Name of the Automation account |
| SourceSystem | System that Azure Monitor logs use to collect the data. The value is always Azure for Azure diagnostics. |
-| ResultDescription |The runbook job result state. Possible values are:<br>- Job is started<br>- Job Failed<br>- Job Completed |
-| CorrelationId |The correlation GUID of the runbook job. |
-| ResourceId |The Azure Automation account resource ID of the runbook. |
-| SubscriptionId | The Azure subscription GUID for the Automation account. |
-| ResourceGroup | The name of the resource group for the Automation account. |
-| ResourceProvider | The resource provider. The value is MICROSOFT.AUTOMATION. |
-| ResourceType | The resource type. The value is AUTOMATIONACCOUNTS. |
+| ResultDescription |Runbook job result state. Possible values are:<br>- Job is started<br>- Job Failed<br>- Job Completed |
+| CorrelationId |Correlation GUID of the runbook job. |
+| ResourceId |Azure Automation account resource ID of the runbook. |
+| SubscriptionId | Azure subscription GUID for the Automation account. |
+| ResourceGroup | Name of the resource group for the Automation account. |
+| ResourceProvider | Name of the resource provider. The value is MICROSOFT.AUTOMATION. |
+| ResourceType | Resource type. The value is AUTOMATIONACCOUNTS. |
### Job streams

| Property | Description |
| | |
-| TimeGenerated |Date and time when the runbook job executed. |
-| RunbookName_s |The name of the runbook. |
-| Caller_s |The caller that initiated the operation. Possible values are either an email address or system for scheduled jobs. |
-| StreamType_s |The type of job stream. Possible values are:<br>-Progress<br>- Output<br>- Warning<br>- Error<br>- Debug<br>- Verbose |
+| TimeGenerated |Date and time when the runbook job was executed. |
+| RunbookName_s |Name of the runbook. |
+| Caller_s |Caller that initiated the operation. Possible values are either an email address or system for scheduled jobs. |
+| StreamType_s |Type of job stream. Possible values are:<br>- Progress<br>- Output<br>- Warning<br>- Error<br>- Debug<br>- Verbose |
| Tenant_g | GUID that identifies the tenant for the caller. |
| JobId_g |GUID that identifies the runbook job. |
| ResultType |The status of the runbook job. Possible values are:<br>- In Progress |
| Category | Classification of the type of data. For Automation, the value is JobStreams. |
| OperationName | Type of operation performed in Azure. For Automation, the value is Job. |
-| Resource | The name of the Automation account. |
+| Resource | Name of the Automation account. |
| SourceSystem | System that Azure Monitor logs use to collect the data. The value is always Azure for Azure diagnostics. |
| ResultDescription |Description that includes the output stream from the runbook. |
-| CorrelationId |The correlation GUID of the runbook job. |
-| ResourceId |The Azure Automation account resource ID of the runbook. |
-| SubscriptionId | The Azure subscription GUID for the Automation account. |
-| ResourceGroup | The name of the resource group for the Automation account. |
-| ResourceProvider | The resource provider. The value is MICROSOFT.AUTOMATION. |
-| ResourceType | The resource type. The value is AUTOMATIONACCOUNTS. |
+| CorrelationId |Correlation GUID of the runbook job. |
+| ResourceId |Azure Automation account resource ID of the runbook. |
+| SubscriptionId | Azure subscription GUID for the Automation account. |
+| ResourceGroup | Name of the resource group for the Automation account. |
+| ResourceProvider | Resource provider. The value is MICROSOFT.AUTOMATION. |
+| ResourceType | Resource type. The value is AUTOMATIONACCOUNTS. |
+
+### Audit events
+| Property | Description |
+| | |
+| TenantID | GUID that identifies the tenant for the caller. |
+| TimeGenerated (UTC) | Date and time when the runbook job was executed.|
+| Category | AuditEvent|
+| ResourceGroup | Resource group name of the Automation account.|
+| Subscription Id | Azure subscription GUID for the Automation account.|
+| ResourceProvider | MICROSOFT.AUTOMATION|
+| Resource | Automation Account name|
+| ResourceType | AUTOMATIONACCOUNTS |
+| OperationName | Possible values are Update, Create, Delete.|
+| ResultType | Status of the runbook job. Possible value is: Completed.|
+| CorrelationId | Correlation GUID of the runbook job. |
+| ResultDescription | Runbook job result state. Possible values are Update, Create, Delete. |
+| Tenant_g | GUID that identifies the tenant for the caller. |
+| SourceSystem | System that Azure Monitor logs use to collect the data. The value is always Azure for Azure diagnostics. |
+| clientInfo_IpAddress_s | {scrubbed} |
+| clientInfo_PrincipalName_s | {scrubbed} |
+| clientInfo_TenantId_g | Tenant ID of the client.|
+| clientInfo_Issuer_s | |
+| clientInfo_ObjectId_g | Object ID of the client.|
+| clientInfo_AppId_g | AppID of the client.|
+| clientInfo_ClientRequestId_g | Request ID of the client.|
+| targetResources_Resource_s | Account, Job, Credential, Connections, Variables, Runbook. |
+| Type | AzureDiagnostics |
+| _ResourceId | Azure Automation account resource ID of the runbook. |
+ ## View Automation logs in Azure Monitor logs
+
+ Now that you've started sending your Automation job streams and logs to Azure Monitor logs, let's see what you can do with these logs inside Azure Monitor logs. To see the logs, run the following query:
-`AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION"`
-
-### Send an email when a runbook job fails or suspends
-
-The following steps show how to set up alerts in Azure Monitor to notify you when something goes wrong with a runbook job.
-
-To create an alert rule, start by creating a log search for the runbook job records that should invoke the alert. Click the **Alert** button to create and configure the alert rule.
-
-1. From the Log Analytics workspace Overview page, click **View logs**.
-
-2. Create a log search query for your alert by typing the following search into the query field: `AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Suspended")`<br><br>You can also group by the runbook name by using: `AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Suspended") | summarize AggregatedValue = count() by RunbookName_s`
-
- If you set up logs from more than one Automation account or subscription to your workspace, you can group your alerts by subscription and Automation account. Automation account name can be found in the `Resource` field in the search of `JobLogs`.
-
-3. To open the **Create rule** screen, click **New Alert Rule** at the top of the page. For more information on the options to configure the alert, see [Log alerts in Azure](../azure-monitor/alerts/alerts-unified-log.md).
+ ```kusto
+ AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION"
+ ```
-### Find all jobs that have completed with errors
+## Sample queries for job logs and job streams
-In addition to alerting on failures, you can find when a runbook job has a non-terminating error. In these cases, PowerShell produces an error stream, but the non-terminating errors don't cause your job to suspend or fail.
+### Find all jobs that completed with errors
-1. In your Log Analytics workspace, click **Logs**.
+In addition to scenarios like alerting on failures, you can find when a runbook job has a non-terminating error. In these cases, PowerShell produces an error stream, but the non-terminating errors don't cause your job to suspend or fail.
-2. In the query field, type `AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobStreams" and StreamType_s == "Error" | summarize AggregatedValue = count() by JobId_g`.
+1. In your Log Analytics workspace, click **Logs**.
+1. In the query field, type:
+ ```kusto
+    AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobStreams" and StreamType_s == "Error" | summarize AggregatedValue = count() by JobId_g
+ ```
+1. Click **Search**.
-3. Click the **Search** button.
### View job streams for a job
AzureDiagnostics
![Log Analytics Historical Job Status Chart](media/automation-manage-send-joblogs-log-analytics/historical-job-status-chart.png)
+### Find logs reporting errors in the automation jobs
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION"
+| where StreamType_s == "Error"
+| project TimeGenerated, Category, JobId_g, OperationName, RunbookName_s, ResultDescription, _ResourceId
+```
+### Find Azure Automation jobs that are completed
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and ResultType == "Completed"
+| project TimeGenerated, RunbookName_s, ResultType, _ResourceId, JobId_g
+```
+
+### Find Azure Automation jobs that are failed, suspended, or stopped
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Stopped" or ResultType == "Suspended")
+| project TimeGenerated, RunbookName_s, ResultType, _ResourceId, JobId_g
+```
+
+### List all runbooks and jobs that completed with errors
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobStreams" and StreamType_s == "Error"
+| project TimeGenerated, RunbookName_s, StreamType_s, _ResourceId, ResultDescription, JobId_g
+```
+
+### Send an email when a runbook job fails or suspends
+
+The following steps explain how to set up email alerts in Azure Monitor to notify you when something goes wrong with a runbook job.
+
+To create an alert rule, create a log search for the runbook job records that should invoke the alert, as described in [Query the logs](#query-the-logs). Then click **+New alert rule** to configure the alert rule.
+
+1. In your Automation account, under **Monitoring**, select **Logs**.
+1. Create a log search query for your alert by entering search criteria into the query field.
+
+ ```kusto
+ AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Suspended")
+ ```
+ You can also group by the runbook name by using:
+
+ ```kusto
+ AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Suspended") | summarize AggregatedValue = count() by RunbookName_s
+ ```
1. To open the **Create alert rule** screen, click **+New alert rule** at the top of the page. For more information on the options to configure the alerts, see [Log alerts in Azure](/azure/azure-monitor/alerts/alerts-log#create-a-log-alert-rule-in-the-azure-portal).
++
+## Azure Automation diagnostic audit logs
+
+You can now also send audit logs to the Azure Monitor workspace. This allows enterprises to monitor key Automation account activities for security and compliance. When enabled through the Azure diagnostic settings, you can collect telemetry about create, update, and delete operations for Automation runbooks, jobs, and automation assets like connections, credentials, variables, and certificates. You can also [configure alerts](#send-an-email-when-a-runbook-job-fails-or-suspends) for audit log conditions as part of your security monitoring requirements.
++
+## Difference between activity logs and audit logs
+
+The activity log is a [platform log](/azure/azure-monitor/essentials/platform-logs-overview) in Azure that provides insight into subscription-level events. The activity log for an Automation account includes information about when an automation resource is created, modified, or deleted. However, it doesn't capture the name or ID of the resource.
+
+Audit logs for Automation accounts capture the name and ID of the resource (such as an automation variable, credential, or connection), along with the type of operation performed on the resource. Azure Automation scrubs some details, such as client IP data, to conform with GDPR compliance.
+
+Activity logs show details such as the client IP because the activity log is a platform log that provides detailed diagnostic and auditing information for Azure resources. Activity log entries are automatically generated for activities that occur in Azure Resource Manager and are pushed to the activity log resource provider. Because activity logs are part of Azure monitoring, they show some client data to provide insights into client activity.
+
+## Sample queries for audit logs
+
+### Query to view Automation resource audit logs
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "AuditEvent"
+```
+
+### Query to monitor any variable update, create, or delete operation
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "AuditEvent" and targetResources_Resource_s == "Variable"
+```
+
+### Query to monitor any runbook operation like create, draft, or update
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "AuditEvent" and targetResources_Resource_s contains "Runbook"
+```
+
+### Query to monitor any certificate creation, update, or deletion
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "AuditEvent" and targetResources_Resource_s contains "Certificate"
+```
+
+### Query to monitor any credential creation, update, or deletion
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "AuditEvent" and targetResources_Resource_s contains "Credential"
+```
+ ### Filter job status output converted into a JSON object
-Recently we changed the behavior of how the Automation log data is written to the `AzureDiagnostics` table in the Log Analytics service, where it no longer breaks down the JSON properties into separate fields. If you configured your runbook to format objects in the output stream in JSON format as separate columns, it is necessary to reconfigure your queries to parse that field to a JSON object in order to access those properties. This is accomplished using [parsejson](/azure/data-explorer/kusto/query/samples?pivots=#parsejson) to access a specific JSON element in a known path.
+Recently we changed the behavior of how the Automation log data is written to the `AzureDiagnostics` table in the Log Analytics service, where it no longer breaks down the JSON properties into separate fields. If you configured your runbook to format objects in the output stream in JSON format as separate columns, it is necessary to reconfigure your queries to parse that field into a JSON object in order to access those properties. This is accomplished using [parse_json](/azure/data-explorer/kusto/query/samples?pivots=#parsejson) to access a specific JSON element in a known path.
For example, a runbook formats the *ResultDescription* property in the output stream in JSON format with multiple fields. To search for the status of your jobs that are in a failed state as specified in a field called **Status**, use this example query to search the *ResultDescription* with a status of **Failed**:
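A sketch of such a query, assuming the runbook writes a JSON object with a **Status** field to *ResultDescription*, might be:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobStreams"
| extend jsonResultDescription = parse_json(ResultDescription)
| where jsonResultDescription.Status == "Failed"
```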
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Review the Azure Policy recommendations for Azure Automation and act as appropri
## Next steps
-* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/automation/automation-role-based-access-control).
+* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/azure/automation/automation-role-based-access-control).
* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](./automation-managing-data.md).
-* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/automation/automation-secure-asset-encryption).
+* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/azure/automation/automation-secure-asset-encryption).
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
+
+ Title: Azure Automation services overview
+description: This article tells what Azure Automation services are and how to use them to automate the lifecycle of infrastructure and applications.
+
+keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions
Last updated : 03/04/2022+++
+# Choose the Automation services in Azure
+
+This article explains the various Automation services offered in the Azure environment. These services can automate business and operational processes and solve integration problems among multiple services, systems, and processes. Automation services can define inputs, actions, activities to be performed, conditions, error handling, and output generation. Using these services, you can run activities on a schedule or execute them manually on demand. Each service has its unique advantages and target audience.
+
+Using these services, you can shift effort from manually performing operational tasks toward building automation for these tasks, which helps you:
+
+- Reduce the time needed to perform an action
+- Reduce the risk in performing the action
+- Increase human capacity for further innovation
+- Standardize operations
+
+## Categories in Automation operations
+Automation is required in three broad categories of operations:
+
+- **Deployment and management of resources**: Create and configure resources programmatically using automation or infrastructure-as-code tooling to deliver repeatable and consistent deployment and management of cloud resources. For example, an Azure network security group can be deployed and its security group rules created using an Azure Resource Manager template or an automation script.
+
+- **Response to external events**: Diagnose and resolve issues in response to a critical external event, such as a database change or input submitted to a web page.
+
+- **Complex orchestration**: Define end-to-end automation workflows by integrating with first-party or third-party products.
+
+## Azure services for Automation
+
+Multiple Azure services can fulfill the above requirements. Each service has its benefits and limitations, and customers can use multiple services to meet their automation requirements.
+
+**Deployment and management of resources**
+ - Azure Resource Manager (ARM) templates with Bicep
+ - Azure Blueprints
+ - Azure Automation
+ - Azure Automanage (for machine configuration and management)
+
+**Responding to external events**
+ - Azure Functions
+ - Azure Automation
+ - Azure Policy Guest Config (to take an action when there's a change in the compliance state of a resource)
+
+**Complex orchestration and integration with 1st or 3rd party products**
+ - Azure Logic Apps
+ - Azure Functions or Azure Automation. (Azure Logic Apps has more than 400 connectors to other services, including Azure Automation and Azure Functions, which you can use to meet complex automation scenarios.)
++
+ :::image type="content" source="media/automation-services/automation-services-overview.png" alt-text="Screenshot shows an Overview of Automation services.":::
++
+## Deploy and manage Automation services
+
+### Azure Resource Manager (ARM) template
+
+Azure Resource Manager provides a language to develop repeatable and consistent deployment templates for Azure resources. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. It uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. [Learn more](/azure/azure-resource-manager/templates/overview).
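For illustration, a minimal ARM template that deploys a single storage account might look like the following sketch (the parameter name and SKU here are illustrative choices, not prescriptive):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Globally unique storage account name." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

You declare the end state (one storage account with a given SKU); Resource Manager works out the deployment steps.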
+
+### Bicep
+
+We've introduced a new language named [Bicep](/azure/azure-resource-manager/bicep/overview) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend Bicep. For more information, see [What is Bicep?](/azure/azure-resource-manager/bicep/overview)
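As a sketch of the syntax, a Bicep file that deploys a single storage account (the parameter name and SKU are placeholders) can be as compact as:

```bicep
@description('Globally unique storage account name.')
param storageAccountName string

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```

During deployment, this file is transpiled to an equivalent ARM JSON template.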
+
+The following table describes the scenarios and users for ARM template and Bicep:
+
| **Scenarios** | **Users** |
| --- | --- |
| Create, manage, and update infrastructure resources, such as virtual machines, networks, storage accounts, containers, and so on. </br> </br> Deploy apps, add tags, assign policies, and assign role-based access control, all declaratively as code and integrated with your CI\CD tools. </br> </br> Manage multiple environments such as production, non-production, and disaster recovery. </br> </br> Deploy resources consistently and reliably at scale. | Application developers, infrastructure administrators, and DevOps engineers using Azure for the first time or using Azure as their primary cloud. </br> </br> IT engineers\cloud architects responsible for cloud infrastructure deployment. |
++
+### Azure Blueprints (Preview)
+
 Azure Blueprints (Preview) define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as role assignments, policy assignments, ARM templates, and resource groups. [Learn more](/azure/governance/blueprints/overview).
+
| **Scenarios** | **Users** |
| --- | --- |
| Create, manage, and update infrastructure resources to ensure that the deployed infrastructure meets the organization's compliance standards. </br> </br> Audit and track Azure deployments. | Auditors and central information technology groups responsible for ensuring that the deployed Azure infrastructure meets the organization's compliance standards. |
++
+
+### [Azure Automation](/azure/automation/overview)
+
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or a hybrid environment.
+It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](/azure/automation/automation-runbook-gallery).
+
| **Scenarios** | **Users** |
| --- | --- |
| Schedule tasks, for example, stop dev/test VMs or services at night and turn them on during the day. </br> </br> Respond to alerts such as system alerts, service alerts, and high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation, where you can automate management of on-premises servers such as SQL Server, Active Directory, and so on. </br> </br> Azure resource lifecycle management and governance, including resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators who manage the on-premises infrastructure using scripts or run long-running jobs such as month-end operations on servers running on-premises. |
+
+### Azure Automation based in-guest management
+
+**Configuration management**: Collects inventory and tracks changes in your environment. [Learn more](/azure/automation/change-tracking/overview).
+You can configure the desired state of your machines to discover and correct configuration drift. [Learn more](/azure/automation/automation-dsc-overview).
+
+**Update management**: Assesses update compliance of your servers and schedules update installation on your machines. [Learn more](/azure/automation/update-management/overview).
+
| **Scenarios** | **Users** |
| --- | --- |
| Detect and alert on software, services, file, and registry changes to your machines, and stay vigilant about everything installed on your servers. </br> </br> Assess and install updates on your servers using Azure Update Management. </br> </br> Configure the desired state of your servers and ensure they stay compliant. | Central IT\Infrastructure Administrators\Auditors looking to meet regulatory requirements at scale and to ensure that the end state of servers is as desired: patched and audited. |
++
+### Azure Automanage (Preview)
+
+Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady-state VM equals hands-free management. [Learn more](/azure/automanage/automanage-virtual-machines).
+
+ **Linux and Windows support**
+ - You can intelligently onboard virtual machines to select best-practice Azure services.
+ - It allows you to configure each service automatically, per Azure best practices.
+ - It supports customization of best-practice services through the VM best practices template for Dev\Test and Production workloads.
+ - You can monitor for drift and correct it when detected.
+ - It provides a simple experience (point, select, set, and forget).
+
| **Scenarios** | **Users** |
| --- | --- |
| Automatically configures the guest operating system per the Microsoft baseline configuration. </br> </br> Automatically detects drift and corrects it across a VM's entire lifecycle. </br> </br> Aims at hands-free management of machines. | IT administrators, infrastructure administrators, and IT operations administrators responsible for managing server workloads and day-to-day admin tasks such as backup, disaster recovery, security updates, and responding to security threats, across Azure and on-premises. </br> </br> Developers who don't want to manage servers or spend time on lower-priority tasks. |
++
+## Respond to events in Automation workflow
+
+### Azure Policy based Guest Configuration
+
+Azure Policy based Guest Configuration is the next iteration of [Azure Automation State Configuration](/azure/automation/automation-dsc-overview). [Learn more](/azure/governance/policy/concepts/guest-configuration-policy-effects).
+
+ You can check what is installed on your machines:
+
+ - Known-bad apps, protocols, certificates, administrator privileges, and the health of agents.
+ - Customer-authored content.
+
| **Scenarios** | **Users** |
| --- | --- |
| Obtain compliance data that may include: the configuration of the operating system (files, registry, and services), application configuration or presence, and environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope, either reactively to existing machines or proactively to new machines as they're deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation](/azure/governance/policy/concepts/guest-configuration-policy-effects#remediation-on-demand-applyandmonitor). | Central IT, infrastructure administrators, and auditors (cloud custodians) working toward regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> Application teams that validate compliance before releasing a change. |
++
+### Azure Automation - Process Automation
+
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or a hybrid environment. [Learn more](/azure/automation/automation-runbook-types).
+
+ - It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs.
+ - You can invoke a runbook from an [Azure Monitor alert](/azure/automation/automation-create-alert-triggered-runbook) or through a [webhook](/azure/automation/automation-webhooks).
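When a runbook is invoked from an Azure Monitor alert, the alert payload arrives as JSON. As a rough, hypothetical sketch (the field names follow Azure Monitor's common alert schema; the sample payload is trimmed and invented), a Python runbook might pull out the essentials like this:

```python
import json

def parse_alert(payload: str) -> dict:
    """Extract key fields from an Azure Monitor common alert schema payload."""
    alert = json.loads(payload)
    essentials = alert["data"]["essentials"]
    return {
        "rule": essentials["alertRule"],
        "severity": essentials["severity"],
        # First affected resource ID, if any
        "target": (essentials.get("alertTargetIDs") or [None])[0],
    }

# Trimmed, invented sample payload for illustration
sample = json.dumps({
    "schemaId": "azureMonitorCommonAlertSchema",
    "data": {
        "essentials": {
            "alertRule": "HighCpuAlert",
            "severity": "Sev2",
            "alertTargetIDs": [
                "/subscriptions/0000/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1"
            ],
        }
    },
})

info = parse_alert(sample)
print(info["rule"], info["severity"])  # prints "HighCpuAlert Sev2"
```

The runbook would then act on `info["target"]`, for example by restarting or resizing the affected VM.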
+
| **Scenarios** | **Users** |
| --- | --- |
| Respond to system alerts, service alerts, and high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation scenarios, where you can automate management of on-premises servers such as SQL Server, Active Directory, and so on, based on an external event. </br> </br> Azure resource lifecycle management and governance, including resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on, based on Azure Monitor alerts. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting. |
++
+### Azure Functions
+
+Provides a serverless automation platform that allows you to write code to react to critical events without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
+
+ - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code.
+ - It allows you to orchestrate complex workflows through durable functions.
+
| **Scenarios** | **Users** |
| --- | --- |
| Respond to events on resources, such as adding tags to a resource group based on cost center, or when a VM is deleted. </br> </br> Set scheduled tasks, such as a pattern to stop and start a VM at a specific time, or reading blob storage content at regular intervals. </br> </br> Process Azure alerts and send the team an event when CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud architects who build serverless microservices-based applications. |
++
+## Orchestrate complex jobs in Azure Automation
+
+### Azure Logic Apps
+
+Logic Apps is a platform for creating and running complex orchestration workflows that integrate your apps, data, services, and systems. [Learn more](/azure/logic-apps/logic-apps-overview).
+
+ - Allows you to build smart integrations between first-party and third-party apps, services, and systems running on-premises, in hybrid environments, and cloud-native.
+ - Allows you to use managed connectors from a growing ecosystem of more than 450 Azure connectors in your workflows.
+ - Provides first-class support for enterprise integration and B2B scenarios.
+ - Gives you the flexibility to visually create and edit workflows: a low-code/no-code approach.
+ - Runs only in the cloud.
+ - Provides a large collection of ready-made actions and triggers.
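As a hypothetical sketch, the underlying workflow definition for a simple scheduled Logic App (a daily recurrence trigger that calls an HTTP endpoint; the trigger name, action name, and URL are placeholders) looks roughly like this:

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Daily_check": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Day", "interval": 1 }
    }
  },
  "actions": {
    "Call_endpoint": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://example.com/status"
      },
      "runAfter": {}
    }
  },
  "outputs": {}
}
```

In practice you would usually build this visually in the designer rather than writing the JSON by hand.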
+
| **Scenarios** | **Users** |
| --- | --- |
| Schedule and send email notifications using Office 365 when a specific event happens; for example, a new file is uploaded. </br> </br> Route and process customer orders across on-premises systems and cloud services. </br> </br> Move uploaded files from an SFTP or FTP server to Azure Storage. </br> </br> Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review. | Pro integrators, developers, and IT professionals who want a low-code/no-code option for advanced integration scenarios with external systems or APIs. |
++
+### Azure Automation - Process Automation
+
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or a hybrid environment. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](/azure/automation/overview).
+
| **Scenarios** | **Users** |
| --- | --- |
| Azure resource lifecycle management and governance, including resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on, through runbooks that are triggered from ITSM alerts. </br> </br> Use a Hybrid Runbook Worker as a bridge from the cloud to on-premises, enabling resource and user management on-premises. </br> </br> Run complex disaster recovery workflows through Automation runbooks. </br> </br> Run Automation runbooks as part of a Logic Apps workflow through the Azure Automation connector. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators who manage on-premises infrastructure using scripts or run long-running jobs such as month-end operations on servers running on-premises. |
++
+### Azure Functions
+
+A serverless automation platform that allows you to write code to react to critical events without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
+
+ - It lets you write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code.
+ - You can orchestrate complex workflows through [durable functions](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp).
+
| **Scenarios** | **Users** |
| --- | --- |
| Respond to events on resources, such as adding tags to a resource group based on cost center, or when a VM is deleted. </br> </br> Set scheduled tasks, such as a pattern to stop and start a VM at a specific time, or reading blob storage content at regular intervals. </br> </br> Process Azure alerts and send the team an event when CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Run an Azure function as part of a Logic Apps workflow through the Azure Functions connector. | Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud architects who build serverless microservices-based applications. |
+
+## Next steps
+- To learn how to run your automation jobs securely, see [best practices for security in Azure Automation](/azure/automation/automation-security-guidelines).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md). +
+## March 2022
+
+### Forward diagnostic audit data to Azure Monitor logs
+
+**Type:** New feature
+
+Azure Automation can send diagnostic audit logs, in addition to runbook job status and job streams, to your Log Analytics workspace. For more information, see [Forward diagnostic audit data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md).
+ ## February 2022

### Permissions change in the built-in Reader role for the Automation Account
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
While Azure has a number of redundancy features at every level of failure, if a
The following private cloud environments and their versions are officially supported for the Azure Arc resource bridge:
-* VMware vSphere version 6.5
+* VMware vSphere version 6.7
* Azure Stack HCI

### Required Azure permissions
URLS:
## Next steps
-To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](../vmware-vsphere/overview.md) article.
+To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](../vmware-vsphere/overview.md) article.
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
To start using the Azure Arc-enabled VMware vSphere (preview) features, you need
First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc.
-> [!IMPORTANT]
-> In the interest of ensuring that new features are documented no later than their release, this article might include documentation for features that aren't yet publicly available.
- ## Prerequisites ### Azure
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- vCenter Server version 6.7. -- Inbound connections allowed on TCP port (usually 443) so that the Azure Arc resource bridge and VMware cluster extension can communicate with the vCenter Server instance.
+- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443).
+
+- At least one free IP address on the above network that isn't in the DHCP range. At least three free IP addresses if there's no DHCP server on the network.
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. - A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster. -- An external virtual network/switch and internet access, directly or through a proxy.- > [!NOTE] > Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 2,500 virtual machines (VMs). If your vCenter Server instance has more than 2,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **vCenter password** | Enter the password for the vSphere account. | | **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge's VM should be deployed. | | **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: Comma-separated list of DNS servers. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP of that range. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. </br> 6. **VLAN ID** (optional) |
+| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: Comma-separated list of DNS servers. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. </br> 6. **VLAN ID** (optional) |
| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge's VM will be deployed. | | **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge's VM. | | **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | | **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. |
-| **Control Pane IP** | Provide a reserved IP address in your DHCP range, or provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. |
+| **Control Plane IP** address | Provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address.|
| **Appliance proxy settings** | Enter **y** if there's a proxy in your appliance network. Otherwise, enter **n**. </br> You need to populate the following boxes when you have a proxy set up: </br> 1. **Http**: Address of the HTTP proxy server. </br> 2. **Https**: Address of the HTTPS proxy server. </br> 3. **NoProxy**: Addresses to be excluded from the proxy. </br> 4. **CertificateFilePath**: For SSL-based proxies, the path to the certificate to be used.
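As a quick illustration of the static IP inputs above, Python's `ipaddress` module can sanity-check that a requested range falls inside the CIDR prefix (the addresses are sample values only):

```python
import ipaddress

# Sample values matching the table's examples
network = ipaddress.ip_network("192.168.0.0/24")   # Static IP address prefix
start = ipaddress.ip_address("192.168.0.10")       # Start range IP
end = ipaddress.ip_address("192.168.0.11")         # End range IP (minimum of two addresses)

assert start in network and end in network, "range must lie inside the prefix"
range_size = int(end) - int(start) + 1
print(range_size)  # 2: one address for the VM, one reserved for upgrade scenarios
```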
-After the command finishes running, your setup is complete. You can now try out the capabilities of Azure Arc-enabled VMware vSphere.
+After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere.
+
+## Save SSH keys and kubeconfig
+
+> [!IMPORTANT]
+> Performing some day 2 operations on the Arc resource bridge requires the SSH key to the resource bridge VM and the kubeconfig of the Kubernetes cluster on it. It's important to store them in a secure location, because they can't be retrieved if the workstation used for the onboarding is deleted.
+
+You'll find the kubeconfig file, named `kubeconfig`, in the folder where you downloaded and ran the onboarding script.
+
+The SSH key pair is available in the following locations:
+
+- If you used a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
+
+- If you used a Linux workstation, `$HOME/.KVA/.ssh/logkey` and `$HOME/.KVA/.ssh/logkey.pub`
## Next steps
azure-cache-for-redis Cache Best Practices Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-memory-management.md
description: Learn how to manage your Azure Cache for Redis memory effectively.
Previously updated: 08/25/2021
Last updated: 03/22/2022

# Memory management
Add monitoring on memory usage to ensure that you don't run out of memory and ha
## Configure your maxmemory-reserved setting
-Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness:
+Configure your [maxmemory-reserved setting](cache-configure.md#memory-policies) to improve system responsiveness:
-* A sufficient reservation setting is especially important for write-heavy workloads or if you're storing values of 100 KB or more in your cache. Start with 10% of the size of your cache and increase this percentage if you have write-heavy loads.
+- A sufficient reservation setting is especially important for write-heavy workloads or if you're storing values of 100 KB or more in your cache. By default when you create a cache, 10% of the available memory is reserved for `maxmemory-reserved`. Another 10% is reserved for `maxfragmentationmemory-reserved`. You can increase the amount reserved if you have write-heavy loads.
-* The `maxmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data.
+- The `maxmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes.
-* The `maxfragmentationmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data.
+- The `maxfragmentationmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes.
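To illustrate the 10%-60% clamping described above, here's a small, hypothetical helper (values in whole megabytes; the exact rounding Azure applies may differ):

```python
def clamp_reserved(requested_mb: int, maxmemory_mb: int) -> int:
    """Clamp a requested reservation to the allowed 10%-60% of maxmemory."""
    low = maxmemory_mb * 10 // 100    # 10% minimum
    high = maxmemory_mb * 60 // 100   # 60% maximum
    return min(max(requested_mb, low), high)

# On a 6-GB (6144-MB) cache, the allowed band is 614-3686 MB:
print(clamp_reserved(100, 6144))   # 614: raised to the 10% floor
print(clamp_reserved(2000, 6144))  # 2000: already within range
print(clamp_reserved(5000, 6144))  # 3686: lowered to the 60% ceiling
```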
-* One thing to consider when choosing a new memory reservation value (`maxmemory-reserved` or `maxfragmentationmemory-reserved`) is how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data and then change the reservation value to 8 GB, the max available memory for the system will drop to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system must evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+- One thing to consider when choosing a new memory reservation value (`maxmemory-reserved` or `maxfragmentationmemory-reserved`) is how this change might affect a cache with large amounts of data in it that is already running. For instance, if you have a 53-GB cache with 49 GB of data and then change the reservation value to 8 GB, the max available memory for the system will drop to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system must evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
## Next steps
-* [Best practices for development](cache-best-practices-development.md)
-* [Azure Cache for Redis development FAQs](cache-development-faq.yml)
-* [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved)
-* [Best practices for scaling](cache-best-practices-scale.md)
+- [Best practices for development](cache-best-practices-development.md)
+- [Azure Cache for Redis development FAQs](cache-development-faq.yml)
+- [maxmemory-reserved setting](cache-configure.md#memory-policies)
+- [Best practices for scaling](cache-best-practices-scale.md)
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md
## Scaling under load
-While scaling a cache under load, configure your maxmemory-reserved setting to improve system responsiveness. For more information, see [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting).
+While scaling a cache under load, configure your `maxmemory-reserved` setting to improve system responsiveness. For more information, see [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting).
## Scaling clusters
Start scaling before the server load or memory usage gets too high. If it's too
## Cache sizes
-If you are using TLS and you have a high number of connections, consider scaling out so that you can distribute the load over more cores. Some cache sizes are hosted on VMs with four or more cores.
+If you're using TLS and you have a high number of connections, consider scaling out so that you can distribute the load over more cores. Some cache sizes are hosted on VMs with four or more cores. By distributing the workloads across multiple cores, you help bring down overall CPU usage on the cache VMs. For more information, see [details around VM sizes and cores](./cache-planning-faq.yml#azure-cache-for-redis-performance).
+
+## Scaling and memory
+
+You can scale your cache instances in the Azure portal or programmatically by using PowerShell cmdlets, the Azure CLI, or the Microsoft Azure Management Libraries (MAML).
+
+Either way, when you scale a cache up or down, both the `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache and you scale to a 12-GB cache, the setting is automatically updated to 6 GB during scaling. When you scale down, the reverse happens.
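The proportional adjustment can be sketched as a small, hypothetical calculation (values in GB):

```python
def scaled_reservation(reserved_gb: float, old_cache_gb: float, new_cache_gb: float) -> float:
    """Scale a memory reservation in proportion to the new cache size."""
    return reserved_gb * new_cache_gb / old_cache_gb

print(scaled_reservation(3, 6, 12))  # 6.0: scaling the cache up doubles the reservation
print(scaled_reservation(6, 12, 6))  # 3.0: scaling back down halves it again
```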
+
+For more information on scaling and memory, see [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
+
+> [!NOTE]
+> When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` settings are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
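The proportional adjustment described above can be sketched as a plain calculation. The helper below is hypothetical, for illustration only, and not part of any Azure SDK:

```python
def scaled_reserved_mb(reserved_mb, old_cache_mb, new_cache_mb):
    """Keep the reserved setting at the same fraction of total cache size,
    mirroring what the service does during a scaling operation."""
    return reserved_mb / old_cache_mb * new_cache_mb

# Example from the text: 3 GB reserved on a 6-GB cache, scaled up to 12 GB.
print(scaled_reserved_mb(3 * 1024, 6 * 1024, 12 * 1024) / 1024)  # → 6.0
```

Scaling down works the same way in reverse: the same call with the old and new sizes swapped halves the reserved value again.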
-Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. For more information, see [details around VM sizes and cores](./cache-planning-faq.yml#azure-cache-for-redis-performance).
## Next steps
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
Title: How to configure Azure Cache for Redis description: Understand the default Redis configuration for Azure Cache for Redis and learn how to configure your Azure Cache for Redis instances - Previously updated : 02/02/2022 Last updated : 03/22/2022 + # How to configure Azure Cache for Redis This article describes the configurations available for your Azure Cache for Redis instances. This article also covers the default Redis server configuration for Azure Cache for Redis instances.
This article describes the configurations available for your Azure Cache for Red
Azure Cache for Redis settings are viewed and configured on the **Azure Cache for Redis** on the left using the **Resource Menu**.
-![Azure Cache for Redis Settings](./media/cache-configure/redis-cache-settings.png)
You can view and configure the following settings using the **Resource Menu**. The settings that you see depend on the tier of your cache. For example, you don't see **Reboot** when using the Enterprise tier.
-* [Overview](#overview)
-* [Activity log](#activity-log)
-* [Access control (IAM)](#access-control-iam)
-* [Tags](#tags)
-* [Diagnose and solve problems](#diagnose-and-solve-problems)
-* [Settings](#settings)
- * [Access keys](#access-keys)
- * [Advanced settings](#advanced-settings)
- * [Azure Cache for Redis Advisor](#azure-cache-for-redis-advisor)
- * [Scale](#scale)
- * [Cluster size](#cluster-size)
- * [Data persistence](#redis-data-persistence)
- * [Schedule updates](#schedule-updates)
- * [Geo-replication](#geo-replication)
- * [Virtual Network](#virtual-network)
- * [Firewall](#firewall)
- * [Properties](#properties)
- * [Locks](#locks)
- * [Automation script](#automation-script)
-* Administration
- * [Import data](#importexport)
- * [Export data](#importexport)
- * [Reboot](#reboot)
-* [Monitoring](#monitoring)
- * [Redis metrics](#redis-metrics)
- * [Alert rules](#alert-rules)
- * [Diagnostics](#diagnostics)
-* Support & troubleshooting settings
- * [Resource health](#resource-health)
- * [New support request](#new-support-request)
+- [Overview](#overview)
+- [Activity log](#activity-log)
+- [Access control (IAM)](#access-control-iam)
+- [Tags](#tags)
+- [Diagnose and solve problems](#diagnose-and-solve-problems)
+- [Settings](#settings)
+ - [Access keys](#access-keys)
+ - [Advanced settings](#advanced-settings)
+ - [Azure Cache for Redis Advisor](#azure-cache-for-redis-advisor)
+ - [Scale](#scale)
+ - [Cluster size](#cluster-size)
+ - [Data persistence](#data-persistence)
+ - [Schedule updates](#schedule-updates)
+ - [Geo-replication](#geo-replication)
+ - [Virtual Network](#virtual-network)
+ - [Firewall](#firewall)
+ - [Properties](#properties)
+ - [Locks](#locks)
+ - [Automation script](#automation-script)
+- Administration
+ - [Import data](#importexport)
+ - [Export data](#importexport)
+ - [Reboot](#reboot)
+- [Monitoring](#monitoring)
+ - [Redis metrics](#redis-metrics)
+ - [Alert rules](#alert-rules)
+ - [Diagnostics](#diagnostics)
+- Support & troubleshooting settings
+ - [Resource health](#resource-health)
+ - [New support request](#new-support-request)
## Overview
Select **Diagnose and solve problems** to be provided with common issues and str
The **Settings** section allows you to access and configure the following settings for your cache.
-* [Access keys](#access-keys)
-* [Advanced settings](#advanced-settings)
-* [Azure Cache for Redis Advisor](#azure-cache-for-redis-advisor)
-* [Scale](#scale)
-* [Cluster size](#cluster-size)
-* [Data persistence](#redis-data-persistence)
-* [Schedule updates](#schedule-updates)
-* [Geo-replication](#geo-replication)
-* [Virtual Network](#virtual-network)
-* [Firewall](#firewall)
-* [Properties](#properties)
-* [Locks](#locks)
-* [Automation script](#automation-script)
+- [Access keys](#access-keys)
+- [Advanced settings](#advanced-settings)
+- [Azure Cache for Redis Advisor](#azure-cache-for-redis-advisor)
+- [Scale](#scale)
+- [Cluster size](#cluster-size)
+- [Data persistence](#data-persistence)
+- [Schedule updates](#schedule-updates)
+- [Geo-replication](#geo-replication)
+- [Virtual Network](#virtual-network)
+- [Firewall](#firewall)
+- [Properties](#properties)
+- [Locks](#locks)
+- [Automation script](#automation-script)
### Access keys

Select **Access keys** to view or regenerate the access keys for your cache. These keys are used by the clients connecting to your cache.
-![Azure Cache for Redis Access Keys](./media/cache-configure/redis-cache-manage-keys.png)
### Advanced settings

The following settings are configured on the **Advanced settings** on the left.
-* [Access Ports](#access-ports)
-* [Memory policies](#memory-policies)
-* [Keyspace notifications (advanced settings)](#keyspace-notifications-advanced-settings)
+- [Access Ports](#access-ports)
+- [Memory policies](#memory-policies)
+- [Keyspace notifications (advanced settings)](#keyspace-notifications-advanced-settings)
#### Access Ports
By default, non-TLS/SSL access is disabled for new caches. To enable the non-TLS
> [!NOTE] > TLS access to Azure Cache for Redis supports TLS 1.0, 1.1 and 1.2 currently, but versions 1.0 and 1.1 are being retired soon. Please read our [Remove TLS 1.0 and 1.1 page](cache-remove-tls-10-11.md) for more details.
-![Azure Cache for Redis Access Ports](./media/cache-configure/redis-cache-access-ports.png)
-
-<a name="maxmemory-policy-and-maxmemory-reserved"></a>
#### Memory policies
-The **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemory-reserved** settings on the **Advanced settings** on the left configure the memory policies for the cache.
+Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemory-reserved** settings from **Advanced settings** on the Resource menu on the left to configure the memory policies for the cache. When you create a cache, the values `maxmemory-reserved` and `maxfragmentationmemory-reserved` default to 10% of `maxmemory`, which is the cache size.
-![Azure Cache for Redis Maxmemory Policy](./media/cache-configure/redis-cache-maxmemory-policy.png)
**Maxmemory policy** configures the eviction policy for the cache and allows you to choose from the following eviction policies:
-* `volatile-lru` - The default eviction policy.
-* `allkeys-lru`
-* `volatile-random`
-* `allkeys-random`
-* `volatile-ttl`
-* `noeviction`
+- `volatile-lru` - The default eviction policy.
+- `allkeys-lru`
+- `volatile-random`
+- `allkeys-random`
+- `volatile-ttl`
+- `noeviction`
For more information about `maxmemory` policies, see [Eviction policies](https://redis.io/topics/lru-cache#eviction-policies).
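As a rough illustration of how these policies differ, here's a toy Python model (not the Redis server's actual sampling-based LRU): `volatile-lru` only considers keys that have an expiration set, while `allkeys-lru` considers every key.

```python
from collections import OrderedDict

class TinyCache:
    """Toy eviction model for illustration; real Redis approximates LRU by sampling."""

    def __init__(self, max_keys, policy="volatile-lru"):
        self.max_keys, self.policy = max_keys, policy
        self.data = OrderedDict()  # key -> (value, has_ttl); order tracks recency

    def set(self, key, value, ttl=False):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.max_keys:
            self._evict()
        self.data[key] = (value, ttl)

    def _evict(self):
        # Oldest (least recently used) candidate that the policy allows.
        victim = next((k for k, (_, has_ttl) in self.data.items()
                       if self.policy == "allkeys-lru" or has_ttl), None)
        if victim is None:
            raise MemoryError("no evictable keys; writes fail as with noeviction")
        del self.data[victim]

cache = TinyCache(max_keys=2, policy="volatile-lru")
cache.set("session:1", "a", ttl=True)
cache.set("config", "b")                # no expiry: safe under volatile-lru
cache.set("session:2", "c", ttl=True)   # evicts session:1, the LRU key with a TTL
print(list(cache.data))                 # → ['config', 'session:2']
```

Under `allkeys-lru`, the same third `set` would instead evict whichever key was least recently used, expiry or not.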
-The **maxmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data.
+The **maxmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-The **maxfragmentationmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data.
+The **maxfragmentationmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-One thing to consider when choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**) is how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change will drop the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
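The numbers in that example work out as follows (plain arithmetic, GB units):

```python
cache_size_gb   = 53   # total cache (maxmemory)
new_reserved_gb = 8    # new reservation value
used_memory_gb  = 49   # data already in the cache

available_gb = cache_size_gb - new_reserved_gb
print(available_gb)                                  # → 45
evicted_gb = max(0, used_memory_gb - available_gb)
print(evicted_gb)                                    # → 4 (data evicted to fit)
```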
> [!IMPORTANT] > The **maxmemory-reserved** and **maxfragmentationmemory-reserved** settings are available only for Standard and Premium caches.
One thing to consider when choosing a new memory reservation value (**maxmemory-
Redis keyspace notifications are configured on the **Advanced settings** on the left. Keyspace notifications allow clients to receive notifications when certain events occur.
-![Azure Cache for Redis Advanced Settings](./media/cache-configure/redis-cache-advanced-settings.png)
> [!IMPORTANT] > Keyspace notifications and the **notify-keyspace-events** setting are only available for Standard and Premium caches.
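The **notify-keyspace-events** value is a string of single-character flags. As a quick reference, the sketch below decodes the classic flag set documented for Redis (newer server versions define additional event classes, which are omitted here):

```python
FLAGS = {
    "K": "keyspace events, published on __keyspace@<db>__",
    "E": "keyevent events, published on __keyevent@<db>__",
    "g": "generic commands (DEL, EXPIRE, RENAME, ...)",
    "$": "string commands",
    "l": "list commands",
    "s": "set commands",
    "h": "hash commands",
    "z": "sorted set commands",
    "x": "expired events",
    "e": "evicted events",
}

def decode(setting):
    """Expand 'A' (alias for all event classes g$lshzxe) and map each flag."""
    return [FLAGS[flag] for flag in setting.replace("A", "g$lshzxe")]

# "KEA" enables every classic event class on both notification channels.
for meaning in decode("KEA"):
    print(meaning)
```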
For more information, see [Redis Keyspace Notifications](https://redis.io/topics
The **Azure Cache for Redis Advisor** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
-![Screenshot that shows where the recommendations are displayed.](./media/cache-configure/redis-cache-no-recommendations.png)
If any conditions occur during the operations of your cache such as high memory usage, network bandwidth, or server load, an alert is displayed on the **Azure Cache for Redis** on the left.
-![Screenshot that shows where alerts are displayed in the Azure Cache for Redis section.](./media/cache-configure/redis-cache-recommendations-alert.png)
Further information can be found on the **Recommendations** on the left.
-![Recommendations](./media/cache-configure/redis-cache-recommendations.png)
You can monitor these metrics on the [Monitoring charts](cache-how-to-monitor.md#monitoring-charts) and [Usage charts](cache-how-to-monitor.md#usage-charts) sections of the **Azure Cache for Redis** on the left.
To upgrade your cache, select **Upgrade now** to change the pricing tier and [sc
Select **Scale** to view or change the pricing tier for your cache. For more information on scaling, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md).
-![Azure Cache for Redis pricing tier](./media/cache-configure/pricing-tier.png)
-
-<a name="cluster-size"></a>
-### Redis Cluster Size
+### Cluster Size
Select **Cluster Size** to change the cluster size for a running premium cache with clustering enabled.
-![Cluster size](./media/cache-configure/redis-cache-redis-cluster-size.png)
To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
To change the cluster size, use the slider or type a number between 1 and 10 in
> >
-### Redis data persistence
+### Data persistence
Select **Data persistence** to enable, disable, or configure data persistence for your premium cache. Azure Cache for Redis offers Redis persistence using either RDB persistence or AOF persistence.
The Schedule updates on the left allow you to choose a maintenance window for Re
> >
-![Schedule updates](./media/cache-configure/redis-schedule-updates.png)
To specify a maintenance window, check the days you want. Then, specify the maintenance window start hour for each day, and select **OK**. The maintenance window time is in UTC.
Firewall rules configuration is available for all Azure Cache for Redis tiers.
Select **Firewall** to view and configure firewall rules for cache.
-![Firewall](./media/cache-configure/redis-firewall-rules.png)
You can specify firewall rules with a start and end IP address range. When firewall rules are configured, only client connections from the specified IP address ranges can connect to the cache. When a firewall rule is saved, there's a short delay before the rule is effective. This delay is typically less than one minute.
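The start/end semantics of a firewall rule amount to a simple inclusive range check, sketched below with a hypothetical helper (the addresses are documentation examples, not real rules):

```python
from ipaddress import ip_address

def allowed(client_ip, rules):
    """True if client_ip falls inside any (start, end) rule range, inclusive."""
    ip = ip_address(client_ip)
    return any(ip_address(start) <= ip <= ip_address(end) for start, end in rules)

rules = [("203.0.113.0", "203.0.113.255")]
print(allowed("203.0.113.42", rules))   # → True
print(allowed("198.51.100.7", rules))   # → False: the connection is refused
```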
You can specify firewall rules with a start and end IP address range. When firew
Select **Properties** to view information about your cache, including the cache endpoint and ports.
-![Azure Cache for Redis Properties](./media/cache-configure/redis-cache-properties.png)
### Locks
Select **Automation script** to build and export a template of your deployed res
The settings in the **Administration** section allow you to perform the following administrative tasks for your cache.
-![Administration](./media/cache-configure/redis-cache-administration.png)
-* [Import data](#importexport)
-* [Export data](#importexport)
-* [Reboot](#reboot)
+- [Import data](#importexport)
+- [Export data](#importexport)
+- [Reboot](#reboot)
### Import/Export
Export allows you to export the data stored in Azure Cache for Redis to Redis co
The **Reboot** item on the left allows you to reboot the nodes of your cache. This reboot capability enables you to test your application for resiliency if there's a failure of a cache node.
-![Reboot](./media/cache-configure/redis-cache-reboot.png)
If you have a premium cache with clustering enabled, you can select which shards of the cache to reboot.
-![Screenshot that shows where to select which shards of the cache to reboot.](./media/cache-configure/redis-cache-reboot-cluster.png)
To reboot one or more nodes of your cache, select the desired nodes and select **Reboot**. If you have a premium cache with clustering enabled, select the shard(s) to reboot and then select **Reboot**. After a few minutes, the selected node(s) reboot, and are back online a few minutes later.
To reboot one or more nodes of your cache, select the desired nodes and select *
The **Monitoring** section allows you to configure diagnostics and monitoring for your Azure Cache for Redis. For more information on Azure Cache for Redis monitoring and diagnostics, see [How to monitor Azure Cache for Redis](cache-how-to-monitor.md).
-![Diagnostics](./media/cache-configure/redis-cache-diagnostics.png)
-* [Redis metrics](#redis-metrics)
-* [Alert rules](#alert-rules)
-* [Diagnostics](#diagnostics)
+- [Redis metrics](#redis-metrics)
+- [Alert rules](#alert-rules)
+- [Diagnostics](#diagnostics)
### Redis metrics
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon
The settings in the **Support + troubleshooting** section provide you with options for resolving issues with your cache.
-![Support + troubleshooting](./media/cache-configure/redis-cache-support-troubleshooting.png)
-* [Resource health](#resource-health)
-* [New support request](#new-support-request)
+- [Resource health](#resource-health)
+- [New support request](#new-support-request)
### Resource health
New Azure Cache for Redis instances are configured with the following default Re
| `databases` |16 |The default number of databases is 16 but you can configure a different number based on the pricing tier.<sup>1</sup> The default database is DB 0, you can select a different one on a per-connection basis using `connection.GetDatabase(dbid)` where `dbid` is a number between `0` and `databases - 1`. |
| `maxclients` |Depends on the pricing tier<sup>2</sup> |This value is the maximum number of connected clients allowed at the same time. Once the limit is reached Redis closes all the new connections, returning a 'max number of clients reached' error. |
-| `maxmemory-policy` |`volatile-lru` |Maxmemory policy is the setting used by Redis to select what to remove when `maxmemory` (the size of the cache offering you selected when you created the cache) is reached. With Azure Cache for Redis the default setting is `volatile-lru`, which removes the keys with an expiration set using an LRU algorithm. This setting can be configured in the Azure portal. For more information, see [Memory policies](#memory-policies). |
+| `maxmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set a value lower than 10% or higher than 60%, it's re-evaluated and set to the 10% minimum or the 60% maximum. The values are rendered in megabytes. |
+| `maxfragmentationmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set a value lower than 10% or higher than 60%, it's re-evaluated and set to the 10% minimum or the 60% maximum. The values are rendered in megabytes. |
+| `maxmemory-policy` |`volatile-lru` | Maxmemory policy is the setting used by the Redis server to select what to remove when `maxmemory` (the size of the cache that you selected when you created the cache) is reached. With Azure Cache for Redis, the default setting is `volatile-lru`. This setting removes the keys with an expiration set using an LRU algorithm. This setting can be configured in the Azure portal. For more information, see [Memory policies](#memory-policies). |
| `maxmemory-samples` |3 |To save memory, LRU and minimal TTL algorithms are approximated algorithms instead of precise algorithms. By default Redis checks three keys and picks the one that was used less recently. |
| `lua-time-limit` |5,000 |Max execution time of a Lua script in milliseconds. If the maximum execution time is reached, Redis logs that a script is still in execution after the maximum allowed time, and starts to reply to queries with an error. |
| `lua-event-limit` |500 |Max size of script event queue. |
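The 10% - 60% clamping described for `maxmemory-reserved` and `maxfragmentationmemory-reserved` can be modeled as below (an illustrative sketch of the documented behavior, not service code; exact rounding may differ):

```python
def clamp_reserved_mb(requested_mb, maxmemory_mb):
    """Re-evaluate an out-of-range reservation to the 10% minimum or 60% maximum."""
    low, high = int(maxmemory_mb * 0.10), int(maxmemory_mb * 0.60)
    return min(max(requested_mb, low), high)

# On a 6-GB (6144-MB) cache the bounds are roughly 614 MB and 3686 MB.
print(clamp_reserved_mb(100, 6144))    # → 614  (raised to the minimum)
print(clamp_reserved_mb(5000, 6144))   # → 3686 (lowered to the maximum)
print(clamp_reserved_mb(1024, 6144))   # → 1024 (already in range)
```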
New Azure Cache for Redis instances are configured with the following default Re
<sup>1</sup>The limit for `databases` is different for each Azure Cache for Redis pricing tier and can be set at cache creation. If no `databases` setting is specified during cache creation, the default is 16.
-* Basic and Standard caches
- * C0 (250 MB) cache - up to 16 databases
- * C1 (1 GB) cache - up to 16 databases
- * C2 (2.5 GB) cache - up to 16 databases
- * C3 (6 GB) cache - up to 16 databases
- * C4 (13 GB) cache - up to 32 databases
- * C5 (26 GB) cache - up to 48 databases
- * C6 (53 GB) cache - up to 64 databases
-* Premium caches
- * P1 (6 GB - 60 GB) - up to 16 databases
- * P2 (13 GB - 130 GB) - up to 32 databases
- * P3 (26 GB - 260 GB) - up to 48 databases
- * P4 (53 GB - 530 GB) - up to 64 databases
- * All premium caches with Redis cluster enabled - Redis cluster only supports use of database 0 so the `databases` limit for any premium cache with Redis cluster enabled is effectively 1 and the [Select](https://redis.io/commands/select) command isn't allowed. For more information, see [Do I need to make any changes to my client application to use clustering?](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
+- Basic and Standard caches
+ - C0 (250 MB) cache - up to 16 databases
+ - C1 (1 GB) cache - up to 16 databases
+ - C2 (2.5 GB) cache - up to 16 databases
+ - C3 (6 GB) cache - up to 16 databases
+ - C4 (13 GB) cache - up to 32 databases
+ - C5 (26 GB) cache - up to 48 databases
+ - C6 (53 GB) cache - up to 64 databases
+- Premium caches
+ - P1 (6 GB - 60 GB) - up to 16 databases
+ - P2 (13 GB - 130 GB) - up to 32 databases
+ - P3 (26 GB - 260 GB) - up to 48 databases
+ - P4 (53 GB - 530 GB) - up to 64 databases
+ - P5 (120 GB - 1200 GB) - up to 64 databases
+ - All premium caches with Redis cluster enabled - Redis cluster only supports use of database 0 so the `databases` limit for any premium cache with Redis cluster enabled is effectively 1 and the [Select](https://redis.io/commands/select) command isn't allowed. For more information, see [Do I need to make any changes to my client application to use clustering?](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
For more information about databases, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-)
For more information about databases, see [What are Redis databases?](cache-deve
<sup>2</sup>`maxclients` is different for each Azure Cache for Redis pricing tier.
-* Basic and Standard caches
- * C0 (250 MB) cache - up to 256 connections
- * C1 (1 GB) cache - up to 1,000 connections
- * C2 (2.5 GB) cache - up to 2,000 connections
- * C3 (6 GB) cache - up to 5,000 connections
- * C4 (13 GB) cache - up to 10,000 connections
- * C5 (26 GB) cache - up to 15,000 connections
- * C6 (53 GB) cache - up to 20,000 connections
-* Premium caches
- * P1 (6 GB - 60 GB) - up to 7,500 connections
- * P2 (13 GB - 130 GB) - up to 15,000 connections
- * P3 (26 GB - 260 GB) - up to 30,000 connections
- * P4 (53 GB - 530 GB) - up to 40,000 connections
+- Basic and Standard caches
+ - C0 (250 MB) cache - up to 256 connections
+ - C1 (1 GB) cache - up to 1,000 connections
+ - C2 (2.5 GB) cache - up to 2,000 connections
+ - C3 (6 GB) cache - up to 5,000 connections
+ - C4 (13 GB) cache - up to 10,000 connections
+ - C5 (26 GB) cache - up to 15,000 connections
+ - C6 (53 GB) cache - up to 20,000 connections
+- Premium caches
+ - P1 (6 GB - 60 GB) - up to 7,500 connections
+ - P2 (13 GB - 130 GB) - up to 15,000 connections
+ - P3 (26 GB - 260 GB) - up to 30,000 connections
+ - P4 (53 GB - 530 GB) - up to 40,000 connections
+ - P5: (120 GB - 1200 GB) - up to 40,000 connections
> [!NOTE] > While each size of cache allows *up to* a certain number of connections, each connection to Redis has overhead associated with it. An example of such overhead would be CPU and memory usage as a result of TLS/SSL encryption. The maximum connection limit for a given cache size assumes a lightly loaded cache. If load from connection overhead *plus* load from client operations exceeds capacity for the system, the cache can experience capacity issues even if you have not exceeded the connection limit for the current cache size.
For more information about databases, see [What are Redis databases?](cache-deve
> [!IMPORTANT] > Because configuration and management of Azure Cache for Redis instances is managed by Microsoft, the following commands are disabled. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`. >
-> * BGREWRITEAOF
-> * BGSAVE
-> * CONFIG
-> * DEBUG
-> * MIGRATE
-> * SAVE
-> * SHUTDOWN
-> * SLAVEOF
-> * REPLICAOF
-> * ACL
-> * CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
+>- BGREWRITEAOF
+>- BGSAVE
+>- CONFIG
+>- DEBUG
+>- MIGRATE
+>- SAVE
+>- SHUTDOWN
+>- SLAVEOF
+>- REPLICAOF
+>- ACL
+>- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
> >
You can securely issue commands to your Azure Cache for Redis instances using th
> [!IMPORTANT] >
-> * The Redis Console does not work with [VNET](cache-how-to-premium-vnet.md). When your cache is part of a VNET, only clients in the VNET can access the cache. Because Redis Console runs in your local browser, which is outside the VNET, it can't connect to your cache.
-> * Not all Redis commands are supported in Azure Cache for Redis. For a list of Redis commands that are disabled for Azure Cache for Redis, see the previous [Redis commands not supported in Azure Cache for Redis](#redis-commands-not-supported-in-azure-cache-for-redis) section. For more information about Redis commands, see [https://redis.io/commands](https://redis.io/commands).
+>- The Redis Console does not work with [VNET](cache-how-to-premium-vnet.md). When your cache is part of a VNET, only clients in the VNET can access the cache. Because Redis Console runs in your local browser, which is outside the VNET, it can't connect to your cache.
+>- Not all Redis commands are supported in Azure Cache for Redis. For a list of Redis commands that are disabled for Azure Cache for Redis, see the previous [Redis commands not supported in Azure Cache for Redis](#redis-commands-not-supported-in-azure-cache-for-redis) section. For more information about Redis commands, see [https://redis.io/commands](https://redis.io/commands).
> > To access the Redis Console, select **Console** from the **Azure Cache for Redis** on the left.
-![Screenshot that highlights the Console button.](./media/cache-configure/redis-console-menu.png)
To issue commands against your cache instance, type the command you want into the console.
-![Screenshot thas shows the Redis Console with the input command and results.](./media/cache-configure/redis-console.png)
### Using the Redis Console with a premium clustered cache

When using the Redis Console with a premium clustered cache, you can issue commands to a single shard of the cache. To issue a command to a specific shard, first connect to the shard you want by selecting it on the shard picker.
-![Redis console](./media/cache-configure/redis-console-premium-cluster.png)
If you attempt to access a key that is stored in a different shard than the connected shard, you receive an error message similar to the following message:
In the previous example, shard 1 is the selected shard, but `myKey` is located i
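The reason a key can live on a different shard: Redis Cluster maps every key to one of 16384 hash slots using a CRC16 (XModem variant) of the key, and each shard owns a range of slots. A minimal sketch (hash-tag handling omitted):

```python
def crc16_xmodem(data):
    """Bitwise CRC16-CCITT (XModem): poly 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key):
    """Slot a key maps to; shards own contiguous ranges of the 16384 slots."""
    return crc16_xmodem(key.encode()) % 16384

print(hex(crc16_xmodem(b"123456789")))  # → 0x31c3 (the standard XModem check value)
print(hash_slot("myKey"))               # same slot every time, hence one fixed shard
```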
You can move your cache to a new subscription by selecting **Move**.
-![Move Azure Cache for Redis](./media/cache-configure/redis-cache-move.png)
For information on moving resources from one resource group to another, and from one subscription to another, see [Move resources to new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md). ## Next steps
-* For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
+- For more information on working with Redis commands, see [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-cache-for-redis Cache How To Manage Redis Cache Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-manage-redis-cache-powershell.md
The following table contains Azure PowerShell properties and descriptions for co
| rdb-backup-enabled |Whether [Redis data persistence](cache-how-to-premium-persistence.md) is enabled |Premium only |
| rdb-storage-connection-string |The connection string to the storage account for [Redis data persistence](cache-how-to-premium-persistence.md) |Premium only |
| rdb-backup-frequency |The backup frequency for [Redis data persistence](cache-how-to-premium-persistence.md) |Premium only |
-| maxmemory-reserved |Configures the [memory reserved](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) for non-cache processes |Standard and Premium |
-| maxmemory-policy |Configures the [eviction policy](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) for the cache |All pricing tiers |
+| maxmemory-reserved |Configures the [memory reserved](cache-configure.md#memory-policies) for non-cache processes |Standard and Premium |
+| maxmemory-policy |Configures the [eviction policy](cache-configure.md#memory-policies) for the cache |All pricing tiers |
| notify-keyspace-events |Configures [keyspace notifications](cache-configure.md#keyspace-notifications-advanced-settings) |Standard and Premium |
| hash-max-ziplist-entries |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
| hash-max-ziplist-value |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
Previously updated : 02/08/2021 Last updated : 03/22/2022 ms.devlang: csharp + + # Scale an Azure Cache for Redis instance
-Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after creating it to match your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
+Azure Cache for Redis has different cache offerings that provide flexibility in the choice of cache size and features. For a Basic, Standard, or Premium cache, you can change its size and tier after creating it to match your application needs. This article shows you how to scale your cache using the Azure portal and tools such as Azure PowerShell and Azure CLI.
## When to scale
For more information on determining the cache pricing tier to use, see [Choosing
## Scale a cache
-To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** on the left.
+1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** on the left.
+ :::image type="content" source="media/cache-how-to-scale/scale-a-cache.png" alt-text="scale on the resource menu":::
-Choose a pricing tier on the right and then choose **Select**.
-
+1. Choose a pricing tier on the right and then choose **Select**.
+
+ :::image type="content" source="media/cache-how-to-scale/select-a-tier.png" alt-text="Azure Cache for Redis tiers":::
> [!NOTE]
-> Scaling is currently not avaialble with Enterprise Tier.
+> Scaling is currently not available with Enterprise Tier.
> You can scale to a different pricing tier with the following restrictions:
When scaling is complete, the status changes from **Scaling** to **Running**.
You can scale your cache instances in the Azure portal. You can also scale by using PowerShell cmdlets, Azure CLI, or the Microsoft Azure Management Libraries (MAML).
+When you scale a cache up or down, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically get updated to 6 GB during scaling. When you scale down, the reverse happens.
+
+> [!NOTE]
+> When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` values included in the update request are ignored. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
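As a quick illustration of the proportional adjustment described above, the following sketch shows the arithmetic (the function name is illustrative, not an Azure API):

```python
def scaled_reserved(reserved_gb: float, old_size_gb: float, new_size_gb: float) -> float:
    """Scale a reserved-memory setting in proportion to the new cache size."""
    return reserved_gb * (new_size_gb / old_size_gb)

# 3 GB reserved on a 6-GB cache becomes 6 GB after scaling to a 12-GB cache.
print(scaled_reserved(3, 6, 12))   # 6.0
# Scaling back down reverses the adjustment.
print(scaled_reserved(6, 12, 6))   # 3.0
```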
++ - [Scale using PowerShell](#scale-using-powershell) - [Scale using Azure CLI](#scale-using-azure-cli) - [Scale using MAML](#scale-using-maml)
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
Previously updated : 12/31/2021 Last updated : 03/22/2022 + # Connectivity troubleshooting
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-server.md
Validate that the `maxmemory-reserved` and `maxfragmentationmemory-reserved` val
There are several possible changes you can make to help keep memory usage healthy: -- [Configure a memory policy](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) and set expiration times on your keys. This policy may not be sufficient if you have fragmentation.-- [Configure a maxmemory-reserved value](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) that is large enough to compensate for memory fragmentation.
+- [Configure a memory policy](cache-configure.md#memory-policies) and set expiration times on your keys. This policy may not be sufficient if you have fragmentation.
+- [Configure a maxmemory-reserved value](cache-configure.md#memory-policies) that is large enough to compensate for memory fragmentation.
- [Create alerts](cache-how-to-monitor.md#alerts) on metrics like used memory to be notified early about potential impacts. - [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
Run the `npm run build` command from the root directory to rebuild the app. This
:::image type="content" source="../../static-web-apps/media/getting-started/extension-browse-site.png" alt-text="An image of the menu that is shown when right-clicking on a static web app. The Browse Site option is highlighted.":::
-1. The location of your application code, Azure Function, and build output is part of the `azure-static-web-apps-xxx-xxx-xxx.yml` workflow file located in the `/.github/workflows` directory. This file is automatically created when create the Static Web app. It defines a GitHub Action to build and deploy your Static Web app.
+1. The location of your application code, Azure Function, and build output is part of the `azure-static-web-apps-xxx-xxx-xxx.yml` workflow file located in the `/.github/workflows` directory. This file is automatically created when you create the Static Web app. It defines a GitHub Actions workflow to build and deploy your Static Web app.
## Clean up resources
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
This article shows you how to perform tasks related to configuring your function
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. When configuring your storage account with private endpoints, public access to your function app will be automatically disabled, and your function app will only be accessible through the virtual network.
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. Configuring your storage account with private endpoints doesn't automatically disable public access to it. To disable public access to your storage account, configure your storage firewall to allow access only from selected networks.
++ > [!NOTE] > This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. ASEv3 is not supported yet. Consumption tier isn't supported.
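A sketch of how the storage firewall configuration above might look with Azure CLI, assuming a storage account named `mystorageacct`, a resource group `my-rg`, and a virtual network `my-vnet` with subnet `my-subnet` (all hypothetical names; verify parameter names against the current `az storage account` reference):

```azurecli
# Deny public traffic by default so only selected networks can reach the account.
az storage account update \
  --name mystorageacct \
  --resource-group my-rg \
  --default-action Deny

# Allow the function app's subnet through the storage firewall.
az storage account network-rule add \
  --account-name mystorageacct \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-subnet
```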
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
Here's the JavaScript code:
```javascript module.exports = async function (context, req) {
- context.bindings.outMessages = [{
+ context.bindings.signalRMessages = [{
// message will only be sent to this user ID "userId": "userId1", "target": "newMessage",
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Title: Create a Java function in Azure Functions using IntelliJ description: Learn how to use IntelliJ to create a simple HTTP-triggered Java function, which you then publish to run in a serverless environment in Azure.- Last updated 07/01/2018- ms.devlang: java
azure-functions Functions Cli Create Function App Github Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-github-continuous.md
Title: Create a function app with GitHub deployment - Azure CLI description: Create a function app and deploy function code from a GitHub repository using Azure Functions. Previously updated : 03/24/2022 Last updated : 03/28/2022
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Title: Drawing package guide for Microsoft Azure Maps Creator (Preview)
+ Title: Drawing package guide for Microsoft Azure Maps Creator
+ description: Learn how to prepare a Drawing package for the Azure Maps Conversion service
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Azure Maps has several additional REST web services that may be of interest;
* [Map Tiles](/rest/api/maps/render/getmaptile) ΓÇô Access road and imagery tiles from Azure Maps as raster and vector tiles. * [Batch routing](/rest/api/maps/route/postroutedirectionsbatchpreview) ΓÇô Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing. * [Traffic](/rest/api/maps/traffic) Flow ΓÇô Access real-time traffic flow data as both raster and vector tiles.
-* [Geolocation API (Preview)](/rest/api/maps/geolocation/get-ip-to-location) ΓÇô Get the location of an IP address.
+* [Geolocation API](/rest/api/maps/geolocation/get-ip-to-location) ΓÇô Get the location of an IP address.
* [Weather Services](/rest/api/maps/weather) ΓÇô Gain access to real-time and forecast weather data. Be sure to also review the following best practices guides:
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
# Tutorial: Migrate from Google Maps to Azure Maps
-This article provides insights on how to migrate web, mobile and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. In this tutorial, you will learn:
+This article provides insights on how to migrate web, mobile and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. In this tutorial, you'll learn:
> [!div class="checklist"] > * High-level comparison for equivalent Google Maps features available in Azure Maps.
The table provides a high-level list of Azure Maps features, which correspond to
| REST Service APIs | Γ£ô | | Directions (Routing) | Γ£ô | | Distance Matrix | Γ£ô |
-| Elevation | Γ£ô (Preview) |
+| Elevation | Γ£ô |
| Geocoding (Forward/Reverse) | Γ£ô | | Geolocation | N/A | | Nearest Roads | Γ£ô |
Google Maps provides basic key-based authentication. Azure Maps provides both ba
When migrating to Azure Maps from Google Maps, consider the following points about licensing. * Azure Maps charges for the usage of interactive maps, which is based on the number of loaded map tiles. On the other hand, Google Maps charges for loading the map control. In the interactive Azure Maps SDKs, map tiles are automatically cached to reduce the development cost. One Azure Maps transaction is generated for every 15 map tiles that are loaded. The interactive Azure Maps SDKs uses 512-pixel tiles, and on average, it generates one or less transactions per page view.
-* Often, its more cost effective to replace static map images from Google Maps web services with the Azure Maps Web SDK. The Azure Maps Web SDK uses map tiles. Unless the user pans and zooms the map, the service often generates only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming, if desired. Additionally, the Azure Maps web SDK provides a lot more visualization options than the static map web service.
+* Often, it's more cost effective to replace static map images from Google Maps web services with the Azure Maps Web SDK. The Azure Maps Web SDK uses map tiles. Unless the user pans and zooms the map, the service often generates only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming, if desired. Additionally, the Azure Maps web SDK provides a lot more visualization options than the static map web service.
* Azure Maps allows data from its platform to be stored in Azure. Also, data can be cached elsewhere for up to six months as per the [terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46). Here are some related resources for Azure Maps:
To create an Azure Maps account and get access to the Azure Maps platform, follo
## Azure Maps technical resources
-Here is a list of useful technical resources for Azure Maps.
+Here's a list of useful technical resources for Azure Maps.
- Overview: [https://azure.com/maps](https://azure.com/maps) - Documentation: [https://aka.ms/AzureMapsDocs](./index.yml)
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md
This article introduces concepts that apply to Azure Maps [Weather services](/re
## Unit types
-Some of the Weather service (Preview) APIs allow user to specify if the data is returned either in metric or in imperial units. The returned responses for these APIs include unitType and a numeric value that can be used for unit translations. See table below to interpret these values.
-
-|unitType|Description |
-|--|--|
-|0 |feet |
-|1 |inches |
-|2 |miles |
-|3 |millimeter |
-|4 |centimeter |
-|5 |meter |
-|6 |kilometer |
-|7 |kilometersPerHour |
-|8 |knots |
-|9 |milesPerHour |
-|10 |metersPerSecond |
-|11 |hectoPascals |
-|12 |inchesOfMercury |
-|13 |kiloPascals |
-|14 |millibars |
-|15 |millimetersOfMercury|
-|16 |poundsPerSquareInch |
-|17 |celsius |
-|18 |fahrenheit |
-|19 |kelvin |
-|20 |percent |
-|21 |float |
-|22 |integer |
-|31 |MicrogramsPerCubicMeterOfAir |
+Some of the Weather service APIs allow the user to specify whether the data is returned in metric or imperial units. The returned responses for these APIs include `unitType` and a numeric value that can be used for unit translations. See the table below to interpret these values.
+
+|unitType|Description |
+|--|-|
+|0 |feet |
+|1 |inches |
+|2 |miles |
+|3 |millimeter |
+|4 |centimeter |
+|5 |meter |
+|6 |kilometer |
+|7 |kilometersPerHour |
+|8 |knots |
+|9 |milesPerHour |
+|10 |metersPerSecond |
+|11 |hectoPascals |
+|12 |inchesOfMercury |
+|13 |kiloPascals |
+|14 |millibars |
+|15 |millimetersOfMercury |
+|16 |poundsPerSquareInch |
+|17 |celsius |
+|18 |fahrenheit |
+|19 |kelvin |
+|20 |percent |
+|21 |float |
+|22 |integer |
+|31 |MicrogramsPerCubicMeterOfAir|
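For reference, the table above can be captured as a simple lookup, as in this illustrative sketch (not part of any Azure Maps SDK):

```python
# unitType codes from the table above mapped to their unit names.
UNIT_TYPES = {
    0: "feet", 1: "inches", 2: "miles", 3: "millimeter",
    4: "centimeter", 5: "meter", 6: "kilometer",
    7: "kilometersPerHour", 8: "knots", 9: "milesPerHour",
    10: "metersPerSecond", 11: "hectoPascals", 12: "inchesOfMercury",
    13: "kiloPascals", 14: "millibars", 15: "millimetersOfMercury",
    16: "poundsPerSquareInch", 17: "celsius", 18: "fahrenheit",
    19: "kelvin", 20: "percent", 21: "float", 22: "integer",
    31: "MicrogramsPerCubicMeterOfAir",
}

print(UNIT_TYPES[17])  # celsius
```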
## Weather icons
-Some of the Weather service (Preview) APIs return the `iconCode` in the response. The `iconCode` is a numeric value used to define the icon. Don't link directly to these images from your applications, the URLs can and will change.
+Some of the Weather service APIs return the `iconCode` in the response. The `iconCode` is a numeric value used to define the icon. Don't link directly to these images from your applications, because the URLs can and will change.
| Icon Number |Icon| Day | Night | Text | |-|:-:|--|-||
Some of the Weather service (Preview) APIs return the `iconCode` in the response
| 43 | :::image type="icon" source="./media/weather-services-concepts/mostly-cloudy-flurries-night.png"::: | No | Yes | Mostly Cloudy with Flurries| | 44 | :::image type="icon" source="./media/weather-services-concepts/mostly-cloudy-snow.png"::: | No | Yes | Mostly Cloudy with Snow| - ## Radar and satellite imagery color scale Via [Get Map Tile v2 API](/rest/api/maps/render-v2/get-map-tile) users can request latest radar and infrared satellite images. See below guide to help interpret colors used for radar and satellite tiles.
The table below provides guidance to interpret the radar images and create a map
| #8a32d7 | ![Color for mix-heavy.](./media/weather-services-concepts/color-8a32d7.png) | Mix-Heavy | | #6500ba | ![Color for mix-severe.](./media/weather-services-concepts/color-6500ba.png) | Mix-Severe |
-Detailed color palette for radar tiles with Hex color codes and dBZ values is shown below. dBZ represents precipitation intensity in weather radar.
+Detailed color palette for radar tiles with Hex color codes and dBZ values is shown below. dBZ represents precipitation intensity in weather radar.
| **RAIN** | **ICE** | **SNOW** | **MIXED** | |-|-|--|--|
Detailed color palette for radar tiles with Hex color codes and dBZ values is sh
| 3.75 (#92BE01) | 3.75 (#E69FC5) | 3.75 (#93C3EF) | 3.75 (#BD8EE6) | | 5 (#92BA02) | 5 (#E69DC4) | 5 (#8DC1EE) | 5 (#BB8BE6) | | 6.25 (#92B502) | 6.25 (#E69AC2) | 6.25 (#88BFEC) | 6.25 (#BA87E6) |
-| 6.75 (#92B403) | 7.5 (#E697C1) | 7.5 (#82BDEB) | 7.5 (#B883E6) |
+| 6.75 (#92B403) | 7.5 (#E697C1) | 7.5 (#82BDEB) | 7.5 (#B883E6) |
| 8 (#80AD02) | 8.75 (#E695C0) | 8.75 (#7DBAEA) | 8.75 (#B680E6) | | 9.25 (#6FA602) | 10 (#E692BE) | 10 (#77B8E8) | 10 (#B47CE6) | | 10.5 (#5EA002) | 11.25 (#E68FBD) | 11.25 (#72B6E7) | 11.25 (#B378E6) |
Detailed color palette for radar tiles with Hex color codes and dBZ values is sh
| 12.25 (#479702) | 13.75 (#E68ABA) | 13.75 (#67B2E5) | 13.75 (#AF71E6) | | 13.5 (#3D9202) | 15 (#E687B9) | 15 (#61AEE4) | 15 (#AE6EE6) | | 14.75 (#338D02) | 16.25 (#E685B8) | 16.25 (#5BABE3) | 16.25 (#AB6AE4) |
-| 16 (#298802) | 17.5 (#E682B6) | 17.5 (#56A8E2) | 17.5 (#A967E3) |
+| 16 (#298802) | 17.5 (#E682B6) | 17.5 (#56A8E2) | 17.5 (#A967E3) |
| 17.25 (#1F8302) | 18.75 (#E67FB5) | 18.75 (#50A5E1) | 18.75 (#A764E2) | | 17.75 (#1B8103) | 20 (#E67DB4) | 20 (#4BA2E0) | 20 (#A560E1) | | 19 (#187102) | 21.25 (#E275B0) | 21.25 (#459EDF) | 21.25 (#A35DE0) |
The table below provides guidance to interpret the infrared satellite images sho
| Hex color code | Color sample | Cloud Temperature | |-|--|-|
-| #b5b5b5 | ![Color tile for #b5b5b5.](./media/weather-services-concepts/color-b5b5b5.png) | Temperature-Low |
+| #b5b5b5 | ![Color tile for #b5b5b5.](./media/weather-services-concepts/color-b5b5b5.png) | Temperature-Low |
| #d24fa0 | ![Color tile for #d24fa0.](./media/weather-services-concepts/color-d24fa0.png) | | | #8a32d7 | ![Color tile for #8a32d7.](./media/weather-services-concepts/color-8a32d7.png) | | | #144bed | ![Color tile for #144bed.](./media/weather-services-concepts/color-144bed.png) | |
The table below provides guidance to interpret the infrared satellite images sho
| #ba0808 | ![Color tile for #ba0808.](./media/weather-services-concepts/color-ba0808.png) | | | #1f1f1f | ![Color tile for #1f1f1f.](./media/weather-services-concepts/color-1f1f1f.png) | Temperature-High | - Detailed color palette for infrared satellite tiles is shown below. |**Temp (K)**|**Hex color code**|
Below is the list of available Index groups (indexGroupId):
## Daily index range sets
-[Get Daily Indices API](/rest/api/maps/weather) returns the ranged value and its associated category name for each index ID. Range sets are not the same for all indices. The tables below show the various range sets used by the supported indices listed in [Index IDs and index groups IDs](#index-ids-and-index-groups-ids). To find out which indices use which range sets, go to the [Index IDs and Index Groups IDs](#index-ids-and-index-groups-ids) section of this document.
+[Get Daily Indices API](/rest/api/maps/weather) returns the ranged value and its associated category name for each index ID. Range sets aren't the same for all indices. The tables below show the various range sets used by the supported indices listed in [Index IDs and index groups IDs](#index-ids-and-index-groups-ids). To find out which indices use which range sets, go to the [Index IDs and Index Groups IDs](#index-ids-and-index-groups-ids) section of this document.
### Poor-Excellent 1 | Category Name | Begin Range | End Range |
- -|--|
- Poor | 0 | 2.99
- Fair | 3 | 4.99
- Good | 5 | 6.99
- Very Good | 7 | 8.99
- Excellent | 9 | 10
+ -|-|
+ Poor | 0 | 2.99
+ Fair | 3 | 4.99
+ Good | 5 | 6.99
+ Very Good | 7 | 8.99
+ Excellent | 9 | 10
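To make the range boundaries above concrete, here's a sketch of mapping an index value to its Poor-Excellent 1 category (the function is illustrative, not an Azure Maps API):

```python
def poor_excellent_1(value: float) -> str:
    """Map a daily index value (0-10) to its Poor-Excellent 1 category."""
    if value < 3:
        return "Poor"
    if value < 5:
        return "Fair"
    if value < 7:
        return "Good"
    if value < 9:
        return "Very Good"
    return "Excellent"

print(poor_excellent_1(8.2))  # Very Good
```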
### Poor-Excellent 2 | Category Name | Begin Range | End Range |
- -|--|
- Poor |0 | 3
- Fair |3.01 | 6
- Good |6.01 | 7.5
- Very Good |7.51 | 8.99
- Excellent |9 | 10
+ |-|--
+ Poor | 0 | 3
+ Fair | 3.01 | 6
+ Good | 6.01 | 7.5
+ Very Good | 7.51 | 8.99
+ Excellent | 9 | 10
### Excellent-Poor | Category Name | Begin Range | End Range |
- -|--|
- Excellent | 0.00 | 1.00
- Very Good | 1.01 | 3.00
- Good | 3.01 | 5.00
- Fair | 5.01 | 7.00
- Poor | 7.01 | 10.00
+ |-|
+ Excellent | 0.00 | 1.00
+ Very Good | 1.01 | 3.00
+ Good | 3.01 | 5.00
+ Fair | 5.01 | 7.00
+ Poor | 7.01 | 10.00
### Low-Extreme 1
- | Category Name | Begin Range | End Range |
- -|--|
- Low | 0 | 1.99
- Moderate | 2 | 3.99
- High | 4 | 5.99
- Very High | 6 | 7.99
- Extreme | 8 | 10
+ | Category Name | Begin Range | End Range |
+ -|-|
+ Low | 0 | 1.99
+ Moderate | 2 | 3.99
+ High | 4 | 5.99
+ Very High | 6 | 7.99
+ Extreme | 8 | 10
### Low-Extreme 2
- | Category Name | Begin Range | End Range |
- -|--|
- Low | 0 | 2.99
- Moderate | 3 | 4.99
- High | 5 | 6.99
- Very High | 7 | 8.99
- Extreme | 9 | 10
+ | Category Name | Begin Range | End Range |
+ |-|--
+ Low | 0 | 2.99
+ Moderate | 3 | 4.99
+ High | 5 | 6.99
+ Very High | 7 | 8.99
+ Extreme | 9 | 10
### Very Unlikely-Very Likely | Category Name | Begin Range | End Range |
- -|--|
- Very Unlikely | 0 | 1.99
- Unlikely | 2 | 3.99
- Possibly | 4 | 5.99
- Likely | 6 | 7.99
- Very Likely | 8 | 10
+ |-|--
+ Very Unlikely | 0 | 1.99
+ Unlikely | 2 | 3.99
+ Possibly | 4 | 5.99
+ Likely | 6 | 7.99
+ Very Likely | 8 | 10
### Very Unlikely-Very Likely 2 | Category Name | Begin Range | End Range |
- -|--|
- Very Unlikely | 0.00 | 1.00
- Unlikely | 1.01 | 3.00
- Possibly | 3.01 | 5.00
- Likely | 5.01 | 7.00
- Very Likely | 7.01 | 10.00
+ |-|
+ Very Unlikely | 0.00 | 1.00
+ Unlikely | 1.01 | 3.00
+ Possibly | 3.01 | 5.00
+ Likely | 5.01 | 7.00
+ Very Likely | 7.01 | 10.00
### Unlikely-Emergency | Category Name | Begin Range | End Range |
- -|--|
- Unlikely | 0 | 2.99
- Watch | 3 | 4.99
- Advisory | 5 | 6.99
- Warning | 7 | 8.99
- Emergency | 9 | 10
+ --|-|--
+ Unlikely | 0 | 2.99
+ Watch | 3 | 4.99
+ Advisory | 5 | 6.99
+ Warning | 7 | 8.99
+ Emergency | 9 | 10
### Beneficial-At Extreme Risk
-| Category Name | Begin Range | End Range |
- -|--|
- Beneficial | 0 | 1.99
- Neutral | 2 | 3.99
- At Risk | 4 | 5.99
- At High Risk | 6 | 7.99
- At Extreme Risk | 8 | 10
+| Category Name | Begin Range | End Range |
+ -|-|
+ Beneficial | 0 | 1.99
+ Neutral | 2 | 3.99
+ At Risk | 4 | 5.99
+ At High Risk | 6 | 7.99
+ At Extreme Risk | 8 | 10
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Maps Weather services frequently asked questions (FAQ)](weather-services-faq.yml)
+
+> [!div class="nextstepaction"]
+> [Azure Maps Weather services coverage](weather-coverage.md)
+
+> [!div class="nextstepaction"]
+> [Weather services API](/rest/api/maps/weather)
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
This article shows you how to create and manage log alerts. Azure Monitor log al
- Criteria: Logic to evaluate. If met, the alert fires. - Action: Notifications or automation - email, SMS, webhook, and so on. You can also [create log alert rules using Azure Resource Manager templates](../alerts/alerts-log-create-templates.md).-
-> [!NOTE]
-> [This page](alerts-unified-log.md) explains all of the concepts behind each setting used when setting up a log alert rule.
## Create a log alert rule in the Azure portal > [!NOTE] > This article describes creating alert rules using the new alert rule wizard.
azure-monitor Alerts Managing Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-instances.md
Last updated 2/23/2022
# Manage alert instances with unified alerts
-With the [unified alerts experience](./alerts-overview.md) in Azure Monitor, you can see all your different types of alerts across Azure. This spans multiple subscriptions, in a single pane. This article shows how you can view your alert instances, and how to find specific alert instances for troubleshooting.
+With the [unified alerts experience](./alerts-overview.md) in Azure Monitor, you can see all your different types of alerts across Azure. Unified alerts span multiple subscriptions in a single pane. This article shows how you can view your alert instances, and how to find specific alert instances for troubleshooting.
> [!NOTE] > You can only access alerts generated in the last 30 days.
You can go to the alerts page in any of the following ways:
![Screenshot of resource group Monitoring Alerts](media/alerts-managing-alert-instances/alert-rg.JPG)
-## Find alert instances
-
-The **Alerts Summary** page gives you an overview of all your alert instances across Azure. You can modify the summary view by selecting **multiple subscriptions** (up to a maximum of 5), or by filtering across **resource groups**, specific **resources**, or **time ranges**. Select **Total Alerts**, or any of the severity bands, to go to the list view for your alerts.
-
-![Screenshot of Alerts Summary page](media/alerts-managing-alert-instances/alerts-summary.jpg)
-
-On the **All Alerts** page, all the alert instances across Azure are listed. If youΓÇÖre coming to the portal from an alert notification, you can use the filters available to narrow in on that specific alert instance.
-
-> [!NOTE]
-> If you came to the page by selecting any of the severity bands, the list is pre-filtered for that severity.
-
-Apart from the filters available on the previous page, you can also filter on the basis of monitor service (for example, platform for metrics), monitor condition (fired or resolved), severity, alert state (new/acknowledged/closed), or the smart group ID.
-
-![Screenshot of All Alerts page](media/alerts-managing-alert-instances/all-alerts.jpg)
+## The alerts page
+The **Alerts** page summarizes all your alert instances across Azure. You can modify the results by selecting filters such as **time range**, **subscription**, **alert condition**, **severity**, and more. Select an alert instance to open the **Alert details** page and see more details about it.
> [!NOTE]
-> If you came to the page by selecting any of the severity bands, the list is pre-filtered for that severity.
+> If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
-Selecting any alert instance opens the **Alert Details** page, allowing you to see more details about that specific alert instance.
+
+## The alert details page
+ The **Alert details** page provides details about the selected alert. Select **Change user response** to update the user response to the alert. You can see all closed alerts on the **History** tab.
-![Screenshot of Alert Details page](media/alerts-managing-alert-instances/alert-details.jpg)
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
You can alert on metrics and logs, as described in [monitoring data sources](./.
- Activity log events - Health of the underlying Azure platform - Tests for website availability
+## Alerts experience
+### Alerts page
-## Manage alerts
-
-You can set the state of an alert to specify where it is in the resolution process. When the criteria specified in the alert rule is met, an alert is created or fired, and it has a status of *New*. You can change the status when you acknowledge an alert and when you close it. All state changes are stored in the history of the alert.
+The Alerts page provides a summary of the alerts created in the last 24 hours. You can filter the list by the subscription or any of the filter parameters at the top of the page. The page displays the total alerts for each severity. Select a severity to filter the alerts by that severity.
+> [!NOTE]
+> You can only access alerts generated in the last 30 days.
-The following alert states are supported.
+You can also [programmatically enumerate the alert instances generated on your subscriptions by using REST APIs](#manage-your-alert-instances-programmatically).
-| State | Description |
-|:|:|
-| New | The issue has been detected and hasn't yet been reviewed. |
-| Acknowledged | An administrator has reviewed the alert and started working on it. |
-| Closed | The issue has been resolved. After an alert has been closed, you can reopen it by changing it to another state. |
-*Alert state* is different and independent of the *monitor condition*. Alert state is set by the user. Monitor condition is set by the system. When an alert fires, the alert's monitor condition is set to *'fired'*, and when the underlying condition that caused the alert to fire clears, the monitor condition is set to *'resolved'*.
+You can narrow down the list by selecting values from any of these filters at the top of the page:
-The alert state isn't changed until the user changes it. Learn [how to change the state of your alerts and smart groups](./alerts-managing-alert-states.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+| Column | Description |
+|:|:|
+| Subscription | Select the Azure subscriptions for which you want to view the alerts. You can optionally choose to select all your subscriptions. Only alerts that you have access to in the selected subscriptions are included in the view. |
+| Resource group | Select a single resource group. Only alerts with targets in the selected resource group are included in the view. |
+| Resource type | Select one or more resource types. Only alerts with targets of the selected type are included in the view. This column is only available after a resource group has been specified. |
+| Resource | Select a resource. Only alerts with that resource as a target are included in the view. This column is only available after a resource type has been specified. |
+| Severity | Select an alert severity, or select **All** to include alerts of all severities. |
+| Alert condition | Select an alert condition, or select **All** to include alerts of all conditions. |
+| User response | Select a user response, or select **All** to include alerts of all user responses. |
+| Monitor service | Select a service, or select **All** to include all services. Only alerts created by rules that use service as a target are included. |
+| Time range | Only alerts fired within the selected time range are included in the view. Supported values are the past hour, the past 24 hours, the past seven days, and the past 30 days. |
-## Alerts experience
-The default Alerts page provides a summary of alerts that are created within a particular time range. It displays the total alerts for each severity, with columns that identify the total number of alerts in each state for each severity. Select any of the severities to open the [All Alerts](#all-alerts-page) page filtered by that severity.
+Select **Columns** at the top of the page to select which columns to show.
+### Alert details pane
-Instead, you can [programmatically enumerate the alert instances generated on your subscriptions by using REST APIs](#manage-your-alert-instances-programmatically).
+When you select an alert, the alert details pane shows details of the alert and lets you change how you want to respond to it.
-> [!NOTE]
- > You can only access alerts generated in the last 30 days.
-You can change the subscriptions or filter parameters to update the page.
+The Alert details pane includes:
-![Screenshot of Alerts page](media/alerts-overview/alerts-page.png)
-You can filter this view by selecting values in the drop-down menus at the top of the page.
+|Section |Description |
+|||
+|Summary | Displays the properties and other significant information about the alert. |
+|History | Lists all actions on the alert and any changes made to the alert. |
+## Manage alerts
-| Column | Description |
-|:|:|
-| Subscription | Select the Azure subscriptions for which you want to view the alerts. You can optionally choose to select all your subscriptions. Only alerts that you have access to in the selected subscriptions are included in the view. |
-| Resource group | Select a single resource group. Only alerts with targets in the selected resource group are included in the view. |
-| Time range | Only alerts fired within the selected time range are included in the view. Supported values are the past hour, the past 24 hours, the past 7 days, and the past 30 days. |
+You can set the user response of an alert to specify where it is in the resolution process. When the criteria specified in the alert rule are met, an alert is created or fired, and it has a status of *New*. You can change the status when you acknowledge an alert and when you close it. All user response changes are stored in the history of the alert.
-Select the following values at the top of the Alerts page to open another page:
+The following user responses are supported.
-| Value | Description |
+| User Response | Description |
|:|:|
-| Total alerts | The total number of alerts that match the selected criteria. Select this value to open the All Alerts view with no filter. |
-| Smart groups | The total number of smart groups that were created from the alerts that match the selected criteria. Select this value to open the smart groups list in the All Alerts view.
-| Total alert rules | The total number of alert rules in the selected subscription and resource group. Select this value to open the Rules view filtered on the selected subscription and resource group.
-
+| New | The issue has been detected and hasn't yet been reviewed. |
+| Acknowledged | An administrator has reviewed the alert and started working on it. |
+| Closed | The issue has been resolved. After an alert has been closed, you can reopen it by changing it to another user response. |
+The *user response* is different from, and independent of, the *alert condition*. The response is set by the user, while the alert condition is set by the system. When an alert fires, its alert condition is set to *fired*; when the underlying condition that caused the alert to fire clears, the alert condition is set to *resolved*.
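The lifecycle described above (a system-owned alert condition alongside a user-owned response, with all response changes recorded in history) can be modeled with a short sketch. This is purely illustrative, not an Azure SDK API; the class and names are hypothetical.

```python
# Illustrative model of the documented alert lifecycle. Not an Azure API:
# the Alert class and its methods are hypothetical.

VALID_RESPONSES = {"New", "Acknowledged", "Closed"}

class Alert:
    def __init__(self):
        # A newly fired alert starts as New, with the condition 'fired'.
        self.user_response = "New"
        self.condition = "fired"
        self.history = ["New"]

    def set_user_response(self, response):
        # The user response is set by the user; every change is kept in history.
        if response not in VALID_RESPONSES:
            raise ValueError(f"unknown user response: {response}")
        self.user_response = response
        self.history.append(response)

    def resolve_condition(self):
        # The alert condition is set by the system, independently of the
        # user response.
        self.condition = "resolved"

alert = Alert()
alert.set_user_response("Acknowledged")
alert.set_user_response("Closed")
# A closed alert can be reopened by changing it to another user response.
alert.set_user_response("New")
alert.resolve_condition()
print(alert.user_response, alert.condition)  # prints: New resolved
```

Note how closing the alert doesn't touch the condition: the two fields change independently, exactly as the text describes.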
## Manage alert rules
-To show the **Rules** page, select **Manage alert rules**. The Rules page is a single place for managing all alert rules across your Azure subscriptions. It lists all alert rules and can be sorted based on target resources, resource groups, rule name, or status. You can also edit, enable, or disable alert rules from this page.
-
- ![Screenshot of Rules page](./media/alerts-overview/alerts-preview-rules.png)
+To show the **Rules** page, select **Manage alert rules**. The Rules page is a single place for managing all alert rules across your Azure subscriptions. It lists all alert rules and can be sorted based on target resources, resource groups, rule name, or status. You can also edit, enable, or disable alert rules from this page.
+ :::image type="content" source="media/alerts-overview/alerts-rules.png" alt-text="Screenshot of alert rules page.":::
## Create an alert rule You can author alert rules in a consistent manner, regardless of the monitoring service or signal type.
You can learn more about how to create alert rules in [Create, view, and manage
Alerts are available across several Azure monitoring services. For information about how and when to use each of these services, see [Monitoring Azure applications and resources](../overview.md). -
-## All Alerts page
-To see the **All Alerts** page, select **Total Alerts**. Here you can view a list of alerts created within the selected time. You can view either a list of the individual alerts or a list of the smart groups that contain the alerts. Select the banner at the top of the page to toggle between views.
-
-![Screenshot of All Alerts page](media/alerts-overview/all-alerts-page.png)
-
-You can filter the view by selecting the following values in the drop-down menus at the top of the page:
-
-| Column | Description |
-|:|:|
-| Subscription | Select the Azure subscriptions for which you want to view the alerts. You can optionally choose to select all your subscriptions. Only alerts that you have access to in the selected subscriptions are included in the view. |
-| Resource group | Select a single resource group. Only alerts with targets in the selected resource group are included in the view. |
-| Resource type | Select one or more resource types. Only alerts with targets of the selected type are included in the view. This column is only available after a resource group has been specified. |
-| Resource | Select a resource. Only alerts with that resource as a target are included in the view. This column is only available after a resource type has been specified. |
-| Severity | Select an alert severity, or select **All** to include alerts of all severities. |
-| Monitor condition | Select a monitor condition, or select **All** to include alerts of all conditions. |
-| Alert state | Select an alert state, or select **All** to include alerts of all states. |
-| Monitor service | Select a service, or select **All** to include all services. Only alerts created by rules that use service as a target are included. |
-| Time range | Only alerts fired within the selected time range are included in the view. Supported values are the past hour, the past 24 hours, the past 7 days, and the past 30 days. |
-
-Select **Columns** at the top of the page to select which columns to show.
-
-## Alert details page
-When you select an alert, this page provides details of the alert and enables you to change its state.
-
-![Screenshot of Alert details page](media/alerts-overview/alert-detail2.png)
-
-The Alert details page includes the following sections:
-
-| Section | Description |
-|:|:|
-| Summary | Displays the properties and other significant information about the alert. |
-| History | Lists each action taken by the alert and any changes made to the alert. Currently limited to state changes. |
-| Diagnostics | Information about the smart group in which the alert is included. The *alert count* refers to the number of alerts that are included in the smart group. Includes other alerts in the same smart group that were created in the past 30 days, whatever of the time filter in the alerts list page. Select an alert to view its detail. |
- ## Azure role-based access control (Azure RBAC) for your alert instances The consumption and management of alert instances requires the user to have the Azure built-in roles of either [monitoring contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) or [monitoring reader](../../role-based-access-control/built-in-roles.md#monitoring-reader). These roles are supported at any Azure Resource Manager scope, from the subscription level to granular assignments at a resource level. For example, if a user only has monitoring contributor access for virtual machine `ContosoVM1`, that user can consume and manage only alerts generated on `ContosoVM1`.
The consumption and management of alert instances requires the user to have the
You might want to query programmatically for alerts generated against your subscription. Queries might be to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
-It is recommended you that you use [Azure Resource Graph](../../governance/resource-graph/overview.md) with the `AlertsManagementResources` schema for querying fired alerts. Resource Graph is recommended when you have to manage alerts generated across multiple subscriptions.
+We recommend that you use [Azure Resource Graph](../../governance/resource-graph/overview.md) with the `AlertsManagementResources` schema for querying fired alerts. Resource Graph is recommended when you have to manage alerts generated across multiple subscriptions.
The following sample request to the Resource Graph REST API returns alerts within one subscription in the last day:
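A Resource Graph call takes a JSON body with the subscriptions to query and a Kusto query string. The sketch below builds such a body for alerts fired in the last day; the exact property path (`properties.essentials.startDateTime`) and resource type string are assumptions based on the `AlertsManagementResources` schema, so verify them against the schema before use.

```python
import json

# Hypothetical Resource Graph request body for alerts fired in the last
# day. The KQL property paths are assumptions; check the
# AlertsManagementResources schema for the authoritative names.
subscription_id = "00000000-0000-0000-0000-000000000000"

payload = {
    "subscriptions": [subscription_id],
    "query": (
        "alertsmanagementresources "
        "| where type =~ 'microsoft.alertsmanagement/alerts' "
        "| where todatetime(properties.essentials.startDateTime) > ago(1d)"
    ),
}

# This body would be POSTed to the Resource Graph resources endpoint.
body = json.dumps(payload)
print(body)
```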
azure-monitor Azure Data Explorer Monitor Cross Service Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-cross-service-query.md
Title: Cross service query between Azure Monitor and Azure Data Explorer description: Query Azure Data Explorer data through Azure Log Analytics tools vice versa to join and analyze all your data in one place.---++ Previously updated : 06/12/2020 Last updated : 03/28/2022+ # Cross service query - Azure Monitor and Azure Data Explorer
Use Azure Data Explorer to query data that was exported from your Log Analytics
Learn more about: * [create cross service queries between Azure Data Explorer and Azure Monitor](/azure/data-explorer/query-monitor-data). Query Azure Monitor data from Azure Data Explorer * [create cross service queries between Azure Monitor and Azure Data Explorer](./azure-monitor-data-explorer-proxy.md). Query Azure Data Explorer data from Azure Monitor
-* [Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
+* [Log Analytics workspace data export in Azure Monitor](/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
azure-monitor Azure Data Explorer Monitor Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-proxy.md
Title: Query data in Azure Monitor using Azure Data Explorer description: Use Azure Data Explorer to perform cross product queries between Azure Data Explorer, Log Analytics workspaces and classic Application Insights applications in Azure Monitor.---++ Previously updated : 10/13/2020 Last updated : 03/28/2022+
The following syntax options are available when calling the Log Analytics or App
- Read more about the [data structure of Log Analytics workspaces and Application Insights](data-platform-logs.md). - Learn to [write queries in Azure Data Explorer](/azure/data-explorer/write-queries).--
+-
azure-monitor Azure Data Explorer Query Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-query-storage.md
Title: Query exported data from Azure Monitor using Azure Data Explorer description: Use Azure Data Explorer to query data that was exported from your Log Analytics workspace to an Azure storage account.-- Last updated 03/22/2022
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
Title: Cross-resource query Azure Data Explorer by using Azure Monitor description: Use Azure Monitor to perform cross-product queries between Azure Data Explorer, Log Analytics workspaces, and classic Application Insights applications in Azure Monitor.---++ Previously updated : 12/02/2020 Last updated : 03/28/2022+ # Cross-resource query Azure Data Explorer by using Azure Monitor
Kusto Explorer automatically signs you in to the tenant to which the user accoun
## Next steps * [Write queries](/azure/data-explorer/write-queries) * [Query data in Azure Monitor by using Azure Data Explorer](/azure/data-explorer/query-monitor-data)
-* [Perform cross-resource log queries in Azure Monitor](../logs/cross-workspace-query.md)
+* [Perform cross-resource log queries in Azure Monitor](../logs/cross-workspace-query.md)
azure-monitor Operationalinsights Api Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/operationalinsights-api-retirement.md
- Title: Azure Monitor API retirement
-description: Describes the retirement of older versions of the OperationalInsights resource provider API.
--- Previously updated : 10/29/2020---
-# OperationalInsights API version retirement
-Microsoft provides notification at least 12 months in advance of retiring an API in order to smooth the transition to a newer/supported version. We have released a new version (2020-08-01) for **OperationalInsights** resource provider APIs and will retire any earlier API versions on February 29, 2024.
-
-We encourage you to start using version 2020-08-01 now to gain the benefits of new functionality, such as [dedicated cluster](./logs-dedicated-clusters.md), [customer-managed keys](../logs/customer-managed-keys.md), [private link](./private-link-security.md) and [data export](./logs-data-export.md). Also, new features and functionality and optimizations are only added to the current API.
-
-After February 29, 2024 Azure Monitor will no longer support earlier APIs versions than 2020-08-01. If you prefer not to upgrade, requests sent from earlier versions will continue to be served by the Azure Monitor service until February 29, 2024.
-
-## Migration steps
-Depending on the configuration method you use, you should update the new version in **REST** requests and **Resource Manager templates**. Follow the examples below to update the API version:
-
-1. REST API requests use the API version in the URL of the request. Replace that version with the latest version (2020-08-01) as shown in the following example.
-
- ```rest
- https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}?api-version=2020-08-01
- ```
-
-2. Azure Resource Manager templates use the API version in the **apiVersion** property of the resource. Replace that version with the latest version (2020-08-01) as shown in the following example.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "workspaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the workspace."
- }
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/workspaces",
- "name": "[parameters('workspaceName')]",
- "apiVersion": "2020-08-01",
- "location": "westus",
- "properties": {
- "sku": {
- "name": "pergb2018"
- },
- "retentionInDays": 30,
- "features": {
- "searchVersion": 1,
- "legacy": 0,
- "enableLogAccessUsingOnlyResourcePermissions": true
- }
- }
- }
- ]
- }
- }
- ```
--
-### More information
-If you have questions, get answers from [our tech community experts]( https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor). If you have a support plan and you need technical help, create a [support request]( https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest):
-1. Under *Issue type*, select **Technical**.
-2. Under *Subscription*, select your subscription.
-3. Under *Service*, select **My services**, then select **Log Analytics**.
-4. Under *Summary*, type a description of your issue.
-5. Under *Problem type*, select **Log Analytics workspace management**.
-6. Under *Problem subtype*, select **ARM templates, PowerShell and CLI**.
-
-## Next steps
--- See the [reference for the OperationalInsights workspace API](/rest/api/loganalytics/workspaces).
azure-monitor Powershell Workspace Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/powershell-workspace-configuration.md
Title: Create & configure Log Analytics with PowerShell description: Log Analytics workspaces in Azure Monitor store data from servers in your on-premises or cloud infrastructure. You can collect machine data from Azure storage when generated by Azure diagnostics. -- Previously updated : 10/20/2021++ Last updated : 03/28/2022+
azure-monitor Quick Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/quick-create-workspace.md
Title: Create a Log Analytics workspace in the Azure portal | Microsoft Docs description: Learn how to create a Log Analytics workspace to enable management solutions and data collection from your cloud and on-premises environments in the Azure portal. -- Previously updated : 03/18/2021++ Last updated : 03/28/2022+
azure-portal Networking Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/networking-quota-requests.md
Title: Increase networking quotas description: Learn how to request a networking quota increase in the Azure portal. Previously updated : 12/02/2021 Last updated : 03/25/2022 # Increase networking quotas
-This article shows how to request increases for networking quotas in the [Azure portal](https://portal.azure.com).
+This article shows how to request increases for networking quotas from [Azure Home](https://portal.azure.com) or from **My quotas**, a centralized location where you can view your quota usage and request quota increases.
-To view your current networking usage and quota in the Azure portal, open your subscription, then select **Usage + quotas**. You can also use the following options to view your network usage and limits.
+For quick access to request an increase, select **Quotas** on the Azure Home page.
-- [Usage CLI](/cli/azure/network#az-network-list-usages)-- [PowerShell](/powershell/module/azurerm.network/get-azurermnetworkusage)-- [The network usage API](/rest/api/virtualnetwork/virtualnetworks/listusage)
-You can request an increase in the Azure portal by using **Help + support** or in **Usage + quotas** for your subscription.
+If you don't see **Quotas** on Azure Home, type "quotas" in the search box, then select **Quotas**. The **Quotas** icon will then appear on your Home page the next time you visit.
-> [!Note]
-> To change the default size of **Public IP Prefixes**, select **Min Public IP InterNetwork Prefix Length** from the dropdown list.
+You can also use the following options to view your network quota usage and limits:
-## Request networking quota increase by using Help + support
+- [Azure CLI](/cli/azure/network#az-network-list-usages)
+- [Azure PowerShell](/powershell/module/azurerm.network/get-azurermnetworkusage)
+- [REST API](/rest/api/virtualnetwork/virtualnetworks/listusage)
+- **Usage + quotas** (in the left pane when viewing your subscription in the Azure portal)
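The REST API option above is addressed per virtual network. As a rough sketch, the request URL is assembled from the subscription, resource group, and virtual network name; the `api-version` value here is an assumption, so confirm the current version in the REST reference.

```python
# Sketch of composing the virtual network usage REST URL. The api-version
# value is an assumption; check the REST API reference for the current one.
base = "https://management.azure.com"
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "myResourceGroup"      # hypothetical names
vnet_name = "myVNet"
api_version = "2021-05-01"

url = (
    f"{base}/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}"
    f"/usages?api-version={api_version}"
)
print(url)
```

An authenticated GET against this URL returns the usage records for the virtual network.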
-Follow the instructions below to create a networking quota increase request by using **Help + support** in the Azure portal.
+Based on your subscription, you can typically request increases for these quotas:
-1. Sign in to the [Azure portal](https://portal.azure.com), and [open a new support request](how-to-create-azure-support-request.md).
+- Public IP Addresses
+- Public IP Addresses - Standard
+- Public IPv4 Prefix Length
-1. For **Issue type**, choose **Service and subscription limits (quotas)**.
+## Request networking quota increases
-1. Select the subscription that needs an increased quota.
+Follow these steps to request a networking quota increase from Azure Home.
-1. Under **Quota type**, select **Networking**. Then select **Next**.
+1. From [Azure Home](https://portal.azure.com), select **Quotas** and then select **Microsoft.Network**.
- :::image type="content" source="media/networking-quota-request/new-networking-quota-request.png" alt-text="Screenshot of a new networking quota increase request in the Azure portal.":::
+1. Find the quota you want to increase, then select the support icon.
-1. In the **Problem details** section, select **Enter details**. Follow the prompts to select a deployment model, location, the resources to include in your request, and the new limit you would like on the subscription for those resources. When you're finished, select **Save and continue** to continue creating your support request.
+ :::image type="content" source="media/networking-quota-request/quota-support-icon.png" alt-text="Screenshot showing the support icon for a networking quota.":::
- :::image type="content" source="media/networking-quota-request/quota-details-network.png" alt-text="Screenshot of the Quota details screen for a networking quota increase request in the Azure portal.":::
+1. In the **New support request** form, on the **Problem description** screen, some fields will be pre-filled for you. In the **Quota type** list, select **Networking**, then select **Next**.
-1. Complete the rest of the **Additional information** screen, and then select **Next**.
+ :::image type="content" source="media/networking-quota-request/new-networking-quota-request.png" alt-text="Screenshot of a networking quota support request in the Azure portal.":::
-1. On the **Review + create** screen, review the details that you'll send to support, and then select **Create**.
+1. On the **Additional details** screen, under **Provide details for the request**, select **Enter details**.
-## Request networking quota increase from Usage + quotas
+1. In the **Quota details** pane, enter the information for your request.
-Follow these instructions to create a networking quota increase request from **Usage + quotas** in the Azure portal.
+ > [!IMPORTANT]
+ > To increase a static public IP address quota, select **Other** in the **Resources** list, then specify this information in the **Details** section.
-1. From https://portal.azure.com, search for and select **Subscriptions**.
+ :::image type="content" source="media/networking-quota-request/quota-details-network.png" alt-text="Screenshot of the Quota details pane for a networking quota increase request.":::
-1. Select the subscription that needs an increased quota.
+1. Select **Save and continue**. The information you entered will appear in the **Request summary** under **Problem details**.
-1. Select **Usage + quotas**.
+1. Continue to fill out the form, including your preferred contact method. When you're finished, select **Next**.
+1. Review your quota increase request information, then select **Create**.
-1. In the upper right corner, select **Request increase**.
+After your networking quota increase request has been submitted, a support engineer will contact you and assist you with the request.
-1. Follow the steps above (starting at step 4) to complete your request.
+For more information about support requests, see [Create an Azure support request](how-to-create-azure-support-request.md).
## Next steps
azure-resource-manager Contribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/contribute.md
+
+ Title: Contribute to Bicep
+description: Describes how to submit open source contributions to Bicep.
++++ Last updated : 03/27/2022++
+# Contribute to Bicep
+
+Bicep is an open-source project. That means you can contribute to Bicep's development, and participate in the broader Bicep community.
+
+## Contribution types
+
+- **Azure Quickstart Templates.** You can contribute example Bicep files and ARM templates to the Azure Quickstart Templates repository. For more information, see the [Azure Quickstart Templates contribution guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/README.md#contribution-guide).
+- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see [Microsoft Docs contributor guide overview](/contribute/).
+- **Snippets.** Do you have a favorite snippet you think the community would benefit from? You can add it to the Visual Studio Code extension's collection of snippets. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md#snippets).
+- **Code changes.** If you're a developer and you have ideas you'd like to see in the Bicep language or tooling, you can contribute a pull request. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md).
+
+## Next steps
+
+To learn about the structure and syntax of Bicep, see [Bicep file structure](./file.md).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [azure-firewall-limits](../../../includes/firewall-limits.md)]
-### Azure Front Door Service limits
+### Azure Front Door (classic) limits
[!INCLUDE [azure-front-door-service-limits](../../../includes/front-door-limits.md)]
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/advance-notifications.md
Previously updated : 03/07/2022 Last updated : 03/25/2022 # Advance notifications for planned maintenance events (Preview) [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-Advance notifications (Preview) are available for databases configured to use a non-default [maintenance window](maintenance-window.md). Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
+Advance notifications (Preview) are available for databases configured to use a non-default [maintenance window](maintenance-window.md) and managed instances with any configuration (including the default one). Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
Notifications can be configured so you can get texts, emails, Azure push notifications, and voicemails when planned maintenance is due to begin in the next 24 hours. Additional notifications are sent when maintenance begins and when maintenance ends.
-Advance notifications cannot be configured for the **System default** maintenance window option. Choose a maintenance window other than the **System default** to configure and enable Advance notifications.
+> [!IMPORTANT]
+> For Azure SQL Database, advance notifications cannot be configured for the **System default** maintenance window option. Choose a maintenance window other than the **System default** to configure and enable Advance notifications.
> [!NOTE] > While [maintenance windows](maintenance-window.md) are generally available, advance notifications for maintenance windows are in public preview for Azure SQL Database and Azure SQL Managed Instance.
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Title: Prepare environment for link feature
+ Title: Prepare environment for Managed Instance link
-description: This guide teaches you how to prepare your environment to use the SQL Managed Instance link to replicate your database over to Azure SQL Managed Instance, and possibly failover.
+description: Learn how to prepare your environment for using a Managed Instance link to replicate and fail over your database to SQL Managed Instance.
Last updated 03/22/2022
-# Prepare environment for link feature - Azure SQL Managed Instance
+# Prepare your environment for a link - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate databases from SQL Server instance to Azure SQL Managed Instance.
+This article teaches you how to prepare your environment for a [Managed Instance link](link-feature.md) so that you can replicate databases from SQL Server to Azure SQL Managed Instance.
> [!NOTE]
-> The link feature for Azure SQL Managed Instance is currently in preview.
+> The link is a feature of Azure SQL Managed Instance and is currently in preview.
## Prerequisites
-To use the Managed Instance link feature, you need the following prerequisites:
+To use the link with Azure SQL Managed Instance, you need the following prerequisites:
- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019?filetype=EXE), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).-- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have it.
## Prepare your SQL Server instance
-To prepare your SQL Server instance, you need to validate:
-- you're on the minimum supported version;-- you've enabled the availability group feature;-- you've added the proper trace flags at startup;-- your databases are in full recovery mode and backed up.
+To prepare your SQL Server instance, you need to validate that:
+
+- You're on the minimum supported version.
+- You've enabled the availability groups feature.
+- You've added the proper trace flags at startup.
+- Your databases are in full recovery mode and backed up.
You'll need to restart SQL Server for these changes to take effect.
-### Install CU15 (or higher)
+### Install CU15 (or later)
The link feature for SQL Managed Instance was introduced in CU15 of SQL Server 2019. To check your SQL Server version, run the following Transact-SQL (T-SQL) script on SQL Server: ```sql Execute on SQL Server
+-- Run on SQL Server
-- Shows the version and CU of the SQL Server SELECT @@VERSION ```
-If your SQL Server version is lower than CU15 (15.0.4198.2), either install the [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6), or the current latest cumulative update. Your SQL Server instance will be restarted during the update.
+If your SQL Server version is earlier than CU15 (15.0.4198.2), install [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) or the latest cumulative update. You must restart your SQL Server instance during the update.
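The build number embedded in the `SELECT @@VERSION` output can be compared against the CU15 minimum (15.0.4198.2) to decide whether an update is needed. A minimal sketch, with a hypothetical sample version string:

```python
import re

# Parse the build number out of a @@VERSION string and compare it against
# the CU15 minimum (15.0.4198.2). The sample string below is hypothetical.
CU15_MIN = (15, 0, 4198, 2)

version_string = (
    "Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64)"
)

# The build is the first four-part dotted number in the string.
match = re.search(r"(\d+)\.(\d+)\.(\d+)\.(\d+)", version_string)
build = tuple(int(part) for part in match.groups())

needs_update = build < CU15_MIN
print(build, "needs update:", needs_update)
```

Tuple comparison handles each version component in order, so for example build 15.0.4100.1 would correctly report that an update is needed.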
-### Create database master key in the master database
+### Create a database master key in the master database
-Create database master key in the master database by running the following T-SQL script on SQL Server.
+Create a database master key in the master database by running the following T-SQL script on SQL Server:
```sql Execute on SQL Server Create MASTER KEY
+-- Run on SQL Server
+-- Create a master key
USE MASTER CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>' ```
-To check if you have database master key, use the following T-SQL script on SQL Server.
+To make sure that you have the database master key, use the following T-SQL script on SQL Server:
```sql Execute on SQL Server
+-- Run on SQL Server
SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%' ```
-### Enable availability groups feature
+### Enable availability groups
-The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
+The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't enabled by default. To learn more, review [Enable the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
-To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script on SQL Server:
+To confirm that the Always On availability groups feature is enabled, run the following T-SQL script on SQL Server:
```sql Execute on SQL Server Is HADR enabled on this SQL Server?
+-- Run on SQL Server
+-- Is Always On enabled on this SQL Server instance?
declare @IsHadrEnabled sql_variant = (select SERVERPROPERTY('IsHadrEnabled')) select @IsHadrEnabled as IsHadrEnabled,
select
If the availability groups feature isn't enabled, follow these steps to enable it:
-1. Open the **SQL Server Configuration Manager**.
-1. Choose the SQL Server service from the navigation pane.
-1. Right-click on the SQL Server service, and select **Properties**:
+1. Open SQL Server Configuration Manager.
+1. Select **SQL Server Services** from the left pane.
+1. Right-click the SQL Server service, and then select **Properties**.
- :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot showing S Q L Server configuration manager.":::
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot that shows SQL Server Configuration Manager, with selections for opening properties for the service.":::
1. Go to the **Always On Availability Groups** tab.
1. Select the **Always On Availability Groups** checkbox, and then select **OK**.
    :::image type="content" source="./media/managed-instance-link-preparation/always-on-availability-groups-properties.png" alt-text="Screenshot that shows the properties for Always On availability groups.":::
1. Select **OK** in the dialog to restart the SQL Server service.
### Enable startup trace flags
To optimize the performance of your SQL Managed Instance link, we recommend enabling the following trace flags at startup:

- `-T1800`: This trace flag optimizes performance when the log files for the primary and secondary replicas in an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4K. If both primary and secondary replicas have a disk sector size of 4K, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c).
- `-T9567`: This trace flag enables compression of the data stream for availability groups during automatic seeding. The compression increases the load on the processor but can significantly reduce transfer time during seeding.
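As an optional sanity check before you change the startup parameters, you can list the current status of these two trace flags. This check isn't part of the original steps; it's a minimal sketch:

```sql
-- Run on SQL Server
-- Check whether trace flags 1800 and 9567 are already enabled
-- (the Global column shows 1 if a flag is active instance-wide)
DBCC TRACESTATUS (1800, 9567)
```

Even if the flags are already active for the running instance, you still need the startup parameters so that they survive a restart.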
To enable these trace flags at startup, use the following steps:

1. Open SQL Server Configuration Manager.
1. Select **SQL Server Services** from the left pane.
1. Right-click the SQL Server service, and then select **Properties**.
    :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot that shows SQL Server Configuration Manager.":::

1. Go to the **Startup Parameters** tab. In **Specify a startup parameter**, enter `-T1800` and select **Add** to add the startup parameter. Then enter `-T9567` and select **Add** to add the other trace flag. Select **Apply** to save your changes.

    :::image type="content" source="./media/managed-instance-link-preparation/startup-parameters-properties.png" alt-text="Screenshot that shows startup parameter properties.":::

1. Select **OK** to close the **Properties** window.

To learn more, review the [syntax for enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql).

### Restart SQL Server and validate the configuration
After you've ensured that you're on a supported version of SQL Server, enabled the Always On availability groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these changes:
1. Open SQL Server Configuration Manager.
1. Select **SQL Server Services** from the left pane.
1. Right-click the SQL Server service, and then select **Restart**.
    :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-restart.png" alt-text="Screenshot that shows the SQL Server restart command call.":::
After the restart, run the following T-SQL script on SQL Server to validate the configuration of your SQL Server instance:
```sql
-- Run on SQL Server

-- Shows the version and CU of SQL Server
SELECT @@VERSION

-- Shows if the Always On availability groups feature is enabled
SELECT SERVERPROPERTY ('IsHadrEnabled')

-- Lists all trace flags enabled on SQL Server
DBCC TRACESTATUS
```
Your SQL Server version should be 15.0.4198.2 or later, the Always On availability groups feature should be enabled, and you should have the trace flags `-T1800` and `-T9567` enabled. The following screenshot is an example of the expected outcome for a SQL Server instance that has been properly configured:
### Set up database recovery and backup
All databases that will be replicated via the link must be in full recovery mode and have at least one backup. Run the following code on SQL Server:
```sql
-- Run on SQL Server
-- Set full recovery mode for all databases that you want to replicate.
ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
GO
```
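If any of your databases have never been backed up, a minimal sketch of the required full backup might look like the following. The disk path is only an illustrative placeholder, not part of the original steps; use a location that exists on your server:

```sql
-- Run on SQL Server
-- Take at least one full backup of each database that you want to replicate.
-- The disk path below is an example placeholder.
BACKUP DATABASE [<DatabaseName>] TO DISK = N'C:\Backup\<DatabaseName>.bak'
GO
```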
## Configure network connectivity
For the link to work, you must have network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server instance resides: on-premises or on a virtual machine (VM).
### SQL Server on Azure Virtual Machines
Deploying SQL Server on Azure Virtual Machines in the same Azure virtual network that hosts SQL Managed Instance is the simplest method, because network connectivity will automatically exist between the two instances. To learn more, see the detailed tutorial [Deploy and configure an Azure VM to connect to Azure SQL Managed Instance](./connect-vm-instance-configure.md).
If your SQL Server on Azure Virtual Machines instance is in a different virtual network from your managed instance, either connect the two Azure virtual networks by using [global virtual network peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913) or configure [VPN gateways](../../vpn-gateway/tutorial-create-gateway-portal.md).
>[!NOTE]
> Global virtual network peering is enabled by default on managed instances provisioned after November 2020. [Raise a support ticket](../database/quota-increase-request.md) to enable global virtual network peering on older instances.
### SQL Server outside Azure
If your SQL Server instance is hosted outside Azure, establish a VPN connection between SQL Server and SQL Managed Instance by using either of these options:
- [Site-to-site VPN connection](/office365/enterprise/connect-an-on-premises-network-to-a-microsoft-azure-virtual-network)
- [Azure ExpressRoute connection](../../expressroute/expressroute-introduction.md)
> [!TIP]
> We recommend ExpressRoute for the best network performance when you're replicating data. Provision a gateway with enough bandwidth for your use case.
### Network ports between the environments
Port 5022 needs to allow inbound and outbound traffic between SQL Server and SQL Managed Instance. Port 5022 is the standard database mirroring endpoint port for availability groups. It can't be changed or customized.
The following table describes port actions for each environment:

|Environment|What to do|
|:--|:--|
|SQL Server (in Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of SQL Managed Instance. If necessary, do the same on the Windows firewall. Create a network security group (NSG) rule in the virtual network that hosts the VM to allow communication on port 5022. |
|SQL Server (outside Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of SQL Managed Instance. If necessary, do the same on the Windows firewall. |
|SQL Managed Instance |[Create an NSG rule](../../virtual-network/manage-network-security-group.md#create-a-security-rule) in the Azure portal to allow inbound and outbound traffic from the IP address of SQL Server on port 5022 to the virtual network that hosts SQL Managed Instance. |
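Because port 5022 can't be customized, you might also want to confirm that no existing database mirroring endpoint already occupies it on your SQL Server instance. This optional check isn't part of the original steps; it's a sketch using a standard catalog view:

```sql
-- Run on SQL Server
-- List any existing database mirroring endpoints and the ports they use
SELECT name, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING'
```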
Use the following PowerShell script on the Windows host of the SQL Server instance to open ports in the Windows firewall:
```powershell
New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction inbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction outbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
```
## Test bidirectional network connectivity
Bidirectional network connectivity between SQL Server and SQL Managed Instance is necessary for the link to work. After you open ports on the SQL Server side and configure an NSG rule on the SQL Managed Instance side, test connectivity.
### Test the connection from SQL Server to SQL Managed Instance
To check if SQL Server can reach SQL Managed Instance, use the following `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name (FQDN) of the managed instance. You can copy the FQDN from the managed instance's overview page in the Azure portal.
```powershell
tnc <ManagedInstanceFQDN> -port 5022
```
A successful test shows `TcpTestSucceeded : True`.
If the response is unsuccessful, verify the following network settings:

- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of SQL Managed Instance.
- There's an NSG rule that allows communication on port 5022 for the virtual network that hosts SQL Managed Instance.
### Test the connection from SQL Managed Instance to SQL Server

To check that SQL Managed Instance can reach SQL Server, you first create a test endpoint. Then you use the SQL Agent to run a PowerShell script with the `tnc` command pinging SQL Server on port 5022 from the managed instance.

To create a test endpoint, connect to SQL Server and run the following T-SQL script:
```sql
-- Run on SQL Server

-- Create the certificate needed for the test endpoint
USE MASTER
CREATE CERTIFICATE TEST_CERT
WITH SUBJECT = N'Certificate for SQL Server',
EXPIRY_DATE = N'3/30/2051'
GO

-- Create the test endpoint on SQL Server
USE MASTER
CREATE ENDPOINT TEST_ENDPOINT
    STATE = STARTED
    AS TCP (
        LISTENER_PORT = 5022,
        LISTENER_IP = ALL
    )
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = CERTIFICATE TEST_CERT,
        ENCRYPTION = REQUIRED ALGORITHM AES
    )
```
To verify that the SQL Server endpoint is receiving connections on port 5022, run the following PowerShell command on the host operating system of your SQL Server instance:
```powershell
tnc localhost -port 5022
```
A successful test shows `TcpTestSucceeded : True`. You can then proceed to creating a SQL Agent job on the managed instance to try testing the SQL Server test endpoint on port 5022 from the managed instance.

Next, create a SQL Agent job on the managed instance called `NetHelper` by using the public IP address or DNS name that can be resolved from the managed instance for `SQL_SERVER_ADDRESS`. Run the following T-SQL script on the managed instance:
```sql
-- Run on the managed instance
-- SQL_SERVER_ADDRESS should be a public IP address, or the DNS name that can be resolved from the SQL Managed Instance host machine.
DECLARE @SQLServerIpAddress NVARCHAR(MAX) = '<SQL_SERVER_ADDRESS>'
DECLARE @tncCommand NVARCHAR(MAX) = 'tnc ' + @SQLServerIpAddress + ' -port 5022 -InformationLevel Quiet'
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
```
Run the SQL Agent job by running the following T-SQL command on the managed instance:
```sql
-- Run on the managed instance
EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
```
Run the following query on the managed instance to show the log of the SQL Agent job:
```sql
-- Run on the managed instance
SELECT
    sj.name JobName,
    sjs.step_id,
    sjs.step_name,
    sjsl.log,
    sjsl.date_modified
FROM
WHERE
If the connection is successful, the log will show `True`. If the connection is unsuccessful, the log will show `False`.
Finally, drop the test endpoint and certificate on SQL Server by using the following T-SQL commands:
```sql
-- Run on SQL Server
DROP ENDPOINT TEST_ENDPOINT
GO
DROP CERTIFICATE TEST_CERT
GO
```

If the connection is unsuccessful, verify the following items:

- The firewall on the host SQL Server instance allows inbound and outbound communication on port 5022.
- An NSG rule for the virtual network that hosts SQL Managed Instance allows communication on port 5022.
- If your SQL Server instance is on an Azure VM, an NSG rule allows communication on port 5022 on the virtual network that hosts the VM.
- SQL Server is running.

> [!CAUTION]
> Proceed with the next steps only if you've validated network connectivity between your source and target environments. Otherwise, troubleshoot network connectivity issues before proceeding.
## Migrate a certificate of a TDE-protected database
If you're migrating a SQL Server database protected by Transparent Data Encryption to a managed instance, you must migrate the corresponding encryption certificate from the on-premises or Azure VM SQL Server instance to the managed instance before using the link. For detailed steps, see [Migrate a TDE certificate to a managed instance](tde-certificate-migrate.md).
## Install SSMS
SQL Server Management Studio (SSMS) v18.11.1 is the easiest way to use a SQL Managed Instance link. [Download SSMS version 18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms) and install it to your client machine.
After installation finishes, open SSMS and connect to your supported SQL Server instance. Right-click a user database and validate that the **Azure SQL Managed Instance link** option appears on the menu.
## Next steps
After you've prepared your environment, you're ready to start [replicating your database](managed-instance-link-use-ssms-to-replicate-database.md). To learn more, review [Link feature for Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
Title: Fail over a database with the link via T-SQL & PowerShell scripts
description: Learn how to use Transact-SQL and PowerShell scripts to fail over a database from SQL Server to SQL Managed Instance by using the Managed Instance link.
Last updated 03/15/2022
# Fail over (migrate) a database with a link via T-SQL and PowerShell scripts - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts and a [Managed Instance link](link-feature.md) to fail over (migrate) your database from SQL Server to SQL Managed Instance.
> [!NOTE]
> - The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a [SQL Server Management Studio (SSMS) wizard](managed-instance-link-use-ssms-to-failover-database.md) to fail over a database with the link.
> - The PowerShell scripts in this article make REST API calls on the SQL Managed Instance side.
Database failover from SQL Server to SQL Managed Instance breaks the link between the two databases. Failover stops replication and leaves both databases in an independent state, ready for individual read/write workloads.
To start migrating your database to SQL Managed Instance, first stop any application workloads on SQL Server during your maintenance hours. This enables SQL Managed Instance to catch up with database replication and migrate to Azure while mitigating data loss.
While the primary database is a part of an Always On availability group, you can't set it to read-only mode. You need to ensure that your applications aren't committing transactions to SQL Server.
## Switch the replication mode
The replication between SQL Server and SQL Managed Instance is asynchronous by default. Before you migrate your database to Azure, switch the link to synchronous mode. Synchronous replication across large network distances might slow down transactions on the primary SQL Server instance.

Switching from async to sync mode requires a replication mode change on SQL Managed Instance and SQL Server.
### Switch replication mode (SQL Managed Instance)
Use the following PowerShell script to call a REST API that changes the replication mode from asynchronous to synchronous on SQL Managed Instance. We suggest that you make the REST API call by using Azure Cloud Shell in the Azure portal. In the script, replace:

- `<YourSubscriptionID>` with your subscription ID.
- `<ManagedInstanceName>` with the name of your managed instance.
- `<DAGName>` with the name of the distributed availability group that you want to get the status for.
```powershell
# Run in Azure Cloud Shell
# ====================================================================================
# POWERSHELL SCRIPT TO SWITCH REPLICATION MODE SYNC-ASYNC ON MANAGED INSTANCE
# USER CONFIGURABLE VALUES
# (C) 2021-2022 SQL Managed Instance product group
# ====================================================================================
# Enter your Azure subscription ID
$SubscriptionID = "<SubscriptionID>"
# Enter your managed instance name (for example, "sqlmi1")
$ManagedInstanceName = "<ManagedInstanceName>"
# Enter the distributed availability group name (the link name)
$DAGName = "<DAGName>"
# ====================================================================================
# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
# ====================================================================================
# Log in and select a subscription if needed
if ((Get-AzContext ) -eq $null)
{
    echo "Logging to Azure subscription"
    Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID
# Build a URI for the API call
#
$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG + "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
echo $uriFull
# Build the API request body
#
$bodyFull = "{`"properties`":{`"ReplicationMode`":`"sync`"}}"
echo $bodyFull
# Get an authentication token and build the header
#
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$currentAzureContext = Get-AzContext
$authToken = $token.AccessToken
$headers = @{}
$headers.Add("Authorization", "Bearer "+"$authToken")
# Invoke the API call
#
echo "Invoking API call switch Async-Sync replication mode on Managed Instance"
Invoke-WebRequest -Method PATCH -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
```
-## Switch replication mode on SQL Server
+### Switch replication mode (SQL Server)
+
+Use the following T-SQL script on SQL Server to change the replication mode of the distributed availability group on SQL Server from async to sync. Replace:
-Use the following T-SQL script on SQL Server to change the replication mode of Distributed Availability Group on SQL Server from async to sync. Replace `<DAGName>` with the name of Distributed Availability Group, and replace `<AGName>` with the name of Availability Group created on SQL Server. In addition, replace `<ManagedInstanceName>` with the name of your SQL Managed Instance.
+- `<DAGName>` with the name of the distributed availability group.
+- `<AGName>` with the name of the availability group created on SQL Server.
+- `<ManagedInstanceName>` with the name of your managed instance.
```sql
+-- Run on SQL Server
+-- Sets the distributed availability group to a synchronous commit.
+-- ManagedInstanceName example: 'sqlmi1'
USE master
GO
ALTER AVAILABILITY GROUP [<DAGName>]
AVAILABILITY GROUP ON
(AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
```
-To validate change of the link replication, execute the following DMV, and expected results are shown below. They're indicating SYNCHRONOUS_COMIT state.
+To confirm that you've changed the link's replication mode successfully, use the following dynamic management view (DMV). The results indicate the `SYNCHRONOUS_COMMIT` state.
```sql
+-- Run on SQL Server
-- Verifies the state of the distributed availability group
SELECT
    ag.name, ag.is_distributed, ar.replica_server_name,
WHERE
    ag.is_distributed = 1
```
-With both SQL Managed Instance, and SQL Server being switched to Sync mode, the replication between the two entities is now synchronous. If you require to reverse this state, follow the same steps and set async state for both SQL Server and SQL Managed Instance.
+Now that you've switched both SQL Managed Instance and SQL Server to sync mode, the replication between the two entities is synchronous. If you need to reverse this state, follow the same steps and set the async state for both SQL Server and SQL Managed Instance.
-## Check LSN values on both SQL Server and Managed Instance
+## Check LSN values on both SQL Server and SQL Managed Instance
-To complete the migration, we need to ensure that the replication has completed. For this, you need to ensure that LSNs (Log Sequence Numbers) indicating the log records written for both SQL Server and SQL Managed Instance are the same. Initially, it's expected that SQL Server LSN will be higher than LSN number on SQL Managed Instance. The difference is caused by the fact that SQL Managed Instance might be lagging somewhat behind the primary SQL Server due to network latency. After some time, LSNs on SQL Managed Instance and SQL Server should match and stop changing, as the workload on SQL Server should be stopped.
+To complete the migration, confirm that replication has finished. For this, ensure that the log sequence numbers (LSNs) indicating the log records written for both SQL Server and SQL Managed Instance are the same.
-Use the following T-SQL query on SQL Server to read the LSN number of the last recorded transaction log. Replace `<DatabaseName>` with your database name and look for the last hardened LSN number, as shown below.
+Initially, it's expected that the SQL Server LSN will be higher than the SQL Managed Instance LSN. Network latency might cause SQL Managed Instance to lag somewhat behind the primary SQL Server instance. Because the workload has been stopped on SQL Server, you should expect the LSNs to match and stop changing after some time.
+
+Use the following T-SQL query on SQL Server to read the LSN of the last recorded transaction log. Replace `<DatabaseName>` with your database name and look for the last hardened LSN number.
```sql
+-- Run on SQL Server
+-- Obtain the last hardened LSN for the database on SQL Server.
SELECT ag.name AS [Replication group], db.name AS [Database name],
WHERE
    ag.is_distributed = 1 AND db.name = '<DatabaseName>'
```
-Use the following T-SQL query on SQL Managed Instance to read the LSN number of the last hardened LSN number for your database. Replace `<DatabaseName>` with your database name.
+Use the following T-SQL query on SQL Managed Instance to read the last hardened LSN for your database. Replace `<DatabaseName>` with your database name.
-Query shown below will work on General Purpose SQL Managed Instance. For Business Critical Managed Instance, you will need to uncomment `and drs.is_primary_replica = 1` at the end of the script. On Business Critical, this filter will make sure that only primary replica details are read.
+This query will work on a General Purpose managed instance. For a Business Critical managed instance, you need to uncomment `and drs.is_primary_replica = 1` at the end of the script. On Business Critical, this filter ensures that only primary replica details are read.
```sql
+-- Run on a managed instance
+-- Obtain the LSN for the database on SQL Managed Instance.
SELECT db.name AS [Database name], drs.database_id AS [Database ID],
FROM
    INNER JOIN sys.databases db ON db.database_id = drs.database_id
WHERE
    db.name = '<DatabaseName>'
- -- for BC add the following as well
+ -- for Business Critical, add the following as well
    -- AND drs.is_primary_replica = 1
```
-Verify once again that your workload is stopped on SQL Server. Check that LSNs on both SQL Server and SQL Managed Instance match, and that they remain matched and unchanged for some time. Stable LSN numbers on both ends indicate that tail log has been replicated to SQL Managed Instance and workload is effectively stopped. Proceed to the next step to initiate database failover and migration to Azure.
+Verify once again that your workload is stopped on SQL Server. Check that LSNs on both SQL Server and SQL Managed Instance match, and that they remain matched and unchanged for some time. Stable LSNs on both instances indicate that the tail log has been replicated to SQL Managed Instance and the workload is effectively stopped.
+
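As an optional aid (not part of the official steps), the following PowerShell sketch polls the last hardened LSN on both instances until the values match and stop changing. It assumes the `Invoke-Sqlcmd` cmdlet from the SqlServer module; the server names, credentials, query text, and 30-second interval are placeholders to adapt to your environment.

```powershell
# Hypothetical LSN stability check - placeholders throughout, adapt before use.
Import-Module SqlServer

$query = @"
SELECT drs.last_hardened_lsn AS LSN
FROM sys.dm_hadr_database_replica_states drs
    INNER JOIN sys.databases db ON db.database_id = drs.database_id
WHERE db.name = '<DatabaseName>'
"@

$previousLsn = $null
while ($true) {
    # Read the last hardened LSN on SQL Server and on SQL Managed Instance
    $serverLsn = (Invoke-Sqlcmd -ServerInstance "<SQLServerFQDN>" -Query $query).LSN
    $miLsn = (Invoke-Sqlcmd -ServerInstance "<SQLManagedInstanceFQDN>" -Username "<user>" -Password "<password>" -Query $query).LSN
    # Stop when the two values match and have not moved since the last poll
    if (($serverLsn -eq $miLsn) -and ($serverLsn -eq $previousLsn)) {
        echo "LSNs match and are stable: $serverLsn"
        break
    }
    $previousLsn = $serverLsn
    Start-Sleep -Seconds 30
}
```

A stable, matching value on both sides corresponds to the manual check described above; you can then proceed to the failover step.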
+## Start database failover and migration to Azure
-## Initiate database failover and migration to Azure
+Invoke a REST API call to fail over your database over the link and finalize your migration to Azure. The REST API call breaks the link and ends replication to SQL Managed Instance. The replicated database becomes read/write on the managed instance.
-SQL Managed Instance link database failover and migration to Azure is accomplished by invoking REST API call. This will close the link and complete the replication on SQL Managed Instance. Replicated database will become read-write on SQL Managed Instance.
+Use the following API to start database failover to Azure. Replace:
-Use the following API to initiate database failover to Azure. Replace `<YourSubscriptionID>` with your actual Azure subscription ID. Replace `<RG>` with the resource group where your SQL Managed Instance is deployed and replace `<ManagedInstanceName>` with the name of our SQL Managed Instance. In addition, replace `<DAGName>` with the name of Distributed Availability Group made on SQL Server.
+- `<YourSubscriptionID>` with your Azure subscription ID.
+- `<RG>` with the resource group where your managed instance is deployed.
+- `<ManagedInstanceName>` with the name of your managed instance.
+- `<DAGName>` with the name of the distributed availability group made on SQL Server.
```powershell
-# Execute in Azure Cloud Shell
+# Run in Azure Cloud Shell
# ====================================================================================
-# POWERSHELL SCRIPT TO FAILOVER AND MIGRATE DATABASE WITH SQL MANAGED INSTANCE LINK
+# POWERSHELL SCRIPT TO FAIL OVER AND MIGRATE DATABASE WITH SQL MANAGED INSTANCE LINK
# USER CONFIGURABLE VALUES
# (C) 2021-2022 SQL Managed Instance product group
# ====================================================================================
-# Enter your Azure Subscription ID
+# Enter your Azure subscription ID
$SubscriptionID = "<SubscriptionID>"
-# Enter your Managed Instance name - example "sqlmi1"
+# Enter your managed instance name - for example, "sqlmi1"
$ManagedInstanceName = "<ManagedInstanceName>"
-# Enter the Distributed Availability Group link name
+# Enter the distributed availability group link name
$DAGName = "<DAGName>"
# ====================================================================================
# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE.
# ====================================================================================
-# Log in and select subscription if needed
+# Log in and select a subscription if needed
if ((Get-AzContext ) -eq $null) {
    echo "Logging to Azure subscription"
    Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID
-# Build URI for the API call
+# Build a URI for the API call
#
$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG + "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
echo $uriFull
-# Get auth token and build the header
+# Get an authentication token and build the header
#
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$currentAzureContext = Get-AzContext
$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
$token = $profileClient.AcquireAccessToken($currentAzureContext.Subscription.TenantId)
$authToken = $token.AccessToken
$headers = @{}
$headers.Add("Authorization", "Bearer "+"$authToken")
-# Invoke API call
+# Invoke the API call
#
Invoke-WebRequest -Method DELETE -Headers $headers -Uri $uriFull -ContentType "application/json"
```
-## Cleanup Availability Group and Distributed Availability Group on SQL Server
+## Clean up availability groups
-After breaking the link and migrating database to Azure SQL Managed Instance, consider cleaning up Availability Group and Distributed Availability Group on SQL Server if they aren't used otherwise on SQL Server.
-Replace `<DAGName>` with the name of the Distributed Availability Group on SQL Server and replace `<AGName>` with Availability Group name on the SQL Server.
+After you break the link and migrate a database to Azure SQL Managed Instance, consider cleaning up the availability group and distributed availability group resources from SQL Server if they're no longer necessary.
+
+In the following code, replace:
+
+- `<DAGName>` with the name of the distributed availability group on SQL Server.
+- `<AGName>` with the name of the availability group on SQL Server.
```sql
+-- Run on SQL Server
+USE MASTER
+GO
DROP AVAILABILITY GROUP <DAGName>
GO
DROP AVAILABILITY GROUP <AGName>
GO
```
-With this step, the migration of the database from SQL Server to Managed Instance has been completed.
+With this step, you've finished the migration of the database from SQL Server to SQL Managed Instance.
## Next steps

For more information on the link feature, see the following resources:

-- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
-- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
-- [Use SQL Managed Instance link with scripts to replicate database](./managed-instance-link-use-scripts-to-replicate-database.md).
-- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
-- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
+- [Prepare your environment for Managed Instance link](./managed-instance-link-preparation.md)
+- [Use a Managed Instance link with scripts to replicate a database](./managed-instance-link-use-scripts-to-replicate-database.md)
+- [Use a Managed Instance link via SSMS to replicate a database](./managed-instance-link-use-ssms-to-replicate-database.md)
+- [Use a Managed Instance link via SSMS to migrate a database](./managed-instance-link-use-ssms-to-failover-database.md)
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
Title: Replicate database with link feature with T-SQL and PowerShell scripts
+ Title: Replicate a database with the link via T-SQL & PowerShell scripts
-description: This guide teaches you how to use the SQL Managed Instance link with scripts to replicate database from SQL Server to Azure SQL Managed Instance.
+description: Learn how to use a Managed Instance link with T-SQL and PowerShell scripts to replicate a database from SQL Server to Azure SQL Managed Instance.
Last updated 03/22/2022
-# Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
+# Replicate a database with the link feature via T-SQL and PowerShell scripts - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to use scripts, T-SQL and PowerShell, to set up [Managed Instance link feature](link-feature.md) to replicate your database from SQL Server to Azure SQL Managed Instance.
-
-Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
-
-> [!NOTE]
-> The link feature for Azure SQL Managed Instance is currently in preview.
+This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts to replicate your database from SQL Server to Azure SQL Managed Instance by using a [Managed Instance link](link-feature.md).
> [!NOTE]
-> Configuration on Azure side is done with PowerShell that calls SQL Managed Instance REST API. Support for Azure PowerShell and CLI will be released in the upcomming weeks. At that point this article will be updated with the simplified PowerShell scripts.
+> - The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a [SQL Server Management Studio (SSMS) wizard](managed-instance-link-use-ssms-to-replicate-database.md) to set up the link to replicate your database.
+> - The PowerShell scripts in this article call SQL Managed Instance REST APIs.
-> [!TIP]
-> SQL Managed Instance link database replication can be set up with [SSMS wizard](managed-instance-link-use-ssms-to-replicate-database.md).
## Prerequisites
-To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
+To replicate your databases to SQL Managed Instance, you need the following prerequisites:
- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
-- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
-- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have it.
+- [SQL Server Management Studio v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
- A properly [prepared environment](managed-instance-link-preparation.md).
-## Replicate database
+## Replicate a database
-Use instructions below to manually setup the link between your instance of SQL Server and your instance of SQL Managed Instance. Once the link is created, your source database gets a read-only replica copy on your target Azure SQL Managed Instance.
+Use the following instructions to manually set up the link between your SQL Server instance and managed instance. After the link is created, your source database gets a read-only replica copy on your target managed instance.
> [!NOTE]
-> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in master or msdb databases), we recommend to script them out and run T-SQL scripts on the destination instance.
+> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL scripts on the destination instance.
## Terminology and naming conventions
-In executing scripts from this user guide, it's important not to mistaken, for example, SQL Server, or Managed Instance name, with their fully qualified domain names.
-The following table is explaining what different names exactly represent, and how to obtain their values.
+As you run scripts from this user guide, it's important not to mistake SQL Server and SQL Managed Instance names for their fully qualified domain names (FQDNs). The following table explains what the various names exactly represent and how to obtain their values:
| Terminology | Description | How to find out | | :-| :- | :- |
| SQL Server name | Also referred to as a short SQL Server name. For example: **"sqlserver1"**. This isn't a fully qualified domain name. | Execute **"SELECT @@SERVERNAME"** from T-SQL |
-| SQL Server FQDN | Fully qualified domain name of your SQL Server. For example: **"sqlserver1.domain.com"**. | From your network (DNS) configuration on-prem, or Server name if using Azure VM. |
-| Managed Instance name | Also referred to as a short Managed Instance name. For example: **"managedinstance1"**. | See the name of your Managed Instance in Azure portal. |
-| SQL Managed Instance FQDN | Fully qualified domain name of your SQL Managed Instance name. For example: **"managedinstance1.6d710bcf372b.database.windows.net"**. | See the Host name at SQL Managed Instance overview page in Azure portal. |
-| Resolvable domain name | DNS name that could be resolved to an IP address. For example, executing **"nslookup sqlserver1.domain.com"** should return an IP address, for example 10.0.1.100. | Use nslookup from the command prompt. |
+| SQL Server name | Also called a short SQL Server name. For example: *sqlserver1*. This isn't a fully qualified domain name. | Run `SELECT @@SERVERNAME` from T-SQL. |
+| SQL Server FQDN | Fully qualified domain name of your SQL Server instance. For example: *sqlserver1.domain.com*. | See your network (DNS) configuration on-premises, or the server name if you're using an Azure virtual machine (VM). |
+| SQL Managed Instance name | Also called a short SQL Managed Instance name. For example: *managedinstance1*. | See the name of your managed instance in the Azure portal. |
+| SQL Managed Instance FQDN | Fully qualified domain name of your SQL Managed Instance name. For example: *managedinstance1.6d710bcf372b.database.windows.net*. | See the host name on the SQL Managed Instance overview page in the Azure portal. |
+| Resolvable domain name | DNS name that can be resolved to an IP address. For example, running *nslookup sqlserver1.domain.com* should return an IP address such as 10.0.1.100. | Use nslookup from the command prompt. |
-## Trust between SQL Server and SQL Managed Instance
+## Establish trust between instances
-This first step in creating SQL Managed Instance link is establishing the trust between the two entities and secure the endpoints used for communication and encryption of data across the network. Distributed Availability Groups technology in SQL Server doesn't have its own database mirroring endpoint, but it rather uses the existing Availability Group database mirroring endpoint. This is why the security and trust between the two entities needs to be configured for the Availability Group database mirroring endpoint.
+The first step in setting up a link is to establish trust between the two instances and secure the endpoints that are used to communicate and encrypt data across the network. Distributed availability groups use the existing availability group database mirroring endpoint, rather than having their own dedicated endpoint. This is why security and trust need to be configured between the two entities through the availability group database mirroring endpoint.
-Certificates-based trust is the only supported way to secure database mirroring endpoints on SQL Server and SQL Managed Instance. In case you've existing Availability Groups that are using Windows Authentication, certificate based trust needs to be added to the existing mirroring endpoint as a secondary authentication option. This can be done by using ALTER ENDPOINT statement.
+Certificate-based trust is the only supported way to secure database mirroring endpoints on SQL Server and SQL Managed Instance. If you have existing availability groups that use Windows authentication, you need to add certificate-based trust to the existing mirroring endpoint as a secondary authentication option. You can do this by using the `ALTER ENDPOINT` statement.
> [!IMPORTANT]
-> Certificates are generated with an expiry date and time, and they need to be rotated before they expire.
+> Certificates are generated with an expiration date and time. They must be rotated before they expire.
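
To keep an eye on rotation deadlines, you could list the certificates and their expiration dates from `sys.certificates`. This is a hypothetical check, not part of the official steps; the server name is a placeholder, and `Invoke-Sqlcmd` comes from the SqlServer module.

```powershell
# Hypothetical rotation check: list certificates and when they expire.
# "sqlserver1" is a placeholder server name.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "sqlserver1" -Database "master" `
    -Query "SELECT name, expiry_date FROM sys.certificates ORDER BY expiry_date"
```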
-Here's the overview of the process to secure database mirroring endpoints for both SQL Server and SQL Managed Instance:
-- Generate certificate on SQL Server and obtain its public key.
-- Obtain public key of SQL Managed Instance certificate.
-- Exchange the public keys between the SQL Server and SQL Managed Instance.
+Here's an overview of the process to secure database mirroring endpoints for both SQL Server and SQL Managed Instance:
-The following section discloses steps to complete these actions.
+1. Generate a certificate on SQL Server and obtain its public key.
+1. Obtain a public key of the SQL Managed Instance certificate.
+1. Exchange the public keys between SQL Server and SQL Managed Instance.
-## Create certificate on SQL Server and import its public key to Managed Instance
+The following sections describe these steps in detail.
-First, create master key on SQL Server and generate authentication certificate.
+### Create a certificate on SQL Server and import its public key to SQL Managed Instance
+
+First, create a master key on SQL Server and generate an authentication certificate:
```sql
+-- Run on SQL Server
+-- Create a master key encryption password
+-- Keep the password confidential and in a secure place
USE MASTER
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
GO
EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
GO ```
-Then, use the following T-SQL query on SQL Server to verify the certificate has been created.
+Then, use the following T-SQL query on SQL Server to verify that the certificate has been created:
```sql
+-- Run on SQL Server
USE MASTER
GO
SELECT * FROM sys.certificates
```
-In the query results you'll find the certificate and will see that it has been encrypted with the master key.
+In the query results, you'll see that the certificate has been encrypted with the master key.
-Now you can get the public key of the generated certificate on SQL Server.
+Now you can get the public key of the generated certificate on SQL Server:
```sql
+-- Run on SQL Server
-- Show the public key of the generated SQL Server certificate
USE MASTER
GO
DECLARE @PUBLICKEYENC VARBINARY(MAX) = CERTENCODED(CERT_ID(@sqlserver_certificat
SELECT @PUBLICKEYENC AS PublicKeyEncoded; ```
-Save the value of PublicKeyEncoded from the output, as it will be needed for the next step.
+Save the value of `PublicKeyEncoded` from the output, because you'll need it for the next step.
-Next step should be executed in PowerShell, with installed Az.Sql module, version 3.5.1 or higher, or use Azure Cloud Shell online to run the commands as it's always updated wit the latest module versions.
+For the next step, use PowerShell with the installed [Az.Sql module](https://www.powershellgallery.com/packages/Az.Sql/3.7.1), version 3.5.1 or later. Or use Azure Cloud Shell online to run the commands, because it's always updated with the latest module versions.
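For example, one way to verify the module version before you start is the following sketch. It's a hypothetical check that assumes the module was installed through PowerShellGet; adjust the scope and version to your environment.

```powershell
# Ensure Az.Sql 3.5.1 or later is available; install it for the current user if not.
$module = Get-InstalledModule -Name Az.Sql -ErrorAction SilentlyContinue
if (-not $module -or $module.Version -lt [Version]"3.5.1") {
    Install-Module -Name Az.Sql -MinimumVersion 3.5.1 -Scope CurrentUser -Force
}
```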
-Execute the following PowerShell script in Azure Cloud Shell (fill out necessary user information, copy, paste into Azure Cloud Shell and execute).
-Replace `<SubscriptionID>` with your Azure Subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<PublicKeyEncoded>` below with the public portion of the SQL Server certificate in binary format generated in the previous step. That will be a long string value starting with 0x, that you've obtained from SQL Server.
+Run the following PowerShell script. (If you use Cloud Shell, fill out necessary user information, copy it, paste it into Cloud Shell, and then run the script.) Replace:
+
+- `<SubscriptionID>` with your Azure subscription ID.
+- `<ManagedInstanceName>` with the short name of your managed instance.
+- `<PublicKeyEncoded>` with the public portion of the SQL Server certificate in binary format, which you generated in the previous step. It's a long string value that starts with `0x`.
```powershell
-# Execute in Azure Cloud Shell
+# Run in Azure Cloud Shell
# ===============================================================================
# POWERSHELL SCRIPT TO IMPORT SQL SERVER CERTIFICATE TO MANAGED INSTANCE
# USER CONFIGURABLE VALUES
# (C) 2021-2022 SQL Managed Instance product group
# ===============================================================================
-# Enter your Azure Subscription ID
+# Enter your Azure subscription ID
$SubscriptionID = "<YourSubscriptionID>"
-# Enter your Managed Instance name - example "sqlmi1"
+# Enter your managed instance name - for example, "sqlmi1"
$ManagedInstanceName = "<YourManagedInstanceName>"
-# Enter name for the server trust certificate - example "Cert_sqlserver1_endpoint"
+# Enter the name for the server trust certificate - for example, "Cert_sqlserver1_endpoint"
$certificateName = "<YourServerTrustCertificateName>"
-# Insert the cert public key blob you got from the SQL Server - example "0x1234567..."
+# Insert the certificate public key blob that you got from SQL Server - for example, "0x1234567..."
+ $PublicKeyEncoded = "<PublicKeyEncoded>"
# ===============================================================================
# INVOKING THE API CALL -- REST OF THE SCRIPT IS NOT USER CONFIGURABLE
# ===============================================================================
-# Log in and select Subscription if needed.
+# Log in and select a subscription if needed.
#
if ((Get-AzContext ) -eq $null) {
    Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID
-# Build URI for the API call.
+# Build the URI for the API call.
#
$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG + "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/serverTrustCertificates/" + $certificateName + "?api-version=2021-08-01-preview"
echo $uriFull
-# Build API request body.
+# Build the API request body.
#
$bodyFull = "{ `"properties`":{ `"PublicBlob`":`"$PublicKeyEncoded`" } }"
$headers.Add("Authorization", "Bearer "+"$authToken")
Invoke-WebRequest -Method PUT -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
```
-The result of this operation will be time stamp of the successful upload of the SQL Server certificate private key to Managed Instance.
+The result of this operation will be a time stamp of the successful upload of the SQL Server certificate private key to SQL Managed Instance.
-## Get the Managed Instance public certificate public key and import it to SQL Server
+### Get the certificate public key from SQL Managed Instance and import it to SQL Server
-Certificate for securing the endpoint for SQL Managed Instance link is automatically generated. This section describes how to get the SQL Managed Instance certificate public key, and how import is to SQL Server.
+The certificate for securing the endpoint for a link is automatically generated. This section describes how to get the certificate public key from SQL Managed Instance, and how to import it to SQL Server.
-Use SSMS to connect to the SQL Managed Instance and execute stored procedure [sp_get_endpoint_certificate](/sql/relational-databases/system-stored-procedures/sp-get-endpoint-certificate-transact-sql) to get the certificate public key.
+Use SSMS to connect to SQL Managed Instance. Run the stored procedure [sp_get_endpoint_certificate](/sql/relational-databases/system-stored-procedures/sp-get-endpoint-certificate-transact-sql) to get the certificate public key:
```sql
+-- Run on a managed instance
EXEC sp_get_endpoint_certificate @endpoint_type = 4
```
-Copy the entire public key from Managed Instance starting with "0x" shown in the previous step and use it in the below query on SQL Server by replacing `<InstanceCertificate>` with the key value. No quotations need to be used.
+Copy the entire public key (which starts with `0x`) from SQL Managed Instance. Run the following query on SQL Server by replacing `<InstanceCertificate>` with the key value. You don't need to use quotation marks.
> [!IMPORTANT]
-> Name of the certificate must be SQL Managed Instance FQDN.
+> The name of the certificate must be the SQL Managed Instance FQDN.
```sql
+-- Run on SQL Server
USE MASTER
CREATE CERTIFICATE [<SQLManagedInstanceFQDN>]
FROM BINARY = <InstanceCertificate>
```
-Finally, verify all created certificates by viewing the following DMV.
+Finally, verify all created certificates by using the following dynamic management view (DMV):
```sql
+-- Run on SQL Server
SELECT * FROM sys.certificates
```
-## Mirroring endpoint on SQL Server
+## Create a mirroring endpoint on SQL Server
+
+If you don't have an existing availability group or a mirroring endpoint on SQL Server, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have an existing availability group or mirroring endpoint, go straight to the next section, [Alter an existing endpoint](#alter-an-existing-endpoint).
-If you don't have existing Availability Group nor mirroring endpoint on SQL Server, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have existing Availability Group or mirroring endpoint, go straight to the next section "Altering existing database mirroring endpoint"
-To verify that you don't have an existing database mirroring endpoint created, use the following script.
+To verify that you don't have an existing database mirroring endpoint created, use the following script:
```sql
+-- Run on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT * FROM sys.database_mirroring_endpoints WHERE type_desc = 'DATABASE_MIRRORING'
```
-In case that the above query doesn't show there exists a previous database mirroring endpoint, execute the following script on SQL Server to create a new database mirroring endpoint on the port 5022 and secure it with a certificate.
+If the preceding query doesn't show an existing database mirroring endpoint, run the following script on SQL Server. It creates a new database mirroring endpoint on port 5022 and secures the endpoint with a certificate.
```sql Execute on SQL Server
+-- Run on SQL Server
+-- Create a connection endpoint listener on SQL Server
USE MASTER
CREATE ENDPOINT database_mirroring_endpoint
    STATE = STARTED
GO
```
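The diff above elides the body of the `CREATE ENDPOINT` statement. As a sketch, a complete definition for this scenario typically resembles the following, using the port and certificate name described earlier in this article (adjust for your environment):

```sql
-- Run on SQL Server
-- Sketch: listen on port 5022 and require certificate authentication
-- with AES encryption, as the link requires
USE MASTER
CREATE ENDPOINT database_mirroring_endpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = CERTIFICATE [<CERTIFICATE-NAME>],
        ENCRYPTION = REQUIRED ALGORITHM AES
    )
GO
```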
-Validate that the mirroring endpoint was created by executing the following on SQL Server.
+Validate that the mirroring endpoint was created by running the following script on SQL Server:
```sql Execute on SQL Server
+-- Run on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT
    name, type_desc, state_desc, role_desc
FROM
    sys.database_mirroring_endpoints
```
-New mirroring endpoint was created with CERTIFICATE authentication, and AES encryption enabled.
+A new mirroring endpoint was created with certificate authentication and AES encryption enabled.
-### Altering existing database mirroring endpoint
+### Alter an existing endpoint
> [!NOTE]
-> Skip this step if you've just created a new mirroring endpoint. Use this step only if using existing Availability Groups with existing database mirroring endpoint.
+> Skip this step if you've just created a new mirroring endpoint. Use this step only if you're using existing availability groups with an existing database mirroring endpoint.
-In case existing Availability Groups are used for SQL Managed Instance link, or in case there's an existing database mirroring endpoint, first validate it satisfies the following mandatory conditions for SQL Managed Instance Link:
-- Type must be "DATABASE_MIRRORING".-- Connection authentication must be "CERTIFICATE".
+If you're using existing availability groups for the link, or if there's an existing database mirroring endpoint, first validate that it satisfies the following mandatory conditions for the link:
+
+- Type must be `DATABASE_MIRRORING`.
+- Connection authentication must be `CERTIFICATE`.
- Encryption must be enabled.-- Encryption algorithm must be "AES".
+- Encryption algorithm must be `AES`.
-Execute the following query on SQL Server to view details for an existing database mirroring endpoint.
+Run the following query on SQL Server to view details for an existing database mirroring endpoint:
```sql Execute on SQL Server
+-- Run on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT
    name, type_desc, state_desc, role_desc,
    connection_auth_desc, encryption_algorithm_desc
FROM
    sys.database_mirroring_endpoints
```
-In case that the output shows that the existing DATABASE_MIRRORING endpoint connection_auth_desc isn't "CERTIFICATE", or encryption_algorthm_desc isn't "AES", the **endpoint needs to be altered to meet the requirements**.
+If the output shows that the existing `DATABASE_MIRRORING` endpoint's `connection_auth_desc` isn't `CERTIFICATE`, or `encryption_algorithm_desc` isn't `AES`, the *endpoint needs to be altered to meet the requirements*.
+
+On SQL Server, the same database mirroring endpoint is used for both availability groups and distributed availability groups. If your `connection_auth_desc` endpoint is `NTLM` (Windows authentication) or `KERBEROS`, and you need Windows authentication for an existing availability group, it's possible to alter the endpoint to use multiple authentication methods by switching the authentication option to `NEGOTIATE CERTIFICATE`. This change will allow the existing availability group to use Windows authentication, while using certificate authentication for SQL Managed Instance.
-On SQL Server, one database mirroring endpoint is used for both Availability Groups and Distributed Availability Groups. In case your connection_auth_desc is NTLM (Windows authentication) or KERBEROS, and you need Windows authentication for an existing Availability Groups, it's possible to alter the endpoint to use multiple authentication methods by switching the auth option to NEGOTIATE CERTIFICATE. This will allow the existing AG to use Windows authentication, while using certificate authentication for SQL Managed Instance. See details of possible options at documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to use both algorithms. For details about possible options for altering endpoints, see the [documentation page for sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
-Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to use both algorithms. See details of possible options at documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+The following script is an example of how to alter your existing database mirroring endpoint on SQL Server. Replace:
-The script below is provided as an example of how to alter your existing database mirroring endpoint on SQL Server. Depending on your existing specific configuration, you perhaps might need to customize it further for your scenario. Replace `<YourExistingEndpointName>` with your existing endpoint name. Replace `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on the SQL Server.
+- `<YourExistingEndpointName>` with your existing endpoint name.
+- `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate.
+
+Depending on your specific configuration, you might need to customize the script further. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on SQL Server.
```sql Execute on SQL Server
+-- Run on SQL Server
-- Alter the existing database mirroring endpoint to use CERTIFICATE for authentication and AES for encryption
USE MASTER
ALTER ENDPOINT <YourExistingEndpointName>
GO
```
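The body of the `ALTER ENDPOINT` statement is elided in the diff. As a sketch, switching an NTLM endpoint to dual authentication while keeping Windows authentication preferred might look like the following; the `WINDOWS NEGOTIATE CERTIFICATE` option is documented for `sys.database_mirroring_endpoints`, and the placeholders are the ones described above:

```sql
-- Run on SQL Server
-- Sketch: allow both Windows and certificate authentication, with AES encryption
USE MASTER
ALTER ENDPOINT <YourExistingEndpointName>
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE [<CERTIFICATE-NAME>],
        ENCRYPTION = REQUIRED ALGORITHM AES
    )
GO
```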
-After running the ALTER endpoint query and setting the dual authentication mode to Windows and Certificate, use again this query on SQL Server to show the database mirroring endpoint details.
+After you run the `ALTER` endpoint query and set the dual authentication mode to Windows and certificate, use this query again on SQL Server to show details for the database mirroring endpoint:
```sql Execute on SQL Server
+-- Run on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT
    name, type_desc, state_desc, role_desc,
    connection_auth_desc, encryption_algorithm_desc
FROM
    sys.database_mirroring_endpoints
```
-With this you've successfully modified your database mirroring endpoint for SQL Managed Instance link.
+You've successfully modified your database mirroring endpoint for a SQL Managed Instance link.
+
+## Create an availability group on SQL Server
-## Availability Group on SQL Server
+If you don't have an existing availability group, the next step is to create one on SQL Server. Create an availability group with the following parameters for a link:
-If you don't have existing AG the next step is to create an AG on SQL Server. If you do have existing AG go straight to the next section "Use existing Availability Group (AG) on SQL Server". A new AG needs to be created with the following parameters for Managed Instance link:
-- Specify SQL Server name-- Specify database name-- Failover mode MANUAL-- Seeding mode AUTOMATIC
+- SQL Server name
+- Database name
+- A failover mode of `MANUAL`
+- A seeding mode of `AUTOMATIC`
-Use the following script to create a new Availability Group on SQL Server. Replace `<SQLServerName>` with the name of your SQL Server. Find out your SQL Server name with executing the following T-SQL:
+First, find out your SQL Server name by running the following T-SQL statement:
```sql Execute on SQL Server
+-- Run on SQL Server
SELECT @@SERVERNAME AS SQLServerName
```
-Replace `<AGName>` with the name of your availability group. For multiple databases you'll need to create multiple Availability Groups. Managed Instance link requires one database per AG. In this respect, consider naming each AG so that its name reflects the corresponding database - for example `AG_<db_name>`. Replace `<DatabaseName>` with the name of database you wish to replicate. Replace `<SQLServerIP>` with SQL Server's IP address. Alternatively, resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from SQL Managed Instance virtual network.
+Then, use the following script to create the availability group on SQL Server. Replace:
+
+- `<SQLServerName>` with the name of your SQL Server instance.
+- `<AGName>` with the name of your availability group. For multiple databases, you'll need to create multiple availability groups. A Managed Instance link requires one database per availability group. Consider naming each availability group so that its name reflects the corresponding database - for example, `AG_<db_name>`.
+
+ > [!NOTE]
+ > The link feature supports one database per link. To replicate multiple databases on an instance, create a link for each individual database. For example, to replicate 10 databases to SQL Managed Instance, create 10 individual links.
+- `<DatabaseName>` with the name of the database that you want to replicate.
+- `<SQLServerIP>` with the SQL Server IP address. You can use a resolvable SQL Server host machine name as an alternative, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual network.
```sql Execute on SQL Server
+-- Run on SQL Server
+-- Create the primary availability group on SQL Server
USE MASTER
CREATE AVAILABILITY GROUP [<AGName>]
    WITH (CLUSTER_TYPE = NONE)
GO
```
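The replica definition is elided in the diff. A complete statement matching the parameters listed above (manual failover, automatic seeding) typically resembles the following sketch:

```sql
-- Run on SQL Server
-- Sketch: availability group with one SQL Server replica, manual failover,
-- and automatic seeding, as the link requires
USE MASTER
CREATE AVAILABILITY GROUP [<AGName>]
    WITH (CLUSTER_TYPE = NONE)
    FOR DATABASE [<DatabaseName>]
    REPLICA ON N'<SQLServerName>'
        WITH (
            ENDPOINT_URL = N'tcp://<SQLServerIP>:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        )
GO
```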
-> [!NOTE]
-> One database per single Availability Group is the current product limitation for replication to SQL Managed Instance using the link feature.
-> If you get the Error 1475 you'll have to create a full backup without COPY ONLY option, that will start new backup chain.
-> As the best practice it's highly recommended that collation on SQL Server and SQL Managed Instance is the same. This is because depending on collation settings, AG and DAG names could, or could not be case sensitive. If there's a mismatch with this, there could be issues in ability to successfully connect SQL Server to Managed Instance.
+Consider the following:
-Replace `<DAGName>` with the name of your distributed availability group. When replicating several databases, one availability group and one distributed availability groups is needed for each database so consider naming each item accordingly - for example `DAG_<db_name>`. Replace `<AGName>` with the name of availability group created in the previous step. Replace `<SQLServerIP>` with the IP address of SQL Server from the previous step. Alternatively, resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from SQL Managed Instance virtual network. Replace `<ManagedInstanceName>` with the short name of your SQL Managed Instance. Replace `<ManagedInstnaceFQDN>` with a fully qualified domain name of SQL Managed Instance.
+- The link currently supports replicating one database per availability group. You can replicate multiple databases to SQL Managed Instance by setting up multiple links.
+- Collation between SQL Server and SQL Managed Instance should be the same. A mismatch in collation could cause a mismatch in server name casing and prevent a successful connection from SQL Server to SQL Managed Instance.
+- Error 1475 indicates that you need to start a new backup chain by creating a full backup without the `COPY ONLY` option.
+
+In the following code, replace:
+
+- `<DAGName>` with the name of your distributed availability group. When you're replicating several databases, you need one availability group and one distributed availability group for each database. Consider naming each item accordingly - for example, `DAG_<db_name>`.
+- `<AGName>` with the name of the availability group that you created in the previous step.
+- `<SQLServerIP>` with the IP address of SQL Server from the previous step. You can use a resolvable SQL Server host machine name as an alternative, but make sure that the name is resolvable from the SQL Managed Instance virtual network.
+- `<ManagedInstanceName>` with the short name of your managed instance.
+- `<ManagedInstanceFQDN>` with the fully qualified domain name of your managed instance.
```sql Execute on SQL Server
+-- Run on SQL Server
+-- Create a distributed availability group for the availability group and database
+-- ManagedInstanceName example: 'sqlmi1'
+-- ManagedInstanceFQDN example: 'sqlmi1.73d19f36a420a.database.windows.net'
USE MASTER
CREATE AVAILABILITY GROUP [<DAGName>]
    WITH (DISTRIBUTED)
GO
```
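Again, the body of the statement is elided in the diff. A complete distributed availability group definition for the link typically resembles the following sketch, with the placeholders replaced as described above:

```sql
-- Run on SQL Server
-- Sketch: distributed availability group joining the SQL Server availability
-- group to the managed instance
USE MASTER
CREATE AVAILABILITY GROUP [<DAGName>]
    WITH (DISTRIBUTED)
    AVAILABILITY GROUP ON
        N'<AGName>' WITH (
            LISTENER_URL = 'tcp://<SQLServerIP>:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        ),
        N'<ManagedInstanceName>' WITH (
            LISTENER_URL = 'tcp://<ManagedInstanceFQDN>:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        )
GO
```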
-### Verify AG and distributed AG
+### Verify availability groups
-Use the following script to list all available Availability Groups and Distributed Availability Groups on the SQL Server. Availability Group state needs to be connected, and Distributed Availability Group state disconnected at this point. Distributed Availability Group state will move to `connected` only when it has been joined with SQL Managed Instance. This will be explained in one of the next steps.
+Use the following script to list all availability groups and distributed availability groups on the SQL Server instance. At this point, the state of your availability group needs to be `connected`, and the state of your distributed availability groups needs to be `disconnected`. The state of the distributed availability group will move to `connected` only when it has been joined with SQL Managed Instance.
```sql Execute on SQL Server
+-- Run on SQL Server
+-- This will show that the availability group and distributed availability group have been created on SQL Server.
SELECT * FROM sys.availability_groups
```
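To tell the regular and distributed availability groups apart in the output, the `is_distributed` column can help (`is_distributed` is available in SQL Server 2016 and later; `cluster_type_desc` requires SQL Server 2017 or later):

```sql
-- Run on SQL Server
-- Distinguish distributed availability groups from regular ones
SELECT name, is_distributed, cluster_type_desc
FROM sys.availability_groups
```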
-Alternatively, in SSMS object explorer, expand the "Always On High Availability", then "Availability Groups" folder to show available Availability Groups and Distributed Availability Groups.
+Alternatively, you can use SSMS Object Explorer to find availability groups and distributed availability groups. Expand the **Always On High Availability** folder and then the **Availability Groups** folder.
-## Creating SQL Managed Instance link
+## Create a link
-The final step of the setup process is to create the SQL Managed Instance link. To accomplish this, a REST API call will be made. Invoking direct API calls will be replaced with PowerShell and CLI clients, which will be delivered in one of our next releases.
+The final step of the setup process is to create the link. At this time, you accomplish this by making a REST API call.
-Invoking direct API call to Azure can be accomplished with various API clients. However, for simplicity of the process, execute the below PowerShell script from Azure Cloud Shell.
+You can invoke direct API calls to Azure by using various API clients. For simplicity of the process, sign in to the Azure portal and run the following PowerShell script from Azure Cloud Shell. Replace:
-Log in to Azure portal and execute the below PowerShell scripts in Azure Cloud Shell. Make the following replacements with the actual values in the script: Replace `<SubscriptionID>` with your Azure Subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<AGName>` with the name of Availability Group created on SQL Server. Replace `<DAGName>` with the name of Distributed Availability Group create on SQL Server. Replace `<DatabaseName>` with the database replicated in Availability Group on SQL Server. Replace `<SQLServerAddress>` with the address of the SQL Server. This can be a DNS name, or public IP or even private IP address, as long as the address provided can be resolved from the backend node hosting the SQL Managed Instance.
+- `<SubscriptionID>` with your Azure subscription ID.
+- `<ManagedInstanceName>` with the short name of your managed instance.
+- `<AGName>` with the name of the availability group created on SQL Server.
+- `<DAGName>` with the name of the distributed availability group created on SQL Server.
+- `<DatabaseName>` with the database replicated in the availability group on SQL Server.
+- `<SQLServerAddress>` with the address of the SQL Server instance. This can be a DNS name, a public IP address, or even a private IP address. The provided address must be resolvable from the back-end node that hosts the managed instance.
```powershell
-# Execute in Azure Cloud Shell
+# Run in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
# USER CONFIGURABLE VALUES
# (C) 2021-2022 SQL Managed Instance product group
# =============================================================================
-# Enter your Azure Subscription ID
+# Enter your Azure subscription ID
$SubscriptionID = "<SubscriptionID>"
-# Enter your Managed Instance name - example "sqlmi1"
+# Enter your managed instance name - for example, "sqlmi1"
$ManagedInstanceName = "<ManagedInstanceName>"
-# Enter Availability Group name that was created on the SQL Server
+# Enter the availability group name that was created on SQL Server
$AGName = "<AGName>"
-# Enter Distributed Availability Group name that was created on SQL Server
+# Enter the distributed availability group name that was created on SQL Server
$DAGName = "<DAGName>"
-# Enter database name that was placed in Availability Group for replciation
+# Enter the database name that was placed in the availability group for replication
$DatabaseName = "<DatabaseName>"
-# Enter SQL Server address
+# Enter the SQL Server address
$SQLServerAddress = "<SQLServerAddress>"
# =============================================================================
# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
# =============================================================================
-# Log in to subscription if needed
+# Log in to the subscription if needed
if ((Get-AzContext ) -eq $null) { echo "Logging to Azure subscription"
if ((Get-AzContext ) -eq $null)
} Select-AzSubscription -SubscriptionName $SubscriptionID # --
-# Build URI for the API call
+# Build the URI for the API call
# --
echo "Building API URI"
$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG + "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
echo $uriFull
# --
-# Build API request body
+# Build the API request body
# --
echo "Building API request body"
$bodyFull = @"
"@
echo $bodyFull
# --
-# Get auth token and build the header
+# Get the authentication token and build the header
# --
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$currentAzureContext = Get-AzContext
$authToken = $token.AccessToken
$headers = @{}
$headers.Add("Authorization", "Bearer "+"$authToken")
# --
-# Invoke API call
+# Invoke the API call
# --
echo "Invoking API call to have Managed Instance join DAG on SQL Server"
$response = Invoke-WebRequest -Method PUT -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
echo $response
```
-The result of this operation will be the time stamp of the successful execution of request for Managed Instance link creation.
+The result of this operation will be a time stamp of the successful execution of the request to create a link.
-## Verifying created SQL Managed Instance link
+## Verify the link
-To verify that connection has been made between SQL Managed Instance and SQL Server, execute the following query on SQL Server. Have in mind that connection will not be instantaneous upon executing the API call. It can take up to a minute for the DMV to start showing a successful connection. Keep refreshing the DMV until connection is shown as CONNECTED for SQL Managed Instance replica.
+To verify that connection has been made between SQL Managed Instance and SQL Server, run the following query on SQL Server. The connection will not be instantaneous after you make the API call. It can take up to a minute for the DMV to start showing a successful connection. Keep refreshing the DMV until the connection appears as `CONNECTED` for the SQL Managed Instance replica.
```sql Execute on SQL Server
+-- Run on SQL Server
SELECT r.replica_server_name AS [Replica], r.endpoint_url AS [Endpoint],
FROM
ON rs.replica_id = r.replica_id ```
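Part of the query above is elided in the diff. Assuming the `rs` and `r` aliases refer to the usual replica-state DMV and replica catalog view, the full check typically resembles the following sketch:

```sql
-- Run on SQL Server
-- Sketch: show replica connection state; refresh until the SQL Managed
-- Instance replica shows CONNECTED
SELECT
    r.replica_server_name AS [Replica],
    r.endpoint_url AS [Endpoint],
    rs.connected_state_desc AS [Connected state]
FROM
    sys.dm_hadr_availability_replica_states rs
    JOIN sys.availability_replicas r
        ON rs.replica_id = r.replica_id
```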
-In addition, once the connection is established, Managed Instance Databases view in SSMS will initially show replicated database as “Restoring…”. This is because the initial seeding is in progress moving the full backup of the database, which is followed by the catchup replication. Once the seeding process is done, the database will no longer be in “Restoring…” state. For small databases, seeding might finish quickly so you might not see the initial “Restoring…” state in SSMS.
+After the connection is established, the **Managed Instance Databases** view in SSMS initially shows the replicated databases in a **Restoring** state as the initial seeding phase moves and restores the full backup of the database. After the database is restored, replication has to catch up to bring the two databases to a synchronized state. The database will no longer be in **Restoring** after the initial seeding finishes. Seeding small databases might be fast enough that you won't see the initial **Restoring** state in SSMS.
> [!IMPORTANT]
-> The link will not work unless network connectivity exists between SQL Server and Managed Instance. To troubleshoot the network connectivity following steps described in [test bidirectional network connectivity](managed-instance-link-preparation.md#test-bidirectional-network-connectivity).
+> - The link won't work unless network connectivity exists between SQL Server and SQL Managed Instance. To troubleshoot network connectivity, follow the steps in [Test bidirectional network connectivity](managed-instance-link-preparation.md#test-bidirectional-network-connectivity).
+> - Take regular backups of the log file on SQL Server. If the used log space reaches 100 percent, replication to SQL Managed Instance stops until space use is reduced. We highly recommend that you automate log backups by setting up a daily job. For details, see [Back up log files on SQL Server](link-feature-best-practices.md#take-log-backups-regularly).
-> [!IMPORTANT]
-> Make regular backups of the log file on SQL Server. If the log space used reaches 100%, the replication to SQL Managed Instance will stop until this space use is reduced. It is highly recommended that you automate log backups through setting up a daily job. For more details on how to do this see [Backup log files on SQL Server](link-feature-best-practices.md#take-log-backups-regularly).
## Next steps
-For more information on the link feature, see the following:
+For more information on the link feature, see the following resources:
-- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).-- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).-- [Use SQL Managed Instance link with scripts to migrate database](./managed-instance-link-use-scripts-to-failover-database.md).-- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).-- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
+- [Prepare your environment for a Managed Instance link](./managed-instance-link-preparation.md)
+- [Use a Managed Instance link with scripts to migrate a database](./managed-instance-link-use-scripts-to-failover-database.md)
+- [Use a Managed Instance link via SSMS to replicate a database](./managed-instance-link-use-ssms-to-replicate-database.md)
+- [Use a Managed Instance link via SSMS to migrate a database](./managed-instance-link-use-ssms-to-failover-database.md)
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Title: Failover database with link feature in SSMS
+ Title: Fail over a database by using the link in SSMS
-description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to failover database from SQL Server to Azure SQL Managed Instance.
+description: Learn how to use the link feature in SQL Server Management Studio (SSMS) to fail over a database from SQL Server to Azure SQL Managed Instance.
Last updated 03/10/2022
-# Failover database with link feature in SSMS - Azure SQL Managed Instance
+# Fail over a database by using the link in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to use the [Managed Instance link feature](link-feature.md) to failover your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
+This article teaches you how to fail over a database from SQL Server to Azure SQL Managed Instance by using [the link feature](link-feature.md) in SQL Server Management Studio (SSMS).
-Failing over your database from your SQL Server instance to your SQL Managed Instance breaks the link between the two databases, stopping replication, and leaving both databases in an independent state, ready for individual read-write workloads.
-
-Before failing over your database, make sure you've [prepared your environment](managed-instance-link-preparation.md) and [configured replication through the link feature](managed-instance-link-use-ssms-to-replicate-database.md).
+Failing over your database from SQL Server to SQL Managed Instance breaks the link between the two databases. It stops replication and leaves both databases in an independent state, ready for individual read/write workloads.
> [!NOTE]
-> The link feature for Azure SQL Managed Instance is currently in preview.
+> The link is a feature of Azure SQL Managed Instance and is currently in preview.
## Prerequisites
-To failover your databases to Azure SQL Managed Instance, you need the following prerequisites:
+To fail over your databases to SQL Managed Instance, you need the following prerequisites:
- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).-- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one. -- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).-- [Prepared your environment for replication](managed-instance-link-preparation.md)-- Setup the [link feature and replicated your database to your managed instance in Azure](managed-instance-link-use-ssms-to-replicate-database.md).
+- Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have it.
+- [SQL Server Management Studio v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- [An environment that's prepared for replication](managed-instance-link-preparation.md).
+- [Setup of the link feature and replication of your database to your managed instance in Azure](managed-instance-link-use-ssms-to-replicate-database.md).
-## Failover database
+## Fail over a database
-Use the **Failover database to Managed Instance** wizard in SQL Server Management Studio (SSMS) to failover your database from your instance of SQL Server to your instance of SQL Managed Instance. The wizard takes you through the failing over your database, breaking the link between the two instances in the process.
+In the following steps, you use the **Failover database to Managed Instance** wizard in SSMS to fail over your database from SQL Server to SQL Managed Instance. The wizard takes you through failing over your database, breaking the link between the two instances in the process.
> [!CAUTION]
-> If you are performing a planned manual failover, stop the workload on the database hosted on the source SQL Server to allow the replicated database on the SQL Managed Instance to completely catch up and failover without data loss. If you are performing a forced failover, there may be data loss.
-
-To failover your database, follow these steps:
+> If you're performing a planned manual failover, stop the workload on the source SQL Server database to allow the SQL Managed Instance replicated database to completely catch up and fail over without data loss. If you're performing a forced failover, you might lose data.
-1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
-1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Failover database** to open the **Failover database to Managed Instance** wizard:
+1. Open SSMS and connect to your SQL Server instance.
+1. In Object Explorer, right-click your database, hover over **Azure SQL Managed Instance link**, and select **Failover database** to open the **Failover database to Managed Instance** wizard.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot that shows a database's context menu option for failover.":::
-1. Select **Next** on the **Introduction** page of the **Failover database to Managed Instance** wizard:
+1. On the **Introduction** page of the **Failover database to Managed Instance** wizard, select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot showing Introduction page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot that shows the Introduction page.":::
-3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting your SQL Managed Instance from the drop-down and then select **Next**:
+3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign in to your Azure account. Select the subscription that's hosting SQL Managed Instance from the dropdown list, and then select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot that shows the page for signing in to Azure.":::
-4. On the **Failover type** page, choose the type of failover you're performing and check the box to confirm that you've either stopped the workload for a planned failover, or you understand that there may be data loss for a forced failover. Select **Next**:
+4. On the **Failover Type** page, choose the type of failover you're performing. Select the box to confirm that you've stopped the workload for a planned failover, or that you understand you might lose data during a forced failover. Select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-failover-type.png" alt-text="Screenshot that shows the Failover Type page.":::
-1. On the **Clean up (optional)**, choose to drop the availability group if it was created solely for the purpose of migrating your database to Azure and you no longer need the availability group. If you want to keep the availability group, then leave the boxes unchecked. Select **Next**:
+1. On the **Clean-up (optional)** page, choose to drop the availability group if you created it solely for the purpose of migrating your database to Azure and you no longer need it. If you want to keep the availability group, leave the boxes cleared. Select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-cleanup-optional.png" alt-text="Screenshot that shows the page for the option of deleting an availability group.":::
-1. On the **Summary** page, review the actions that will be performed for your failover. (Optionally) You can also create a script to save and run yourself at a later time. When you're ready to proceed with the failover, select **Finish**:
+1. On the **Summary** page, review the actions that will be performed for your failover. Optionally, select **Script** to create a script that you can run at a later time. When you're ready to proceed with the failover, select **Finish**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-summary.png" alt-text="Screenshot showing Summary page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-summary.png" alt-text="Screenshot that shows the Summary page.":::
-7. The **Executing actions** page displays the progress of each action:
+7. The **Executing actions** page displays the progress of each action.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-executing-actions.png" alt-text="Screenshot that shows the page for executing actions.":::
-8. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
+8. After all steps finish, the **Results** page shows check marks next to the successfully completed actions. You can now close the window.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-results.png" alt-text="Screenshot showing Results window.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-results.png" alt-text="Screenshot that shows the Results page with completed status.":::
-## View failed over database
+## View the failed-over database
-During the failover process, the Managed Instance link is dropped and no longer exists. Both databases on the source SQL Server instance and target SQL Managed Instance can execute a read-write workload, and are completely independent.
+During the failover process, the link is dropped and no longer exists. The source SQL Server database and the target SQL Managed Instance database can both execute a read/write workload. They're completely independent.
-You can validate this by reviewing the database on the SQL Server:
+You can validate that the link has been dropped by reviewing the database on SQL Server.
-And then reviewing the database on the SQL Managed Instance:
+Then, review the database on SQL Managed Instance.
## Next steps
-For more information about Managed Instance link feature, see the following resources:
-
-To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
+To learn more, see [Link feature for Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Title: Replicate database with link feature in SSMS
+ Title: Replicate a database by using the link in SSMS
-description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to replicate database from SQL Server to Azure SQL Managed Instance.
+description: Learn how to use a link feature in SQL Server Management Studio (SSMS) to replicate a database from SQL Server to Azure SQL Managed Instance.
Last updated 03/22/2022
-# Replicate database with link feature in SSMS - Azure SQL Managed Instance
+# Replicate a database by using the link feature in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you to use the [Managed Instance link feature](link-feature.md) to replicate your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
-
-Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
+This article teaches you how to replicate your database from SQL Server to Azure SQL Managed Instance by using [the link feature](link-feature.md) in SQL Server Management Studio (SSMS).
> [!NOTE]
-> The link feature for Azure SQL Managed Instance is currently in preview.
+> The link is a feature of Azure SQL Managed Instance and is currently in preview.
## Prerequisites
-To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
+To replicate your databases to SQL Managed Instance through the link, you need the following prerequisites:
- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
-- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
-- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have it.
+- [SQL Server Management Studio v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
- A properly [prepared environment](managed-instance-link-preparation.md).
-## Replicate database
-Use the **New Managed Instance link** wizard in SQL Server Management Studio (SSMS) to setup the link between your instance of SQL Server and your instance of SQL Managed Instance. The wizard takes you through the process of creating the Managed Instance link. Once the link is created, your source database gets a read-only replica copy on your target Azure SQL Managed Instance.
+## Replicate a database
-> [!NOTE]
-> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in master or msdb databases), we recommend to script them out and run T-SQL scripts on the destination instance.
+In the following steps, you use the **New Managed Instance link** wizard in SSMS to create the link between SQL Server and SQL Managed Instance. After you create the link, your source database gets a read-only replica copy on your target managed instance.
-To set up the Managed Instance link, follow these steps:
+> [!NOTE]
+> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL scripts on the destination instance.
-1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
-1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard. If SQL Server version isn't supported, this option won't be available in the context menu.
+1. Open SSMS and connect to your SQL Server instance.
+1. In Object Explorer, right-click your database, hover over **Azure SQL Managed Instance link**, and select **Replicate database** to open the **New Managed Instance link** wizard. If your SQL Server version isn't supported, this option won't be available on the context menu.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option to replicate database after hovering over Azure SQL Managed Instance link.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot that shows a database's context menu option for replication.":::
-1. Select **Next** on the **Introduction** page of the **New Managed Instance link** wizard:
+1. On the **Introduction** page of the wizard, select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-introduction.png" alt-text="Screenshot showing the introduction page for Managed Instance link replicate database wizard.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-introduction.png" alt-text="Screenshot that shows the Introduction page of the wizard for creating a new Managed Instance link.":::
-1. On the **Requirements** page, the wizard validates requirements to establish a link to your SQL Managed Instance. Select **Next** once all the requirements are validated:
+1. On the **SQL Server requirements** page, the wizard validates requirements to establish a link to SQL Managed Instance. Select **Next** after all the requirements are validated.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing S Q L Server requirements page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-sql-server-requirements.png" alt-text="Screenshot that shows the Requirements page for a Managed Instance link.":::
-1. On the **Select Databases** page, choose one or more databases you want to replicate to your SQL Managed Instance via the Managed Instance link. Select **Next**:
+1. On the **Select Databases** page, choose one or more databases that you want to replicate to SQL Managed Instance via the link feature. Then select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-select-databases.png" alt-text="Screenshot that shows the Select Databases page.":::
-1. On the **Login to Azure and select Managed Instance** page, select **Sign In...** to sign into Microsoft Azure. Choose the subscription, resource group, and target managed instance from the drop-downs. Select **Login** and provide login details for the SQL Managed Instance:
+1. On the **Login to Azure and select Managed Instance** page, select **Sign In** to sign in to Microsoft Azure.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure.png" alt-text="Screenshot that shows the area for signing in to Azure.":::
-1. After providing all necessary information, select **Next**:
+1. On the **Login to Azure and select Managed Instance** page, choose the subscription, resource group, and target managed instance from the dropdown lists. Select **Login** and provide login details for SQL Managed Instance. After you've provided all necessary information, select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure-populated.png" alt-text="Screenshot that shows the populated page for selecting a managed instance.":::
-1. Review the prepopulated values on the **Specify Distributed AG Options** page, and change any that need customization. When ready, select **Next**.
+1. Review the prepopulated values on the **Specify Distributed AG Options** page, and change any that need customization. When you're ready, select **Next**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed A G options page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-distributed-ag-options.png" alt-text="Screenshot that shows the Specify Distributed A G Options page.":::
-1. Review the actions on the **Summary** page, and select **Finish** when ready. (Optionally) You can also create a script to save and run yourself at a later time.
+1. Review the actions on the **Summary** page. Optionally, select **Script** to create a script that you can run at a later time. When you're ready, select **Finish**.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-summary.png" alt-text="Screenshot that shows the Summary page.":::
-1. The **Executing actions** page displays the progress of each action:
+1. The **Executing actions** page displays the progress of each action.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-executing-actions.png" alt-text="Screenshot that shows the page for executing actions.":::
-1. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
+1. After all steps finish, the **Results** page shows check marks next to the successfully completed actions. You can now close the window.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-results.png" alt-text="Screenshot showing Results page.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-results.png" alt-text="Screenshot that shows the Results page with completed status.":::
-## View replicated database
+## View a replicated database
-After the Managed Instance link is created, the selected databases are replicated to the SQL Managed Instance.
+After the link is created, the selected databases are replicated to the managed instance.
-Use **Object Explorer** on your SQL Server instance to view the `Synchronized` status of the replicated database, and expand **Always On High Availability** and **Availability Groups** to view the distributed availability group that is created for the Managed Instance link.
+Use Object Explorer on your SQL Server instance to view the **Synchronized** status of the replicated database. Expand **Always On High Availability** and **Availability Groups** to view the distributed availability group that's created for the link.
-Connect to your SQL Managed Instance and use **Object Explorer** to view your replicated database. Depending on the database size and network speed, the database may initially be in a `Restoring` state. After initial seeding completes, the database is restored to the SQL Managed Instance and ready for read-only workloads:
+Connect to your managed instance and use Object Explorer to view your replicated database. Depending on the database size and network speed, the database might initially be in a **Restoring** state. After initial seeding finishes, the database is restored to the managed instance and ready for read-only workloads.
## Next steps
-To break the link and failover your database to the SQL Managed Instance, see [failover database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature in Azure SQL Managed Instance](link-feature.md).
+To break the link and fail over your database to SQL Managed Instance, see [Fail over a database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature for Azure SQL Managed Instance](link-feature.md).
azure-video-analyzer Observed People Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/observed-people-tracing.md
Title: Trace observed people in a video
description: This topic gives an overview of the trace observed people in a video concept.
Previously updated : 12/10/2021
Last updated : 03/27/2022
azure-video-analyzer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-output-json-v2.md
Title: Examine the v2 API output from Azure Video Analyzer for Media (formerly Video Indexer)
+ Title: Examine the Azure Video Analyzer for Media output
-description: This topic examines the Azure Video Analyzer for Media (formerly Video Indexer) output produced by v2 API.
+description: This topic examines the Azure Video Analyzer for Media (formerly Video Indexer) output produced by the Get Video Index API.
# Examine the Video Analyzer for Media output
-When a video is indexed, Azure Video Analyzer for Media (formerly Video Indexer) produces the JSON content that contains details of the specified video insights. The insights include: transcripts, OCRs, faces, topics, blocks, etc. Each insight type includes instances of time ranges that show when the insight appears in the video.
+When a video is indexed, Azure Video Analyzer for Media (formerly Video Indexer) produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
You can visually examine the video's summarized insights by pressing the **Play** button on the video on the [Video Analyzer for Media](https://www.videoindexer.ai/) website.
-You can also use the API by calling the **Get Video Index** API and the response status is OK, you get a detailed JSON output as the response content.
+You can also use the Get Video Index API. If the response status is `OK`, you get a detailed JSON output as the response content.
-![Insights](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
+![Screenshot of the Insights tab in Azure Video Analyzer for Media.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
-This article examines the Video Analyzer for Media output (JSON content). <br/>For information about what features and insights are available to you, see [Video Analyzer for Media insights](video-indexer-overview.md#video-insights).
+This article examines the Video Analyzer for Media output (JSON content). For information about what features and insights are available to you, see [Video Analyzer for Media insights](video-indexer-overview.md#video-insights).
> [!NOTE]
-> Expiration of all the access tokens in Video Analyzer for Media is one hour.
+> All the access tokens in Video Analyzer for Media expire in one hour.
## Get the insights
-### Insights/output produced in the website/portal
+To get insights produced on the website or the Azure portal:
1. Browse to the [Video Analyzer for Media](https://www.videoindexer.ai/) website and sign in.
-1. Find a video the output of which you want to examine.
+1. Find a video whose output you want to examine.
1. Press **Play**.
-1. Select the **Insights** tab (summarized insights) or the **Timeline** tab (allows to filter the relevant insights).
+1. Select the **Insights** tab to get summarized insights. Or select the **Timeline** tab to filter the relevant insights.
1. Download artifacts and what's in them. For more information, see [View and edit video insights](video-indexer-view-edit.md).
-## Insights/output produced by API
+To get insights produced by the API:
-1. To retrieve the JSON file, call [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index)
-1. If you are also interested in specific artifacts, call [Get Video Artifact Download URL API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url)
+- To retrieve the JSON file, call the [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
+- If you're interested in specific artifacts, call the [Get Video Artifact Download URL API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
- In the API call, specify the requested artifact type (OCR, Faces, Key frames etc.)
+ In the API call, specify the requested artifact type (for example, OCR, face, or keyframe).
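As a minimal sketch, the Get Video Index request URL can be composed before sending it with any HTTP client. The URL pattern follows the operation shown in the API portal linked above; the location, account ID, video ID, and access token below are placeholder assumptions, not real values:

```python
# Sketch: compose a Get Video Index request URL for Video Analyzer for Media.
# All identifier values here are placeholders, not real account data.

def build_video_index_url(location, account_id, video_id, access_token):
    """Return the Get Video Index endpoint URL for the given video."""
    return (
        f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
        f"/Videos/{video_id}/Index?accessToken={access_token}"
    )

url = build_video_index_url("trial", "my-account-id", "63c6d532ff", "<token>")
print(url)
```

Sending a GET request to this URL returns the detailed JSON described in the following sections when the response status is `OK`.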
## Root elements of the insights

|Name|Description|
|---|---|
-|accountId|The playlist's VI account ID.|
-|id|The playlist's ID.|
-|name|The playlist's name.|
-|description|The playlist's description.|
-|userName|The name of the user who created the playlist.|
-|created|The playlist's creation time.|
-|privacyMode|The playlist's privacy mode (Private/Public).|
-|state|The playlist's (uploaded, processing, processed, failed, quarantined).|
-|isOwned|Indicates whether the playlist was created by the current user.|
-|isEditable|Indicates whether the current user is authorized to edit the playlist.|
-|isBase|Indicates whether the playlist is a base playlist (a video) or a playlist made of other videos (derived).|
-|durationInSeconds|The total duration of the playlist.|
-|summarizedInsights|Contains one [summarizedInsights](#summarizedinsights).
-|videos|A list of [videos](#videos) constructing the playlist.<br/>If this playlist of constructed of time ranges of other videos (derived), the videos in this list will contain only data from the included time ranges.|
+|`accountId`|The playlist's VI account ID.|
+|`id`|The playlist's ID.|
+|`name`|The playlist's name.|
+|`description`|The playlist's description.|
+|`userName`|The name of the user who created the playlist.|
+|`created`|The playlist's creation time.|
+|`privacyMode`|The playlist's privacy mode (`Private` or `Public`).|
+|`state`|The playlist's state (`Uploaded`, `Processing`, `Processed`, `Failed`, or `Quarantined`).|
+|`isOwned`|Indicates whether the current user created the playlist.|
+|`isEditable`|Indicates whether the current user is authorized to edit the playlist.|
+|`isBase`|Indicates whether the playlist is a base playlist (a video) or a playlist made of other videos (derived).|
+|`durationInSeconds`|The total duration of the playlist.|
+|`summarizedInsights`|Contains one [summarized insight](#summarizedinsights).|
+|`videos`|A list of [videos](#videos) that construct the playlist.<br/>If this playlist is constructed of time ranges of other videos (derived), the videos in this list will contain only data from the included time ranges.|
```json
{
For more information, see [View and edit video insights](video-indexer-view-edit
## summarizedInsights
-This section shows the summary of the insights.
+This section shows a summary of the insights.
|Attribute|Description|
|---|---|
-|name|The name of the video. For example, Azure Monitor.|
-|id|The ID of the video. For example, 63c6d532ff.|
-|privacyMode|Your breakdown can have one of the following modes: **Private**, **Public**. **Public** - the video is visible to everyone in your account and anyone that has a link to the video. **Private** - the video is visible to everyone in your account.|
-|duration|Contains one duration that describes the time an insight occurred. Duration is in seconds.|
-|thumbnailVideoId|The ID of the video from which the thumbnail was taken.
-|thumbnailId|The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it thumbnailVideoId and thumbnailId.|
-|faces/animatedCharacters|May contain zero or more faces. For more detailed information, see [faces/animatedCharacters](#facesanimatedcharacters).|
-|keywords|May contain zero or more keywords. For more detailed information, see [keywords](#keywords).|
-|sentiments|May contain zero or more sentiments. For more detailed information, see [sentiments](#sentiments).|
-|audioEffects| May contain zero or more audioEffects. For more detailed information, see [audioEffects](#audioeffects-preview).|
-|labels| May contain zero or more labels. For detailed more information, see [labels](#labels).|
-|brands| May contain zero or more brands. For more detailed information, see [brands](#brands).|
-|statistics | For more detailed information, see [statistics](#statistics).|
-|emotions| May contain zero or more emotions. For More detailed information, see [emotions](#emotions).|
-|topics|May contain zero or more topics. The [topics](#topics) insight.|
+|`name`|The name of the video. For example: `Azure Monitor`.|
+|`id`|The ID of the video. For example: `63c6d532ff`.|
+|`privacyMode`|Your breakdown can have one of the following modes: `Public` or `Private`. A `Public` video is visible to everyone in your account and to anyone who has a link to the video. A `Private` video is visible to everyone in your account.|
+|`duration`|The time when an insight occurred, in seconds.|
+|`thumbnailVideoId`|The ID of the video from which the thumbnail was taken.|
+|`thumbnailId`|The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it `thumbnailVideoId` and `thumbnailId`.|
+|`faces/animatedCharacters`|Contains zero or more faces. For more information, see [faces/animatedCharacters](#facesanimatedcharacters).|
+|`keywords`|Contains zero or more keywords. For more information, see [keywords](#keywords).|
+|`sentiments`|Contains zero or more sentiments. For more information, see [sentiments](#sentiments).|
+|`audioEffects`| Contains zero or more audio effects. For more information, see [audioEffects](#audioeffects-preview).|
+|`labels`| Contains zero or more labels. For more information, see [labels](#labels).|
+|`brands`| Contains zero or more brands. For more information, see [brands](#brands).|
+|`statistics` | For more information, see [statistics](#statistics).|
+|`emotions`| Contains zero or more emotions. For more information, see [emotions](#emotions).|
+|`topics`|Contains zero or more topics. For more information, see [topics](#topics).|
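For illustration, a sketch like the following can pull a few of the summarized fields out of a response fragment. The sample values below are hypothetical assumptions, not actual service output:

```python
# Sketch: read a few summarizedInsights fields from a JSON-like dict.
# The fragment below is hypothetical, not actual Video Analyzer output.
summarized = {
    "name": "Azure Monitor",
    "id": "63c6d532ff",
    "privacyMode": "Private",
    "duration": {"time": "0:01:30", "seconds": 90},
    "faces": [{"name": "Unknown #1"}],
}

# A Private video is visible only to users in the account.
is_private = summarized["privacyMode"] == "Private"
face_count = len(summarized.get("faces", []))
print(summarized["name"], is_private, face_count)
```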
## videos

|Name|Description|
|---|---|
-|accountId|The video's VI account ID.|
-|id|The video's ID.|
-|name|The video's name.
-|state|The video's state (uploaded, processing, processed, failed, quarantined).|
-|processingProgress|The processing progress during processing (for example, 20%).|
-|failureCode|The failure code if failed to process (for example, 'UnsupportedFileType').|
-|failureMessage|The failure message if failed to process.|
-|externalId|The video's external ID (if specified by the user).|
-|externalUrl|The video's external url (if specified by the user).|
-|metadata|The video's external metadata (if specified by the user).|
-|isAdult|Indicates whether the video was manually reviewed and identified as an adult video.|
-|insights|The insights object. For more information, see [insights](#insights).|
-|thumbnailId|The video's thumbnail ID. To get the actual thumbnail call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it the video ID and thumbnailId.|
-|publishedUrl|A url to stream the video.|
-|publishedUrlProxy|A url to stream the video from (for Apple devices).|
-|viewToken|A short lived view token for streaming the video.|
-|sourceLanguage|The video's source language.|
-|language|The video's actual language (translation).|
-|indexingPreset|The preset used to index the video.|
-|streamingPreset|The preset used to publish the video.|
-|linguisticModelId|The CRIS model used to transcribe the video.|
-|statistics | For more information, see [statistics](#statistics).|
+|`accountId`|The video's VI account ID.|
+|`id`|The video's ID.|
+|`name`|The video's name.|
+|`state`|The video's state (`Uploaded`, `Processing`, `Processed`, `Failed`, or `Quarantined`).|
+|`processingProgress`|The progress during processing. For example: `20%`.|
+|`failureCode`|The failure code if the video failed to process. For example: `UnsupportedFileType`.|
+|`failureMessage`|The failure message if the video failed to process.|
+|`externalId`|The video's external ID (if the user specifies one).|
+|`externalUrl`|The video's external URL (if the user specifies one).|
+|`metadata`|The video's external metadata (if the user specifies one).|
+|`isAdult`|Indicates whether the video was manually reviewed and identified as an adult video.|
+|`insights`|The insights object. For more information, see [insights](#insights).|
+|`thumbnailId`|The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it the video ID and thumbnail ID.|
+|`publishedUrl`|A URL to stream the video.|
+|`publishedUrlProxy`|A URL to stream the video on Apple devices.|
+|`viewToken`|A short-lived view token for streaming the video.|
+|`sourceLanguage`|The video's source language.|
+|`language`|The video's actual language (translation).|
+|`indexingPreset`|The preset used to index the video.|
+|`streamingPreset`|The preset used to publish the video.|
+|`linguisticModelId`|The transcript customization (CRIS) model used to transcribe the video.|
+|`statistics` | For more information, see [statistics](#statistics).|
```json {
This section shows the summary of the insights.
``` ### insights
-Each insight (for example, transcript lines, faces, brands, etc.), contains a list of unique elements (for example, face1, face2, face3), and each element has its own metadata and a list of its instances (which are time ranges with additional optional metadata).
+Each insight (for example, transcript lines, faces, or brands) contains a list of unique elements (for example, `face1`, `face2`, `face3`). Each element has its own metadata and a list of its instances, which are time ranges with additional metadata.
-A face might have an ID, a name, a thumbnail, other metadata, and a list of its temporal instances (for example: 00:00:05 – 00:00:10, 00:01:00 - 00:02:30 and 00:41:21 – 00:41:49.) Each temporal instance can have additional metadata. For example, the face's rectangle coordinates (20,230,60,60).
+A face might have an ID, a name, a thumbnail, other metadata, and a list of its temporal instances (for example, `00:00:05 – 00:00:10`, `00:01:00 - 00:02:30`, and `00:41:21 – 00:41:49`). Each temporal instance can have additional metadata. For example, the metadata can include the face's rectangle coordinates (`20,230,60,60`).
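The element/instance shape described above can be sketched as a small traversal. This is a hypothetical sketch with invented sample data; only the field names (`instances`, `name`, `id`, `start`, `end`) follow the tables in this article:

```python
# Sketch: walk a parsed insights object and list each element's time ranges.
# The sample data is invented; field names follow the tables in this article.
insights = {
    "faces": [
        {
            "id": 1,
            "name": "Unknown #0",
            "instances": [
                {"start": "0:00:05", "end": "0:00:10"},
                {"start": "0:01:00", "end": "0:02:30"},
            ],
        }
    ],
    "brands": [
        {"id": 1, "name": "Contoso", "instances": [{"start": "0:00:12", "end": "0:00:15"}]}
    ],
}

def list_appearances(insights):
    """Yield (insight_type, element_label, start, end) for every instance."""
    for insight_type, elements in insights.items():
        for element in elements:
            # Fall back to the element ID when there is no name.
            label = element.get("name", element.get("id"))
            for instance in element.get("instances", []):
                yield insight_type, label, instance["start"], instance["end"]

for row in list_appearances(insights):
    print(row)
```

Each element carries its own metadata, so per-instance fields (such as a face rectangle) would be read from the instance dictionaries in the inner loop.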
|Version|The code version| |||
-|sourceLanguage|The video's source language (assuming one master language). In the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string.|
-|language|The insights language (translated from the source language). In the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string.|
-|transcript|The [transcript](#transcript) insight.|
-|ocr|The [OCR](#ocr) insight.|
-|keywords|The [keywords](#keywords) insight.|
-|blocks|May contain one or more [blocks](#blocks)|
-|faces/animatedCharacters|The [faces/animatedCharacters](#facesanimatedcharacters) insight.|
-|labels|The [labels](#labels) insight.|
-|shots|The [shots](#shots) insight.|
-|brands|The [brands](#brands) insight.|
-|audioEffects|The [audioEffects](#audioeffects-preview) insight.|
-|sentiments|The [sentiments](#sentiments) insight.|
-|visualContentModeration|The [visualContentModeration](#visualcontentmoderation) insight.|
-|textualContentModeration|The [textualContentModeration](#textualcontentmoderation) insight.|
-|emotions| The [emotions](#emotions) insight.|
-|topics|The [topics](#topics) insight.|
-|speakers|The [speakers](#speakers) insight.|
+|`sourceLanguage`|The video's source language (assuming one master language), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string.|
+|`language`|The insights language (translated from the source language), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string.|
+|`transcript`|The [transcript](#transcript) insight.|
+|`ocr`|The [OCR](#ocr) insight.|
+|`keywords`|The [keywords](#keywords) insight.|
+|`blocks`|Might contain one or more [blocks](#blocks).|
+|`faces/animatedCharacters`|The [faces/animatedCharacters](#facesanimatedcharacters) insight.|
+|`labels`|The [labels](#labels) insight.|
+|`shots`|The [shots](#shots) insight.|
+|`brands`|The [brands](#brands) insight.|
+|`audioEffects`|The [audioEffects](#audioeffects-preview) insight.|
+|`sentiments`|The [sentiments](#sentiments) insight.|
+|`visualContentModeration`|The [visualContentModeration](#visualcontentmoderation) insight.|
+|`textualContentModeration`|The [textualContentModeration](#textualcontentmoderation) insight.|
+|`emotions`| The [emotions](#emotions) insight.|
+|`topics`|The [topics](#topics) insight.|
+|`speakers`|The [speakers](#speakers) insight.|
Example:
Example:
Attribute | Description |
-id|ID of the block.|
-instances|A list of time ranges of this block.|
+`id`|The ID of the block.|
+`instances`|A list of time ranges for this block.|
#### transcript |Name|Description| |||
-|id|The line ID.|
-|text|The transcript itself.|
-|confidence|The transcript accuracy confidence.|
-|speakerId|The ID of the speaker.|
-|language|The transcript language. Intended to support transcript where each line can have a different language.|
-|instances|A list of time ranges where this line appeared. If the instance is transcript, it will have only 1 instance.|
+|`id`|The line ID.|
+|`text`|The transcript itself.|
+|`confidence`|The confidence level for transcript accuracy.|
+|`speakerId`|The ID of the speaker.|
+|`language`|The transcript language. It's intended to support transcripts where each line can have a different language.|
+|`instances`|A list of time ranges where this line appeared. If the instance is in a transcript, it will have only one instance.|
Example:
Example:
|Name|Description| |||
-|id|The OCR line ID.|
-|text|The OCR text.|
-|confidence|The recognition confidence.|
-|language|The OCR language.|
-|instances|A list of time ranges where this OCR appeared (the same OCR can appear multiple times).|
-|height|The height of the OCR rectangle.|
-|top|The top location in px.|
-|left|The left location in px.|
-|width|The width of the OCR rectangle.|
-|angle|The angle of the OCR rectangle, from -180 to 180. 0 means left to right horizontal, 90 means top to bottom vertical, 180 means right to left horizontal, and -90 means bottom to top vertical. 30 means from top left to bottom right. |
+|`id`|The OCR's line ID.|
+|`text`|The OCR's text.|
+|`confidence`|The recognition confidence.|
+|`language`|The OCR's language.|
+|`instances`|A list of time ranges where this OCR appeared. (The same OCR can appear multiple times.)|
+|`height`|The height of the OCR rectangle.|
+|`top`|The top location, in pixels.|
+|`left`|The left location, in pixels.|
+|`width`|The width of the OCR rectangle.|
+|`angle`|The angle of the OCR rectangle, from `-180` to `180`. A value of `0` means left-to-right horizontal. A value of `90` means top-to-bottom vertical. A value of `180` means right-to-left horizontal. A value of `-90` means bottom-to-top vertical. A value of `30` means from top left to bottom right. |
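The `angle` semantics can be captured in a small helper. Only the four cardinal values come from the table; the fallback wording for intermediate angles is an assumption:

```python
def describe_angle(angle: float) -> str:
    """Map an OCR rectangle angle (-180..180) to a reading orientation,
    per the cardinal values documented for the `angle` field."""
    cardinal = {
        0: "left-to-right horizontal",
        90: "top-to-bottom vertical",
        180: "right-to-left horizontal",
        -90: "bottom-to-top vertical",
    }
    if angle in cardinal:
        return cardinal[angle]
    # For example, 30 means tilted from top left toward bottom right.
    return f"tilted by {angle} degrees"

print(describe_angle(0))   # left-to-right horizontal
print(describe_angle(30))  # tilted by 30 degrees
```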
```json "ocr": [
Example:
|Name|Description| |||
-|id|The keyword ID.|
-|text|The keyword text.|
-|confidence|The keyword's recognition confidence.|
-|language|The keyword language (when translated).|
-|instances|A list of time ranges where this keyword appeared (a keyword can appear multiple times).|
+|`id`|The keyword's ID.|
+|`text`|The keyword's text.|
+|`confidence`|Recognition confidence in the keyword.|
+|`language`|The keyword language (when translated).|
+|`instances`|A list of time ranges where this keyword appeared. (A keyword can appear multiple times.)|
```json {
Example:
#### faces/animatedCharacters
-`animatedCharacters` element replaces `faces` in case the video was indexed with an animated characters model. This is done using a custom model in Custom Vision, Video Analyzer for Media runs it on keyframes.
+The `animatedCharacters` element replaces `faces` if the video was indexed with an animated characters model. This indexing is done through a custom model in Custom Vision. Video Analyzer for Media runs it on keyframes.
-If faces (not animated characters) are present, Video Analyzer for Media uses Face API on all the video's frames to detect faces and celebrities.
+If faces (not animated characters) are present, Video Analyzer for Media uses the Face API on all the video's frames to detect faces and celebrities.
|Name|Description| |||
-|id|The face ID.|
-|name|The name of the face. It can be 'Unknown #0, an identified celebrity or a customer trained person.|
-|confidence|The face identification confidence.|
-|description|A description of the celebrity. |
-|thumbnailId|The ID of the thumbnail of that face.|
-|knownPersonId|If it is a known person, its internal ID.|
-|referenceId|If it is a Bing celebrity, its Bing ID.|
-|referenceType|Currently, just Bing.|
-|title|If it is a celebrity, its title (for example "Microsoft's CEO").|
-|imageUrl|If it is a celebrity, its image url.|
-|instances|These are instances of where the face appeared in the given time range. Each instance also has a thumbnailsId. |
+|`id`|The face's ID.|
+|`name`|The name of the face. It can be `Unknown #0`, an identified celebrity, or a customer-trained person.|
+|`confidence`|The level of confidence in face identification.|
+|`description`|A description of the celebrity. |
+|`thumbnailId`|The ID of the thumbnail of the face.|
+|`knownPersonId`|If it's a known person, the internal ID.|
+|`referenceId`|If it's a Bing celebrity, the Bing ID.|
+|`referenceType`|Currently, just Bing.|
+|`title`|If it's a celebrity, the person's title. For example: `Microsoft's CEO`.|
+|`imageUrl`|If it's a celebrity, the image URL.|
+|`instances`|Instances of where the face appeared in the time range. Each instance also has a `thumbnailsIds` value. |
```json "faces": [{
If faces (not animated characters) are present, Video Analyzer for Media uses Fa
|Name|Description| |||
-|id|The label ID.|
-|name|The label name (for example, 'Computer', 'TV').|
-|language|The label name language (when translated). BCP-47|
-|instances|A list of time ranges where this label appeared (a label can appear multiple times). Each instance has a confidence field. |
+|`id`|The label's ID.|
+|`name`|The label's name. For example: `Computer` or `TV`.|
+|`language`|The language of the label's name (when translated), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string.|
+|`instances`|A list of time ranges where this label appeared. (A label can appear multiple times.) Each instance has a confidence field. |
```json
If faces (not animated characters) are present, Video Analyzer for Media uses Fa
|Name|Description| |||
-|id|The scene ID.|
-|instances|A list of time ranges of this scene (a scene can only have 1 instance).|
+|`id`|The scene's ID.|
+|`instances`|A list of time ranges for this scene. (A scene can have only one instance.)|
```json "scenes":[
If faces (not animated characters) are present, Video Analyzer for Media uses Fa
|Name|Description| |||
-|id|The shot ID.|
-|keyFrames|A list of keyFrames within the shot (each has an ID and a list of instances time ranges). Each keyFrame instance has a thumbnailId field, which holds the keyFrame's thumbnail ID.|
-|instances|A list of time ranges of this shot (a shot can only have 1 instance).|
+|`id`|The shot's ID.|
+|`keyFrames`|A list of keyframes within the shot. Each has an ID and a list of instance time ranges. Each keyframe instance has a `thumbnailId` field, which holds the keyframe's thumbnail ID.|
+|`instances`|A list of time ranges for this shot. (A shot can have only one instance.)|
```json "shots":[
If faces (not animated characters) are present, Video Analyzer for Media uses Fa
#### brands
-Business and product brand names detected in the speech to text transcript and/or Video OCR. This does not include visual recognition of brands or logo detection.
+Video Analyzer for Media detects business and product brand names in the speech-to-text transcript and/or video OCR. This information does not include visual recognition of brands or logo detection.
|Name|Description| |||
-|id|The brand ID.|
-|name|The brands name.|
-|referenceId | The suffix of the brand wikipedia url. For example, "Target_Corporation" is the suffix of [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation).
-|referenceUrl | The brand's Wikipedia url, if exists. For example, [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation).
-|description|The brands description.|
-|tags|A list of predefined tags that were associated with this brand.|
-|confidence|The confidence value of the Video Analyzer for Media brand detector (0-1).|
-|instances|A list of time ranges of this brand. Each instance has a brandType, which indicates whether this brand appeared in the transcript or in OCR.|
+|`id`|The brand's ID.|
+|`name`|The brand's name.|
+|`referenceId` | The suffix of the brand's Wikipedia URL. For example, `Target_Corporation` is the suffix of [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation).
+|`referenceUrl` | The brand's Wikipedia URL, if it exists. For example: [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation).
+|`description`|The brand's description.|
+|`tags`|A list of predefined tags that were associated with this brand.|
+|`confidence`|The confidence value of the Video Analyzer for Media brand detector (`0`-`1`).|
+|`instances`|A list of time ranges for this brand. Each instance has a `brandType` value, which indicates whether this brand appeared in the transcript or in an OCR.|
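The relationship between `referenceId` and `referenceUrl` implied above can be sketched as follows (a hypothetical helper, assuming the URL is simply the Wikipedia base plus the suffix):

```python
WIKIPEDIA_BASE = "https://en.wikipedia.org/wiki/"

def brand_reference_url(reference_id: str) -> str:
    """Build a brand's Wikipedia URL from its referenceId suffix."""
    return WIKIPEDIA_BASE + reference_id

print(brand_reference_url("Target_Corporation"))
# https://en.wikipedia.org/wiki/Target_Corporation
```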
```json "brands": [
Business and product brand names detected in the speech to text transcript and/o
|Name|Description| |||
-|CorrespondenceCount|Number of correspondences in the video.|
-|SpeakerWordCount|The number of words per speaker.|
-|SpeakerNumberOfFragments|The amount of fragments the speaker has in a video.|
-|SpeakerLongestMonolog|The speaker's longest monolog. If the speaker has silences inside the monolog it is included. Silence at the beginning and the end of the monolog is removed.|
-|SpeakerTalkToListenRatio|The calculation is based on the time spent on the speaker's monolog (without the silence in between) divided by the total time of the video. The time is rounded to the third decimal point.|
+|`CorrespondenceCount`|The number of correspondences in the video.|
+|`SpeakerWordCount`|The number of words per speaker.|
+|`SpeakerNumberOfFragments`|The number of fragments that the speaker has in a video.|
+|`SpeakerLongestMonolog`|The speaker's longest monolog. If the speaker has silence inside the monolog, it's included. Silence at the beginning and the end of the monolog is removed.|
+|`SpeakerTalkToListenRatio`|The calculation is based on the time spent on the speaker's monolog (without the silence in between) divided by the total time of the video. The time is rounded to the third decimal point.|
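The `SpeakerTalkToListenRatio` calculation described above can be sketched as a hypothetical helper. The rounding rule comes from the table; the function name and sample numbers are invented:

```python
def talk_to_listen_ratio(monolog_seconds: float, video_seconds: float) -> float:
    """Speaker's monolog time (silence in between excluded) divided by the
    total time of the video, rounded to the third decimal point."""
    return round(monolog_seconds / video_seconds, 3)

# A speaker who talks for 125 seconds of a 600-second video:
print(talk_to_listen_ratio(125, 600))  # 0.208
```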
#### audioEffects (preview) |Name|Description |||
-|id|The audio effect ID|
-|type|The audio effect type|
-|name| The audio effect type in the language in which the JSON was indexed. |
-|instances|A list of time ranges where this audio effect appeared. Each instance has a confidence field.|
-|start + end| Time range in the time original video.|
-|adjustedStart + adjustedEnd|[time range vs adjusted time range](concepts-overview.md#time-range-vs-adjusted-time-range)|
+|`id`|The audio effect's ID.|
+|`type`|The audio effect's type.|
+|`name`| The audio effect's type in the language in which the JSON was indexed. |
+|`instances`|A list of time ranges where this audio effect appeared. Each instance has a confidence field.|
+|`start` + `end`| The time range in the original video.|
+|`adjustedStart` + `adjustedEnd`|[Time range versus adjusted time range](concepts-overview.md#time-range-vs-adjusted-time-range).|
```json audioEffects: [{
audioEffects: [{
#### sentiments
-Sentiments are aggregated by their sentimentType field (Positive/Neutral/Negative). For example, 0-0.1, 0.1-0.2.
+Sentiments are aggregated by their `sentimentType` field (`Positive`, `Neutral`, or `Negative`). For example: `0-0.1`, `0.1-0.2`.
|Name|Description| |||
-|id|The sentiment ID.|
-|averageScore |The average of all scores of all instances of that sentiment type - Positive/Neutral/Negative|
-|instances|A list of time ranges where this sentiment appeared.|
-|sentimentType |The type can be 'Positive', 'Neutral', or 'Negative'.|
+|`id`|The sentiment's ID.|
+|`averageScore` |The average of all scores of all instances of that sentiment type.|
+|`instances`|A list of time ranges where this sentiment appeared.|
+|`sentimentType` |The type can be `Positive`, `Neutral`, or `Negative`.|
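How `averageScore` aggregates per `sentimentType` can be sketched like this (invented sample scores; only the grouping and averaging rule come from the table):

```python
from collections import defaultdict

def average_scores(instances):
    """Average the score of each sentiment instance, grouped by sentimentType."""
    grouped = defaultdict(list)
    for inst in instances:
        grouped[inst["sentimentType"]].append(inst["score"])
    return {kind: round(sum(scores) / len(scores), 2) for kind, scores in grouped.items()}

sample = [
    {"sentimentType": "Positive", "score": 0.8},
    {"sentimentType": "Positive", "score": 0.6},
    {"sentimentType": "Negative", "score": 0.9},
]
print(average_scores(sample))  # {'Positive': 0.7, 'Negative': 0.9}
```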
```json "sentiments": [
Sentiments are aggregated by their sentimentType field (Positive/Neutral/Negativ
#### visualContentModeration
-The visualContentModeration block contains time ranges which Video Analyzer for Media found to potentially have adult content. If visualContentModeration is empty, there is no adult content that was identified.
+The `visualContentModeration` block contains time ranges that Video Analyzer for Media found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
-Videos that are found to contain adult or racy content might be available for private view only. Users have the option to submit a request for a human review of the content, in which case the IsAdult attribute will contain the result of the human review.
+Videos that contain adult or racy content might be available for private view only. Users have the option to submit a request for a human review of the content. In that case, the `IsAdult` attribute will contain the result of the human review.
|Name|Description| |||
-|id|The visual content moderation ID.|
-|adultScore|The adult score (from content moderator).|
-|racyScore|The racy score (from content moderation).|
-|instances|A list of time ranges where this visual content moderation appeared.|
+|`id`|The ID of the visual content moderation.|
+|`adultScore`|The adult score (from content moderation).|
+|`racyScore`|The racy score (from content moderation).|
+|`instances`|A list of time ranges where this visual content moderation appeared.|
```json "VisualContentModeration": [
Videos that are found to contain adult or racy content might be available for pr
|Name|Description| |||
-|id|The textual content moderation ID.|
-|bannedWordsCount |The number of banned words.|
-|bannedWordsRatio |The ratio from total number of words.|
+|`id`|The ID of the textual content moderation.|
+|`bannedWordsCount` |The number of banned words.|
+|`bannedWordsRatio` |The ratio of banned words to the total number of words.|
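The `bannedWordsRatio` derivation can be sketched as a hypothetical helper (the zero-guard for an empty transcript is an assumption):

```python
def banned_words_ratio(banned_words_count: int, total_words: int) -> float:
    """Ratio of banned words to the total number of words in the transcript."""
    return banned_words_count / total_words if total_words else 0.0

print(banned_words_ratio(3, 120))  # 0.025
```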
#### emotions
-Video Analyzer for Media identifies emotions based on speech and audio cues. The identified emotion could be: joy, sadness, anger, or fear.
+Video Analyzer for Media identifies emotions based on speech and audio cues.
|Name|Description| |||
-|id|The emotion ID.|
-|type|The emotion moment that was identified based on speech and audio cues. The emotion could be: joy, sadness, anger, or fear.|
-|instances|A list of time ranges where this emotion appeared.|
+|`id`|The emotion's ID.|
+|`type`|The type of an identified emotion: `Joy`, `Sadness`, `Anger`, or `Fear`.|
+|`instances`|A list of time ranges where this emotion appeared.|
```json "emotions": [{
Video Analyzer for Media identifies emotions based on speech and audio cues. The
#### topics
-Video Analyzer for Media makes inference of main topics from transcripts. When possible, the 2nd-level [IPTC](https://iptc.org/standards/media-topics/) taxonomy is included.
+Video Analyzer for Media makes an inference of main topics from transcripts. When possible, the second-level [IPTC](https://iptc.org/standards/media-topics/) taxonomy is included.
|Name|Description| |||
-|id|The topic ID.|
-|name|The topic name, for example: "Pharmaceuticals".|
-|referenceId|Breadcrumbs reflecting the topics hierarchy. For example: "Health and wellbeing / Medicine and healthcare / Pharmaceuticals".|
-|confidence|The confidence score in the range [0,1]. Higher is more confident.|
-|language|The language used in the topic.|
-|iptcName|The IPTC media code name, if detected.|
-|instances |Currently, Video Analyzer for Media does not index a topic to time intervals, so the whole video is used as the interval.|
+|`id`|The topic's ID.|
+|`name`|The topic's name. For example: `Pharmaceuticals`.|
+|`referenceId`|Breadcrumbs that reflect the topic's hierarchy. For example: `HEALTH AND WELLBEING/MEDICINE AND HEALTHCARE/PHARMACEUTICALS`.|
+|`confidence`|The confidence score in the range `0`-`1`. Higher is more confident.|
+|`language`|The language used in the topic.|
+|`iptcName`|The IPTC media code name, if detected.|
+|`instances` |Currently, Video Analyzer for Media does not index a topic to time intervals. The whole video is used as the interval.|
```json "topics": [{
Video Analyzer for Media makes inference of main topics from transcripts. When p
|Name|Description| |||
-|id|The speaker ID.|
-|name|The speaker name in the form of "Speaker #*\<number\>*" For example: "Speaker #1".|
-|instances |A list of time ranges where this speaker appeared.|
+|`id`|The speaker's ID.|
+|`name`|The speaker's name in the form of `Speaker #<number>`. For example: `Speaker #1`.|
+|`instances` |A list of time ranges where this speaker appeared.|
```json "speakers":[
Video Analyzer for Media makes inference of main topics from transcripts. When p
## Next steps
-[Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai)
+Explore the [Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai).
For information about how to embed widgets in your application, see [Embed Video Analyzer for Media widgets into your applications](video-indexer-embed-widgets.md).
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 01/07/2022 Last updated : 03/21/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
Each of the above types of alerts is further split into **Security** and **Confi
![Screenshot for viewing alerts in Backup center](media/backup-azure-monitoring-laworkspace/backup-center-azure-monitor-alerts.png)
-Clicking on any of the numbers (or on the **Alerts** menu item) opens up a list of all active alerts fired with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, state, and so on. You can click any of the alerts to get more details about the alert, such as the affected datasource, alert description and recommended action, and so on.
+Clicking any of the numbers (or the **Alerts** menu item) opens a list of all active alerts fired, with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, and state. You can click any of the alerts to get more details, such as the affected datasource, the alert description, and the recommended action.
![Screenshot for viewing details of the alert](media/backup-azure-monitoring-laworkspace/backup-center-alert-details.png)
You can change the state of an alert to **Acknowledged** or **Closed** by clicki
> [!NOTE] > - In Backup center, only alerts for Azure-based workloads are displayed currently. To view alerts for on-premises resources, navigate to the Recovery Services vault and click the **Alerts** menu item.
-> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center.
-For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
+> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center. For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
+> - Currently, for blob restore alerts, alerts appear under datasource alerts only if you select both dimensions (*datasourceId* and *datasourceType*) while creating the alert rule. If either dimension isn't selected, the alerts appear under global alerts.
### Configuring notifications for alerts
backup Backup Center Monitor Operate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-monitor-operate.md
Title: Monitor and operate backups using Backup Center description: This article explains how to monitor and operate backups at scale using Backup Center Previously updated : 10/20/2021 Last updated : 03/21/2022+++ # Monitor and operate backups using Backup center
The following classes of alerts are displayed:
* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (such as, backup or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section. For metric alerts, if the fired alert has a datasource ID dimension associated with it, the fired alert appears under **Datasource Alerts**. * **Global Alerts**: Alerts that aren't tied to a specific datasource (such as, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section. For metric alerts, if the fired alert doesn't have a datasource ID associated with it, the fired alert appears under **Global Alerts**.
+>[!Note]
+>Currently, for blob restore alerts, alerts appear under datasource alerts only if you select both dimensions (*datasourceId* and *datasourceType*) while creating the alert rule. If either dimension isn't selected, the alerts appear under global alerts.
+ ## Vaults Selecting the **Vaults** menu item in Backup center allows you to see a list of all [Recovery Services vaults](backup-azure-recovery-services-vault-overview.md) and [Backup vaults](backup-vault-overview.md) that you have access to. You can filter the list with the following parameters:
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-support-matrix.md
Title: Support matrix for Backup center description: This article summarizes the scenarios that Backup center supports for each workload type Previously updated : 10/20/2021 Last updated : 03/21/2022+++ # Support matrix for Backup center
Backup center helps enterprises to [govern, monitor, operate, and analyze backup
| Monitoring | View all backup policies | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous | | Monitoring | View all vaults | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous | | Monitoring | View Azure Monitor alerts at scale | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Refer [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) documentation |
-| Monitoring | View Azure Backup metrics and write metric alert rules | Azure VM <br><br>SQL in Azure VM <br><br> SAP HANA in Azure VM<br><br>Azure Files | You can view metrics for all Recovery Services vaults for a region and subscription simultaneously. Viewing metrics for a larger scope in the Azure portal isn't currently supported. The same limits are also applicable to configure metric alert rules. For more information, see [View metrics in the Azure portal](metrics-overview.md#view-metrics-in-the-azure-portal).|
+| Monitoring | View Azure Backup metrics and write metric alert rules | Azure VM <br><br>SQL in Azure VM <br><br> SAP HANA in Azure VM<br><br>Azure Files <br><br>Azure Blobs | You can view metrics for all Recovery Services vaults for a region and subscription simultaneously. Viewing metrics for a larger scope in the Azure portal isn't currently supported. The same limits are also applicable to configure metric alert rules. For more information, see [View metrics in the Azure portal](metrics-overview.md#view-metrics-in-the-azure-portal).|
| Actions | Configure backup | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) | | Actions | Restore Backup Instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) | | Actions | Create vault | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Refer to support matrices for [Recovery Services vault](./backup-support-matrix.md#vault-support) |
backup Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/metrics-overview.md
Title: Monitor the health of your backups using Azure Backup Metrics (preview)
description: In this article, learn about the metrics available for Azure Backup to monitor your backup health Previously updated : 02/14/2022 Last updated : 03/21/2022
Azure Backup offers the following key capabilities:
- Azure VM, SQL databases in Azure VM - SAP HANA databases in Azure VM
- - Azure Files.
+ - Azure Files
+ - Azure Blobs.
Metrics for HANA instance workload type are currently not supported.
Currently, Azure Backup supports the following metrics:
- **Restore Health Events**: The value of this metric represents the count of health events pertaining to restore job health, which were fired for the vault within a specific time. When a restore job completes, the Azure Backup service creates a restore health event. Based on the job status (such as succeeded or failed), the dimensions associated with the event vary.
+>[!Note]
+>We support Restore Health Events only for the Azure Blobs workload, because backups are continuous and there's no notion of backup jobs.
+ By default, the counts are surfaced at the vault level. To view the counts for a particular backup item and job status, you can filter the metrics on any of the supported dimensions. The following table lists the dimensions that Backup Health Events and Restore Health Events metrics supports:
Based on the alert rules configuration, the fired alert appears under the **Data
[Learn more about datasource and global alerts here](backup-center-monitor-operate.md#alerts)
+>[!Note]
+>Currently, for blob restore alerts, alerts appear under datasource alerts only if you select both dimensions (*datasourceId* and *datasourceType*) while creating the alert rule. If either dimension isn't selected, the alerts appear under global alerts.
+ ### Accessing metrics programmatically You can use the different programmatic clients, such as PowerShell, CLI, or REST API, to access the metrics functionality. See [Azure Monitor REST API documentation](../azure-monitor/essentials/rest-api-walkthrough.md) for more details.
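As a sketch of the REST route for programmatic access, the metrics query URL can be composed like this. The resource ID is a placeholder, and the metric name `BackupHealthEvent` and the API version are assumptions to verify against the Azure Monitor REST API reference:

```python
from urllib.parse import urlencode

def metrics_url(resource_id: str, metric: str, api_version: str = "2018-01-01") -> str:
    """Compose an Azure Monitor metrics query URL for a vault resource.

    The metric name and api-version are assumptions; check the Azure Monitor
    REST API reference for the values your workload supports."""
    query = urlencode({"api-version": api_version, "metricnames": metric})
    return f"https://management.azure.com{resource_id}/providers/microsoft.insights/metrics?{query}"

# Placeholder Recovery Services vault resource ID:
vault = ("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg"
         "/providers/Microsoft.RecoveryServices/vaults/myvault")
print(metrics_url(vault, "BackupHealthEvent"))
```

The resulting URL would be issued with a bearer token against the Azure Resource Manager endpoint, as described in the Azure Monitor REST API walkthrough linked above.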
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 01/13/2022 Last updated : 03/28/2022
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Scenario** | **Supported configurations** | **Unsupported configurations** |
| -- | -- | -- |
| **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) |
-| **Regions** | **GA:**<br> **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China North, China East2, China North 2 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
+| **Regions** | **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 and SP3 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2 and 8.4 | |
| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 56, SPS 06 (validated for encryption enabled scenarios as well) | |
| **Encryption** | SSLEnforce, HANA data encryption | |
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
The Spatial Analysis container enables you to analyze real-time streaming video
### Spatial Analysis container requirements
-To run the Spatial Analysis container, you need a compute device with a [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
+To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge comp
* 4 GB system RAM
* 4 GB of GPU RAM
* 8 core CPU
-* 1 NVIDIA CUDA Compute Capable devices 6.0 or higher ( e.g.: NVIDIA Tesla T4, 1080Ti, or 2080Ti )
+* 1 NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 20 GB of HDD space

#### Recommended hardware
Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge comp
* 32 GB system RAM
* 16 GB of GPU RAM
* 8 core CPU
-* 2 NVIDIA Tesla T4 GPUs
+* 2 NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), 1080Ti, or 2080Ti)
* 50 GB of SSD space

In this article, you will download and install the following software packages. The host computer must be able to run the following (see below for instructions):
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
You can use the Speech SDK or Speech Command Line Interface (CLI). The Batch tra
There are some situations where [training a custom model](custom-speech-overview.md) that includes phrases is likely the best option to improve accuracy. In these cases you would not use a phrase list:
- If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases.
-- If you need a phrase list for languages that are not currently supported. For supported phrase list locales see [Language and voice support for the Speech service](language-support.md#phrase-list).
+- If you need a phrase list for languages that are not currently supported. For supported phrase list locales see [Language and voice support for the Speech service](language-support.md?tabs=phraselist).
- If you use a custom endpoint. Phrase lists can't be used with custom endpoints.

## Try it in Speech Studio
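The 500-phrase cap mentioned above is easy to guard against in client code before registering phrases with a recognizer. A minimal sketch under stated assumptions: the limit value comes from the guidance above, and `validate_phrases` is a hypothetical helper of ours, not part of the Speech SDK.

```python
# Sketch: enforcing the documented phrase list limit before handing phrases
# to a speech recognizer. The 500-entry cap comes from the guidance above;
# validate_phrases is a hypothetical helper, not a Speech SDK API.
MAX_PHRASE_LIST_SIZE = 500

def validate_phrases(phrases: list[str]) -> list[str]:
    """Trim, deduplicate, and check the phrase list against the documented cap."""
    unique = list(dict.fromkeys(p.strip() for p in phrases if p.strip()))
    if len(unique) > MAX_PHRASE_LIST_SIZE:
        raise ValueError(
            f"{len(unique)} phrases exceeds the {MAX_PHRASE_LIST_SIZE} limit; "
            "consider training a custom model instead"
        )
    return unique
```

Phrases that pass this check can then be added to the recognizer's phrase list grammar through the SDK.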
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Language support varies by Speech service functionality. The following tables su
## Speech-to-text
-Both the Microsoft Speech SDK and the REST API support the languages (locales) in the following table.
+The Speech service supports the languages (locales) in the following tables.
To improve accuracy, customization is available for some languages and baseline model versions by uploading audio + human-labeled transcripts, plain text, structured text, and pronunciation. By default, plain text customization is supported for all available baseline models. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
-| Language | Locale (BCP-47) | Customizations |
-|--|--|--|
-| Afrikaans (South Africa) | `af-ZA` | Plain text |
-| Amharic (Ethiopia) | `am-ET` | Plain text |
-| Arabic (Algeria) | `ar-DZ` | Plain text |
-| Arabic (Bahrain), modern standard | `ar-BH` | Plain text |
-| Arabic (Egypt) | `ar-EG` | Plain text |
-| Arabic (Iraq) | `ar-IQ` | Plain text |
-| Arabic (Israel) | `ar-IL` | Plain text |
-| Arabic (Jordan) | `ar-JO` | Plain text |
-| Arabic (Kuwait) | `ar-KW` | Plain text |
-| Arabic (Lebanon) | `ar-LB` | Plain text |
-| Arabic (Libya) | `ar-LY` | Plain text |
-| Arabic (Morocco) | `ar-MA` | Plain text |
-| Arabic (Oman) | `ar-OM` | Plain text |
-| Arabic (Palestinian Authority) | `ar-PS` | Plain text |
-| Arabic (Qatar) | `ar-QA` | Plain text |
-| Arabic (Saudi Arabia) | `ar-SA` | Plain text |
-| Arabic (Syria) | `ar-SY` | Plain text |
-| Arabic (Tunisia) | `ar-TN` | Plain text |
-| Arabic (United Arab Emirates) | `ar-AE` | Plain text |
-| Arabic (Yemen) | `ar-YE` | Plain text |
-| Bulgarian (Bulgaria) | `bg-BG` | Plain text |
-| Burmese (Myanmar) | `my-MM` | Plain text |
-| Catalan (Spain) | `ca-ES` | Plain text<br/>Pronunciation |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Plain text |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Plain text |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Plain text |
-| Croatian (Croatia) | `hr-HR` | Plain text<br/>Pronunciation |
-| Czech (Czech) | `cs-CZ` | Plain text<br/>Pronunciation |
-| Danish (Denmark) | `da-DK` | Plain text<br/>Pronunciation |
-| Dutch (Belgium) | `nl-BE` | Plain text |
-| Dutch (Netherlands) | `nl-NL` | Plain text<br/>Pronunciation |
-| English (Australia) | `en-AU` | Plain text<br/>Pronunciation |
-| English (Canada) | `en-CA` | Plain text<br/>Pronunciation |
-| English (Ghana) | `en-GH` | Plain text<br/>Pronunciation |
-| English (Hong Kong) | `en-HK` | Plain text<br/>Pronunciation |
-| English (India) | `en-IN` | Plain text<br>Structured Text (20210907)<br>Pronunciation |
-| English (Ireland) | `en-IE` | Plain text<br/>Pronunciation |
-| English (Kenya) | `en-KE` | Plain text<br/>Pronunciation |
-| English (New Zealand) | `en-NZ` | Plain text<br/>Pronunciation |
-| English (Nigeria) | `en-NG` | Plain text<br/>Pronunciation |
-| English (Philippines) | `en-PH` | Plain text<br/>Pronunciation |
-| English (Singapore) | `en-SG` | Plain text<br/>Pronunciation |
-| English (South Africa) | `en-ZA` | Plain text<br/>Pronunciation |
-| English (Tanzania) | `en-TZ` | Plain text<br/>Pronunciation |
-| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Plain text<br>Structured Text (20210906)<br>Pronunciation |
-| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Plain text<br>Structured Text (20211012)<br>Pronunciation |
-| Estonian (Estonia) | `et-EE` | Plain text<br/>Pronunciation |
-| Filipino (Philippines) | `fil-PH` | Plain text<br/>Pronunciation |
-| Finnish (Finland) | `fi-FI` | Plain text<br/>Pronunciation |
-| French (Belgium) | `fr-BE` | Plain text |
-| French (Canada) | `fr-CA` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
-| French (France) | `fr-FR` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
-| French (Switzerland) | `fr-CH` | Plain text<br/>Pronunciation |
-| German (Austria) | `de-AT` | Plain text<br/>Pronunciation |
-| German (Germany) | `de-DE` | Plain text<br/>Pronunciation |
-| German (Switzerland) | `de-CH` | Audio (20201127)<br>Plain text<br>Structured Text (20210831)<br>Pronunciation |
-| Greek (Greece) | `el-GR` | Plain text |
-| Gujarati (Indian) | `gu-IN` | Plain text |
-| Hebrew (Israel) | `he-IL` | Plain text |
-| Hindi (India) | `hi-IN` | Plain text |
-| Hungarian (Hungary) | `hu-HU` | Plain text<br/>Pronunciation |
-| Icelandic (Iceland) | `is-IS` | Plain text |
-| Indonesian (Indonesia) | `id-ID` | Plain text<br/>Pronunciation |
-| Irish (Ireland) | `ga-IE` | Plain text<br/>Pronunciation |
-| Italian (Italy) | `it-IT` | Audio (20201016)<br>Plain text<br>Pronunciation |
-| Japanese (Japan) | `ja-JP` | Plain text |
-| Javanese (Indonesia) | `jv-ID` | Plain text |
-| Kannada (India) | `kn-IN` | Plain text |
-| Khmer (Cambodia) | `km-KH` | Plain text |
-| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Plain text |
-| Lao (Laos) | `lo-LA` | Plain text |
-| Latvian (Latvia) | `lv-LV` | Plain text<br/>Pronunciation |
-| Lithuanian (Lithuania) | `lt-LT` | Plain text<br/>Pronunciation |
-| Macedonian (North Macedonia) | `mk-MK` | Plain text |
-| Malay (Malaysia) | `ms-MY` | Plain text |
-| Maltese (Malta) | `mt-MT` | Plain text |
-| Marathi (India) | `mr-IN` | Plain text |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Plain text |
-| Persian (Iran) | `fa-IR` | Plain text |
-| Polish (Poland) | `pl-PL` | Plain text<br/>Pronunciation |
-| Portuguese (Brazil) | `pt-BR` | Audio (20201015)<br>Plain text<br>Pronunciation |
-| Portuguese (Portugal) | `pt-PT` | Plain text<br/>Pronunciation |
-| Romanian (Romania) | `ro-RO` | Plain text<br/>Pronunciation |
-| Russian (Russia) | `ru-RU` | Plain text |
-| Serbian (Serbia) | `sr-RS` | Plain text |
-| Sinhala (Sri Lanka) | `si-LK` | Plain text |
-| Slovak (Slovakia) | `sk-SK` | Plain text<br/>Pronunciation |
-| Slovenian (Slovenia) | `sl-SI` | Plain text<br/>Pronunciation |
-| Spanish (Argentina) | `es-AR` | Plain text<br/>Pronunciation |
-| Spanish (Bolivia) | `es-BO` | Plain text<br/>Pronunciation |
-| Spanish (Chile) | `es-CL` | Plain text<br/>Pronunciation |
-| Spanish (Colombia) | `es-CO` | Plain text<br/>Pronunciation |
-| Spanish (Costa Rica) | `es-CR` | Plain text<br/>Pronunciation |
-| Spanish (Cuba) | `es-CU` | Plain text<br/>Pronunciation |
-| Spanish (Dominican Republic) | `es-DO` | Plain text<br/>Pronunciation |
-| Spanish (Ecuador) | `es-EC` | Plain text<br/>Pronunciation |
-| Spanish (El Salvador) | `es-SV` | Plain text<br/>Pronunciation |
-| Spanish (Equatorial Guinea) | `es-GQ` | Plain text |
-| Spanish (Guatemala) | `es-GT` | Plain text<br/>Pronunciation |
-| Spanish (Honduras) | `es-HN` | Plain text<br/>Pronunciation |
-| Spanish (Mexico) | `es-MX` | Plain text<br>Structured Text (20210908)<br>Pronunciation |
-| Spanish (Nicaragua) | `es-NI` | Plain text<br/>Pronunciation |
-| Spanish (Panama) | `es-PA` | Plain text<br/>Pronunciation |
-| Spanish (Paraguay) | `es-PY` | Plain text<br/>Pronunciation |
-| Spanish (Peru) | `es-PE` | Plain text<br/>Pronunciation |
-| Spanish (Puerto Rico) | `es-PR` | Plain text<br/>Pronunciation |
-| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
-| Spanish (Uruguay) | `es-UY` | Plain text<br/>Pronunciation |
-| Spanish (USA) | `es-US` | Plain text<br/>Pronunciation |
-| Spanish (Venezuela) | `es-VE` | Plain text<br/>Pronunciation |
-| Swahili (Kenya) | `sw-KE` | Plain text |
-| Swahili (Tanzania) | `sw-TZ` | Plain text |
-| Swedish (Sweden) | `sv-SE` | Plain text<br/>Pronunciation |
-| Tamil (India) | `ta-IN` | Plain text |
-| Telugu (India) | `te-IN` | Plain text |
-| Thai (Thailand) | `th-TH` | Plain text |
-| Turkish (Turkey) | `tr-TR` | Plain text |
-| Ukrainian (Ukraine) | `uk-UA` | Plain text |
-| Uzbek (Uzbekistan) | `uz-UZ` | Plain text |
-| Vietnamese (Vietnam) | `vi-VN` | Plain text |
-| Zulu (South Africa) | `zu-ZA` | Plain text |
-
-### Phrase list
+### [Speech-to-text](#tab/speechtotext)
+
+| Language | Locale (BCP-47) |
+|--|--|
+| Afrikaans (South Africa) | `af-ZA` |
+| Amharic (Ethiopia) | `am-ET` |
+| Arabic (Algeria) | `ar-DZ` |
+| Arabic (Bahrain), modern standard | `ar-BH` |
+| Arabic (Egypt) | `ar-EG` |
+| Arabic (Iraq) | `ar-IQ` |
+| Arabic (Israel) | `ar-IL` |
+| Arabic (Jordan) | `ar-JO` |
+| Arabic (Kuwait) | `ar-KW` |
+| Arabic (Lebanon) | `ar-LB` |
+| Arabic (Libya) | `ar-LY` |
+| Arabic (Morocco) | `ar-MA` |
+| Arabic (Oman) | `ar-OM` |
+| Arabic (Palestinian Authority) | `ar-PS` |
+| Arabic (Qatar) | `ar-QA` |
+| Arabic (Saudi Arabia) | `ar-SA` |
+| Arabic (Syria) | `ar-SY` |
+| Arabic (Tunisia) | `ar-TN` |
+| Arabic (United Arab Emirates) | `ar-AE` |
+| Arabic (Yemen) | `ar-YE` |
+| Bulgarian (Bulgaria) | `bg-BG` |
+| Burmese (Myanmar) | `my-MM` |
+| Catalan (Spain) | `ca-ES` |
+| Chinese (Cantonese, Traditional) | `zh-HK` |
+| Chinese (Mandarin, Simplified) | `zh-CN` |
+| Chinese (Taiwanese Mandarin) | `zh-TW` |
+| Croatian (Croatia) | `hr-HR` |
+| Czech (Czech) | `cs-CZ` |
+| Danish (Denmark) | `da-DK` |
+| Dutch (Belgium) | `nl-BE` |
+| Dutch (Netherlands) | `nl-NL` |
+| English (Australia) | `en-AU` |
+| English (Canada) | `en-CA` |
+| English (Ghana) | `en-GH` |
+| English (Hong Kong) | `en-HK` |
+| English (India) | `en-IN` |
+| English (Ireland) | `en-IE` |
+| English (Kenya) | `en-KE` |
+| English (New Zealand) | `en-NZ` |
+| English (Nigeria) | `en-NG` |
+| English (Philippines) | `en-PH` |
+| English (Singapore) | `en-SG` |
+| English (South Africa) | `en-ZA` |
+| English (Tanzania) | `en-TZ` |
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| Estonian (Estonia) | `et-EE` |
+| Filipino (Philippines) | `fil-PH` |
+| Finnish (Finland) | `fi-FI` |
+| French (Belgium) | `fr-BE` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| French (Switzerland) | `fr-CH` |
+| German (Austria) | `de-AT` |
+| German (Germany) | `de-DE` |
+| German (Switzerland) | `de-CH` |
+| Greek (Greece) | `el-GR` |
+| Gujarati (Indian) | `gu-IN` |
+| Hebrew (Israel) | `he-IL` |
+| Hindi (India) | `hi-IN` |
+| Hungarian (Hungary) | `hu-HU` |
+| Icelandic (Iceland) | `is-IS` |
+| Indonesian (Indonesia) | `id-ID` |
+| Irish (Ireland) | `ga-IE` |
+| Italian (Italy) | `it-IT` |
+| Japanese (Japan) | `ja-JP` |
+| Javanese (Indonesia) | `jv-ID` |
+| Kannada (India) | `kn-IN` |
+| Khmer (Cambodia) | `km-KH` |
+| Korean (Korea) | `ko-KR` |
+| Lao (Laos) | `lo-LA` |
+| Latvian (Latvia) | `lv-LV` |
+| Lithuanian (Lithuania) | `lt-LT` |
+| Macedonian (North Macedonia) | `mk-MK` |
+| Malay (Malaysia) | `ms-MY` |
+| Maltese (Malta) | `mt-MT` |
+| Marathi (India) | `mr-IN` |
+| Norwegian (Bokmål, Norway) | `nb-NO` |
+| Persian (Iran) | `fa-IR` |
+| Polish (Poland) | `pl-PL` |
+| Portuguese (Brazil) | `pt-BR` |
+| Portuguese (Portugal) | `pt-PT` |
+| Romanian (Romania) | `ro-RO` |
+| Russian (Russia) | `ru-RU` |
+| Serbian (Serbia) | `sr-RS` |
+| Sinhala (Sri Lanka) | `si-LK` |
+| Slovak (Slovakia) | `sk-SK` |
+| Slovenian (Slovenia) | `sl-SI` |
+| Spanish (Argentina) | `es-AR` |
+| Spanish (Bolivia) | `es-BO` |
+| Spanish (Chile) | `es-CL` |
+| Spanish (Colombia) | `es-CO` |
+| Spanish (Costa Rica) | `es-CR` |
+| Spanish (Cuba) | `es-CU` |
+| Spanish (Dominican Republic) | `es-DO` |
+| Spanish (Ecuador) | `es-EC` |
+| Spanish (El Salvador) | `es-SV` |
+| Spanish (Equatorial Guinea) | `es-GQ` |
+| Spanish (Guatemala) | `es-GT` |
+| Spanish (Honduras) | `es-HN` |
+| Spanish (Mexico) | `es-MX` |
+| Spanish (Nicaragua) | `es-NI` |
+| Spanish (Panama) | `es-PA` |
+| Spanish (Paraguay) | `es-PY` |
+| Spanish (Peru) | `es-PE` |
+| Spanish (Puerto Rico) | `es-PR` |
+| Spanish (Spain) | `es-ES` |
+| Spanish (Uruguay) | `es-UY` |
+| Spanish (USA) | `es-US` |
+| Spanish (Venezuela) | `es-VE` |
+| Swahili (Kenya) | `sw-KE` |
+| Swahili (Tanzania) | `sw-TZ` |
+| Swedish (Sweden) | `sv-SE` |
+| Tamil (India) | `ta-IN` |
+| Telugu (India) | `te-IN` |
+| Thai (Thailand) | `th-TH` |
+| Turkish (Turkey) | `tr-TR` |
+| Ukrainian (Ukraine) | `uk-UA` |
+| Uzbek (Uzbekistan) | `uz-UZ` |
+| Vietnamese (Vietnam) | `vi-VN` |
+| Zulu (South Africa) | `zu-ZA` |
+
+### [Plain text](#tab/plaintext)
+
+| Language | Locale (BCP-47) |
+|--|--|
+| Afrikaans (South Africa) | `af-ZA` |
+| Amharic (Ethiopia) | `am-ET` |
+| Arabic (Algeria) | `ar-DZ` |
+| Arabic (Bahrain), modern standard | `ar-BH` |
+| Arabic (Egypt) | `ar-EG` |
+| Arabic (Iraq) | `ar-IQ` |
+| Arabic (Israel) | `ar-IL` |
+| Arabic (Jordan) | `ar-JO` |
+| Arabic (Kuwait) | `ar-KW` |
+| Arabic (Lebanon) | `ar-LB` |
+| Arabic (Libya) | `ar-LY` |
+| Arabic (Morocco) | `ar-MA` |
+| Arabic (Oman) | `ar-OM` |
+| Arabic (Palestinian Authority) | `ar-PS` |
+| Arabic (Qatar) | `ar-QA` |
+| Arabic (Saudi Arabia) | `ar-SA` |
+| Arabic (Syria) | `ar-SY` |
+| Arabic (Tunisia) | `ar-TN` |
+| Arabic (United Arab Emirates) | `ar-AE` |
+| Arabic (Yemen) | `ar-YE` |
+| Bulgarian (Bulgaria) | `bg-BG` |
+| Burmese (Myanmar) | `my-MM` |
+| Catalan (Spain) | `ca-ES` |
+| Chinese (Cantonese, Traditional) | `zh-HK` |
+| Chinese (Mandarin, Simplified) | `zh-CN` |
+| Chinese (Taiwanese Mandarin) | `zh-TW` |
+| Croatian (Croatia) | `hr-HR` |
+| Czech (Czech) | `cs-CZ` |
+| Danish (Denmark) | `da-DK` |
+| Dutch (Belgium) | `nl-BE` |
+| Dutch (Netherlands) | `nl-NL` |
+| English (Australia) | `en-AU` |
+| English (Canada) | `en-CA` |
+| English (Ghana) | `en-GH` |
+| English (Hong Kong) | `en-HK` |
+| English (India) | `en-IN` |
+| English (Ireland) | `en-IE` |
+| English (Kenya) | `en-KE` |
+| English (New Zealand) | `en-NZ` |
+| English (Nigeria) | `en-NG` |
+| English (Philippines) | `en-PH` |
+| English (Singapore) | `en-SG` |
+| English (South Africa) | `en-ZA` |
+| English (Tanzania) | `en-TZ` |
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| Estonian (Estonia) | `et-EE` |
+| Filipino (Philippines) | `fil-PH` |
+| Finnish (Finland) | `fi-FI` |
+| French (Belgium) | `fr-BE` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| French (Switzerland) | `fr-CH` |
+| German (Austria) | `de-AT` |
+| German (Germany) | `de-DE` |
+| German (Switzerland) | `de-CH` |
+| Greek (Greece) | `el-GR` |
+| Gujarati (Indian) | `gu-IN` |
+| Hebrew (Israel) | `he-IL` |
+| Hindi (India) | `hi-IN` |
+| Hungarian (Hungary) | `hu-HU` |
+| Icelandic (Iceland) | `is-IS` |
+| Indonesian (Indonesia) | `id-ID` |
+| Irish (Ireland) | `ga-IE` |
+| Italian (Italy) | `it-IT` |
+| Japanese (Japan) | `ja-JP` |
+| Javanese (Indonesia) | `jv-ID` |
+| Kannada (India) | `kn-IN` |
+| Khmer (Cambodia) | `km-KH` |
+| Korean (Korea) | `ko-KR` |
+| Lao (Laos) | `lo-LA` |
+| Latvian (Latvia) | `lv-LV` |
+| Lithuanian (Lithuania) | `lt-LT` |
+| Macedonian (North Macedonia) | `mk-MK` |
+| Malay (Malaysia) | `ms-MY` |
+| Maltese (Malta) | `mt-MT` |
+| Marathi (India) | `mr-IN` |
+| Norwegian (Bokmål, Norway) | `nb-NO` |
+| Persian (Iran) | `fa-IR` |
+| Polish (Poland) | `pl-PL` |
+| Portuguese (Brazil) | `pt-BR` |
+| Portuguese (Portugal) | `pt-PT` |
+| Romanian (Romania) | `ro-RO` |
+| Russian (Russia) | `ru-RU` |
+| Serbian (Serbia) | `sr-RS` |
+| Sinhala (Sri Lanka) | `si-LK` |
+| Slovak (Slovakia) | `sk-SK` |
+| Slovenian (Slovenia) | `sl-SI` |
+| Spanish (Argentina) | `es-AR` |
+| Spanish (Bolivia) | `es-BO` |
+| Spanish (Chile) | `es-CL` |
+| Spanish (Colombia) | `es-CO` |
+| Spanish (Costa Rica) | `es-CR` |
+| Spanish (Cuba) | `es-CU` |
+| Spanish (Dominican Republic) | `es-DO` |
+| Spanish (Ecuador) | `es-EC` |
+| Spanish (El Salvador) | `es-SV` |
+| Spanish (Equatorial Guinea) | `es-GQ` |
+| Spanish (Guatemala) | `es-GT` |
+| Spanish (Honduras) | `es-HN` |
+| Spanish (Mexico) | `es-MX` |
+| Spanish (Nicaragua) | `es-NI` |
+| Spanish (Panama) | `es-PA` |
+| Spanish (Paraguay) | `es-PY` |
+| Spanish (Peru) | `es-PE` |
+| Spanish (Puerto Rico) | `es-PR` |
+| Spanish (Spain) | `es-ES` |
+| Spanish (Uruguay) | `es-UY` |
+| Spanish (USA) | `es-US` |
+| Spanish (Venezuela) | `es-VE` |
+| Swahili (Kenya) | `sw-KE` |
+| Swahili (Tanzania) | `sw-TZ` |
+| Swedish (Sweden) | `sv-SE` |
+| Tamil (India) | `ta-IN` |
+| Telugu (India) | `te-IN` |
+| Thai (Thailand) | `th-TH` |
+| Turkish (Turkey) | `tr-TR` |
+| Ukrainian (Ukraine) | `uk-UA` |
+| Uzbek (Uzbekistan) | `uz-UZ` |
+| Vietnamese (Vietnam) | `vi-VN` |
+| Zulu (South Africa) | `zu-ZA` |
++
+### [Structured text](#tab/structuredtext)
+
+| Language | Locale (BCP-47) |
+|--|--|
+| English (India) | `en-IN` |
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| German (Switzerland) | `de-CH` |
+| Spanish (Mexico) | `es-MX` |
+| Spanish (Spain) | `es-ES` |
+
+### [Pronunciation data](#tab/pronunciation)
+
+| Language | Locale (BCP-47) |
+|--|--|
+| Catalan (Spain) | `ca-ES` |
+| Croatian (Croatia) | `hr-HR` |
+| Czech (Czech) | `cs-CZ` |
+| Danish (Denmark) | `da-DK` |
+| Dutch (Netherlands) | `nl-NL` |
+| English (Australia) | `en-AU` |
+| English (Canada) | `en-CA` |
+| English (Ghana) | `en-GH` |
+| English (Hong Kong) | `en-HK` |
+| English (India) | `en-IN` |
+| English (Ireland) | `en-IE` |
+| English (Kenya) | `en-KE` |
+| English (New Zealand) | `en-NZ` |
+| English (Nigeria) | `en-NG` |
+| English (Philippines) | `en-PH` |
+| English (Singapore) | `en-SG` |
+| English (South Africa) | `en-ZA` |
+| English (Tanzania) | `en-TZ` |
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| Estonian (Estonia) | `et-EE` |
+| Filipino (Philippines) | `fil-PH` |
+| Finnish (Finland) | `fi-FI` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| French (Switzerland) | `fr-CH` |
+| German (Austria) | `de-AT` |
+| German (Germany) | `de-DE` |
+| German (Switzerland) | `de-CH` |
+| Hungarian (Hungary) | `hu-HU` |
+| Indonesian (Indonesia) | `id-ID` |
+| Irish (Ireland) | `ga-IE` |
+| Italian (Italy) | `it-IT` |
+| Latvian (Latvia) | `lv-LV` |
+| Lithuanian (Lithuania) | `lt-LT` |
+| Polish (Poland) | `pl-PL` |
+| Portuguese (Brazil) | `pt-BR` |
+| Portuguese (Portugal) | `pt-PT` |
+| Romanian (Romania) | `ro-RO` |
+| Slovak (Slovakia) | `sk-SK` |
+| Slovenian (Slovenia) | `sl-SI` |
+| Spanish (Argentina) | `es-AR` |
+| Spanish (Bolivia) | `es-BO` |
+| Spanish (Chile) | `es-CL` |
+| Spanish (Colombia) | `es-CO` |
+| Spanish (Costa Rica) | `es-CR` |
+| Spanish (Cuba) | `es-CU` |
+| Spanish (Dominican Republic) | `es-DO` |
+| Spanish (Ecuador) | `es-EC` |
+| Spanish (El Salvador) | `es-SV` |
+| Spanish (Guatemala) | `es-GT` |
+| Spanish (Honduras) | `es-HN` |
+| Spanish (Mexico) | `es-MX` |
+| Spanish (Nicaragua) | `es-NI` |
+| Spanish (Panama) | `es-PA` |
+| Spanish (Paraguay) | `es-PY` |
+| Spanish (Peru) | `es-PE` |
+| Spanish (Puerto Rico) | `es-PR` |
+| Spanish (Spain) | `es-ES` |
+| Spanish (Uruguay) | `es-UY` |
+| Spanish (USA) | `es-US` |
+| Spanish (Venezuela) | `es-VE` |
+| Swedish (Sweden) | `sv-SE` |
+
+### [Audio data](#tab/audiodata)
+
+| Language | Locale (BCP-47) |
+|--|--|
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| German (Switzerland) | `de-CH` |
+| Italian (Italy) | `it-IT` |
+| Korean (Korea) | `ko-KR` |
+| Portuguese (Brazil) | `pt-BR` |
+| Spanish (Spain) | `es-ES` |
+
+### [Phrase list](#tab/phraselist)
You can use the locales in this table with [phrase list](improve-accuracy-phrase-list.md).
You can use the locales in this table with [phrase list](improve-accuracy-phrase
| Portuguese (Brazil) | `pt-BR` |
| Spanish (Spain) | `es-ES` |
++
## Text-to-speech

Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region or endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices).
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
Follow these steps to use the Bot Framework Emulator to test your echo bot runni
## Deploy your bot to Azure App Service
-The next step is to deploy the echo bot to Azure. There are a few ways to deploy a bot, including the [Azure CLI](/azure/bot-service/bot-builder-deploy-az-cli) and [deployment templates](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/adaptive-dialog/03.core-bot). This tutorial focuses on publishing directly from Visual Studio.
+The next step is to deploy the echo bot to Azure. There are a few ways to deploy a bot, including the [Azure CLI](/azure/bot-service/bot-builder-deploy-az-cli) and [deployment templates](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/13.core-bot). This tutorial focuses on publishing directly from Visual Studio.
> [!NOTE]
> If **Publish** doesn't appear as you perform the following steps, use Visual Studio Installer to add the **ASP.NET and web development** workload.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
| Slovak | `sk` |✔|✔|✔|✔|✔|
| Slovenian | `sl` |✔|✔|✔|✔|✔|
-| Somali | `so` |✔|||✔||
+| 🆕Somali | `so` |✔|||✔||
| Spanish | `es` |✔|✔|✔|✔|✔|
| Swahili | `sw` |✔|✔|✔|✔|✔|
| Swedish | `sv` |✔|✔|✔|✔|✔|
| Vietnamese | `vi` |✔|✔|✔|✔|✔|
| Welsh | `cy` |✔|✔|✔|✔|✔|
| Yucatec Maya | `yua` |✔|✔||✔||
+| 🆕Zulu | `zu` |✔|||||
> [!NOTE]
> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
After you've created an entity extraction model, you can:
* [Use the runtime API to extract entities](how-to/call-api.md)
-When you start to create your own entity classification projects, use the how-to articles to learn more about developing your model in greater detail:
+When you start to create your own custom NER projects, use the how-to articles to learn more about tagging, training and consuming your model in greater detail:
* [Data selection and schema design](how-to/design-schema.md)
* [Tag data](how-to/tag-data.md)
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/detailed-call-flows.md
Communication Services is built primarily on two types of traffic: **real-time m
Users of your Communication Services solution will be connecting to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios, SIP can be used as a signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will give the users of your solution a pleasant end-user experience.
+Call flows in ACS are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters.
+
+Media traffic is encrypted by, and flows between, the caller and callee using Secure RTP (SRTP), a profile of Real-time Transport Protocol (RTP) that provides confidentiality, authentication, and replay attack protection to RTP traffic. SRTP uses a session key generated by a secure random number generator and exchanged using the signaling TLS channel.
+
+ACS media traffic between two endpoints participating in ACS audio, video, and application sharing utilizes SRTP to encrypt the media stream. Cryptographic keys are negotiated between the two endpoints over a signaling protocol, which uses a TLS 1.2 and AES-256 (in GCM mode) encrypted UDP/TCP channel.
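To make the SDP offer/answer description above concrete, here's a small sketch that extracts the transport profile from an SDP media description. The sample SDP is a hypothetical minimal offer (real offers carry many more attributes); `RTP/SAVP` is the profile identifier SDP uses for SRTP.

```python
# Sketch: reading the transport profile from an SDP (RFC 4566) body.
# SAMPLE_SDP is a hypothetical minimal offer; "RTP/SAVP" in the m= line
# signals Secure RTP (SRTP) as the media transport profile.
SAMPLE_SDP = """v=0
o=- 0 0 IN IP4 203.0.113.10
s=-
c=IN IP4 203.0.113.10
t=0 0
m=audio 49170 RTP/SAVP 0
"""

def media_profiles(sdp: str) -> list[str]:
    """Return the transport profile of each m= (media description) line."""
    profiles = []
    for line in sdp.splitlines():
        if line.startswith("m="):
            # m=<media> <port> <proto> <fmt ...>
            profiles.append(line.split()[2])
    return profiles
```

In the offer/answer exchange, caller and callee agree on such media lines before any SRTP packets flow.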
+++
### Interoperability restrictions

Media flowing through Communication Services is restricted as follows:
Internal clients will obtain local, reflexive, and relay candidates in the same
The following documents may be interesting to you:
- Learn more about [call types](../concepts/voice-video-calling/about-call-types.md)
-- Learn about [Client-server architecture](./client-and-server-architecture.md)
+- Learn about [Client-server architecture](./client-and-server-architecture.md)
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The port range of the Media Processors is shown in the following table:
## Media traffic: Media processors geography

The media traffic flows via components called media processors. Media processors are placed in the same datacenters as SIP proxies:
-- US (two in US West and US East datacenters)
-- Europe (Amsterdam and Dublin datacenters)
-- Asia (Singapore and Hong Kong SAR datacenters)
-- Australia (AU East and Southeast datacenters)
+- NOAM (US South Central, two in US West and US East datacenters)
+- Europe (UK South, France Central, Amsterdam and Dublin datacenters)
+- Asia (Singapore datacenter)
- Japan (JP East and West datacenters)-
+- Australia (AU East and Southeast datacenters)
+- LATAM (Brazil South)
+- Africa (South Africa North)
## Media traffic: Codecs
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Key features of the Calling SDK:
- **Addressing** - Azure Communication Services provides generic [identities](../identity-model.md) that are used to address communication endpoints. Clients use these identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provide clients visibility into who is connected to a call (the roster).
- **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire.
- **Device Management and Media** - The Calling SDK provides facilities for binding to audio and video devices, encodes content for efficient transmission over the communications dataplane, and renders content to output devices and views that you specify. APIs are also provided for screen and application sharing.
-- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publically switched telephony system, [using phone numbers you acquire in the Azure portal](../../quickstarts/telephony/get-phone-number.md) or programmatically.
+- **PSTN** - The Calling SDK can initiate voice calls with the traditional public switched telephone network (PSTN), [using phone numbers you acquire in the Azure portal](../../quickstarts/telephony/get-phone-number.md) or programmatically.
- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video dataplane. - **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users of an incoming call.
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
In this article, you learned about using managed identities with Azure Container
> * Use the managed identity to access the registry and pull a container image * Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/index.yml).
-* Learn how to use a [sytem-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_system-assigned_managed_identities.md) or [user-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_user-assigned_managed_identities.md) managed identity with App Service and Azure Container Registry.
+* Learn how to use a [system-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_system-assigned_managed_identities.md) or [user-assigned](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/use_user-assigned_managed_identities.md) managed identity with App Service and Azure Container Registry.
* Learn how to [deploy a container image from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md). <!-- LINKS - external -->
cosmos-db Integrated Power Bi Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-power-bi-synapse-link.md
With the integrated Power BI experience, you can visualize your Azure Cosmos DB data in near real time in just a few clicks. It uses the built-in Power BI integration feature in the Azure portal along with [Azure Synapse Link](synapse-link.md).
-Synapse Link enables you to build Power BI dashboards with no performance or cost impact to your transactional workloads, and no ETL pipelines. You can either use [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) modes. With DirectQuery, you can build dashboards using live data from your Azure Cosmos DB accounts, without importing or copying the data into Power BI.
+Synapse Link enables you to build Power BI dashboards with no performance or cost impact to your transactional workloads, and no ETL pipelines. With [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode), you can build dashboards using live data from your Azure Cosmos DB accounts, without importing or copying the data into Power BI.
## Build a Power BI report
Use the following steps to build a Power BI report from Azure Cosmos DB data in
* If you already enabled Synapse Link on some containers, the checkbox next to the container name will already be selected. You may optionally deselect containers, based on the data you'd like to visualize in Power BI.
- * You can enable Synapse Link on your existing containers.
+ * If Synapse Link is not enabled, you can enable it on your existing containers.
- > [!IMPORTANT]
- > Due to short-term capacity constraints, register to enable Synapse Link on your existing containers. Depending on the pending requests, approving this request may take anywhere from a day to a week. If you have any issues or questions, please reach out to the [Azure Cosmos DB Synapse team](mailto:cosmosdbsynapselink@microsoft.com).
-
- :::image type="content" source="./media/integrated-power-bi-synapse-link/register-synapse-link.png" alt-text="Register Synapse Link for selected containers." border="true" lightbox="./media/integrated-power-bi-synapse-link/register-synapse-link.png":::
-
- > [!NOTE]
- > You will only need to register once per each subscription. When the subscription is approved, you can enable Synapse Link for existing containers in all eligible accounts within that subscription.
-
-1. Select **register** to enable Synapse Link on your existing accounts. The status changes to "registration pending". After your request is approved by Azure Cosmos DB team, this button will go away, and you will be able to select your existing containers.
-
-1. Select any of the existing containers and select **Next**.
-
- If enabling Synapse Link is in progress on any of the containers, the data from those containers will not be included. You should come back to this tab later and import data when the containers are enabled.
+ If enabling Synapse Link is in progress on any of the containers, the data from those containers will not be included. You should come back to this tab later and import data when the containers are enabled.
:::image type="content" source="./media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png" alt-text="Progress of Synapse Link enabled on existing containers." border="true" lightbox="./media/integrated-power-bi-synapse-link/synapse-link-progress-existing-containers.png":::
Use the following steps to build a Power BI report from Azure Cosmos DB data in
:::image type="content" source="./media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png" alt-text="Synapse Link successfully enabled on the selected containers." border="true" lightbox="./media/integrated-power-bi-synapse-link/synapse-link-existing-containers-registration-complete.png"::: 1. From the **Select workspace** tab, choose the Azure Synapse Analytics workspace and select **Next**. This will automatically create T-SQL views in Synapse Analytics, for the containers selected earlier. For more information on T-SQL views required to connect your Cosmos DB to Power BI, see [Prepare views](../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#3prepare-view) article.-
-1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and the **Database** details. Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create views. Make sure you have "Synapse administrator" permissions to this workspace.
+ > [!NOTE]
+ > Your Cosmos DB container properties will be represented as columns in T-SQL views, including deep nested JSON data. This is a quick start for your BI dashboards. These views will be available in your Synapse workspace/database; you can also use these exact same views in Synapse Workspace for data exploration, data science, data engineering, and so on. Note that advanced scenarios may demand more complex views or fine-tuning of these views for better performance. For more information, see the [best practices for Synapse Link when using Synapse serverless SQL pools](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#cosmos-db-performance-issues) article.
+
+1. You can either choose an existing workspace or create a new one. To select an existing workspace, provide the **Subscription**, **Workspace**, and **Database** details. The Azure portal will use your Azure AD credentials to automatically connect to your Synapse workspace and create T-SQL views. Make sure you have "Synapse administrator" permissions on this workspace.
:::image type="content" source="./media/integrated-power-bi-synapse-link/synapse-create-views.png" alt-text="Connect to Synapse Link workspace and create views." border="true" lightbox="./media/integrated-power-bi-synapse-link/synapse-create-views.png":::
cosmos-db Conceptual Resilient Sdk Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/conceptual-resilient-sdk-applications.md
+
+ Title: Designing resilient applications with Azure Cosmos DB SDKs
+description: Learn how to build resilient applications using the Azure Cosmos DB SDKs, and which error status codes are expected and safe to retry on.
++ Last updated : 03/25/2022++++
+# Designing resilient applications with Azure Cosmos DB SDKs
+
+When authoring client applications that interact with Azure Cosmos DB through any of the SDKs, it's important to understand a few key fundamentals. This article is a design guide to help you understand these fundamentals and design resilient applications.
+
+## Overview
+
+For a video overview of the concepts discussed in this article, see:
+
+> [!VIDEO https://www.youtube.com/embed/McZIQhZpvew?start=118]
+>
+
+## Connectivity modes
+
+Azure Cosmos DB SDKs can connect to the service in two [connectivity modes](sql-sdk-connection-modes.md). The .NET and Java SDKs can connect to the service in both Gateway and Direct mode, while the others can only connect in Gateway mode. Gateway mode uses the HTTP protocol and Direct mode uses the TCP protocol.
+
+Gateway mode is always used to fetch metadata such as the account, container, and routing information, regardless of which mode the SDK is configured to use. This information is cached in memory and is used to connect to the [service replicas](../partitioning-overview.md#replica-sets).
+
+In summary, for SDKs in Gateway mode, you can expect HTTP traffic, while for SDKs in Direct mode, you can expect a combination of HTTP and TCP traffic under different circumstances (like initialization, and fetching metadata or routing information).
+
+## Client instances and connections
+
+Regardless of the connectivity mode, it's critical to maintain a singleton instance of the SDK client per account per application. Connections, both HTTP and TCP, are scoped to the client instance. Most compute environments limit the number of connections that can be open at the same time, and when these limits are reached, connectivity will be affected.
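As a minimal sketch (not SDK-prescribed code), the singleton requirement can be enforced with Go's `sync.Once`; the `accountClient` type and `newAccountClient` constructor below are hypothetical stand-ins for a real SDK client and its constructor, such as `azcosmos.NewClientWithKey`:

```go
package main

import (
	"fmt"
	"sync"
)

// accountClient is a hypothetical stand-in for an SDK client type
// such as azcosmos.Client.
type accountClient struct{ endpoint string }

// newAccountClient stands in for the SDK's client constructor.
func newAccountClient(endpoint string) *accountClient {
	return &accountClient{endpoint: endpoint}
}

var (
	clientOnce sync.Once
	client     *accountClient
)

// getClient returns the one client instance shared by the whole
// application, so every operation reuses the same connection pools.
func getClient() *accountClient {
	clientOnce.Do(func() {
		client = newAccountClient("https://myaccount.documents.azure.com:443/")
	})
	return client
}

func main() {
	a, b := getClient(), getClient()
	fmt.Println(a == b) // prints "true": every caller gets the same instance
}
```

Creating the client lazily behind an accessor like this keeps the singleton guarantee even when multiple goroutines race to use it for the first time.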
+
+## Distributed applications and networks
+
+When you design distributed applications, there are three key components:
+
+* Your application and the environment it runs on.
+* The network, which includes any component between your application and the Azure Cosmos DB service endpoint.
+* The Azure Cosmos DB service endpoint.
+
+When failures occur, they often fall into one of these three areas, and it's important to understand that due to the distributed nature of the system, it's impractical to expect 100% availability for any of these components.
+
+Azure Cosmos DB has a [comprehensive set of availability SLAs](../high-availability.md#slas), but none of them is 100%. The network components that connect your application to the service endpoint can have transient hardware issues and lose packets. Even the compute environment where your application runs could have a CPU spike affecting operations. These failure conditions can affect the operations of the SDKs and normally surface as errors with particular codes.
+
+Your application should be resilient to a [certain degree](#when-to-contact-customer-support) of potential failures across these components by implementing [retry policies](#should-my-application-retry-on-errors) over the responses provided by the SDKs.
+
+## Should my application retry on errors?
+
+The short answer is **yes**. But not all errors make sense to retry on; some of the error or status codes aren't transient. The table below describes them in detail:
+
+| Status Code | Should add retry | Description |
+|-|-|-|
+| 400 | No | [Bad request](troubleshoot-bad-request.md) |
+| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | Optional | [Forbidden](troubleshoot-forbidden.md) |
+| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
+| 408 | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
+| 409 | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
+| 410 | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
+| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request. |
+| 413 | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
+| 429 | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
+| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
+| 500 | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
+| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
+
+In the table above, all the status codes marked with **Yes** should have some degree of retry coverage in your application.
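As an illustration (not the SDKs' built-in policy), the table above can be encoded in a small helper that a retry loop consults; treating the "Optional" 403 case as non-retriable here is an assumption your application may override:

```go
package main

import "fmt"

// shouldRetry reports whether an operation that failed with the given
// status code is a candidate for a client-side retry, following the
// retry table: 408, 410, 429, 449, and 503 are retriable.
func shouldRetry(statusCode int) bool {
	switch statusCode {
	case 408, 410, 429, 449, 503:
		return true
	default:
		// 400, 401, 404, 409, 412, 413, 500 (and, in this sketch, 403)
		// aren't transient and shouldn't be blindly retried.
		return false
	}
}

func main() {
	for _, code := range []int{400, 408, 409, 429, 449, 500, 503} {
		fmt.Printf("%d -> retry: %v\n", code, shouldRetry(code))
	}
}
```

A classification helper like this keeps the retry decision in one place, so the policy stays consistent across all the operations your application performs.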
+
+### HTTP 403
+
+The Azure Cosmos DB SDKs don't retry on HTTP 403 failures in general, but there are certain errors associated with HTTP 403 that your application might decide to react to. For example, if you receive an error indicating that [a Partition Key is full](troubleshoot-forbidden.md#partition-key-exceeding-storage), you might decide to alter the partition key of the document you're trying to write based on some business rule.
+
+### HTTP 429
+
+The Azure Cosmos DB SDKs will retry on HTTP 429 errors by default, following the client configuration and honoring the service's `x-ms-retry-after-ms` response header, waiting for the indicated time before retrying.
+
+When the SDK retries are exceeded, the error is returned to your application. Ideally, inspect the `x-ms-retry-after-ms` header in the response and use it as a hint to decide how long to wait before retrying the request. Another alternative is an exponential back-off algorithm, or configuring the client to extend the retries on HTTP 429.
+
+### HTTP 449
+
+The Azure Cosmos DB SDKs will retry on HTTP 449 with an incremental back-off during a fixed period of time to accommodate most scenarios.
+
+When the automatic SDK retries are exceeded, the error is returned to your application. HTTP 449 errors can be safely retried. Because of the highly concurrent nature of write operations, it's better to have a random back-off algorithm to avoid repeating the same degree of concurrency after a fixed interval.
+
+### Timeouts and connectivity related failures (HTTP 408/503)
+
+Network timeouts and connectivity failures are among the most common errors. The Azure Cosmos DB SDKs are themselves resilient and will retry timeouts and connectivity issues across the HTTP and TCP protocols if the retry is feasible:
+
+* For read operations, the SDKs will retry any timeout or connectivity related error.
+* For write operations, the SDKs will **not** retry because these operations are **not idempotent**. When a timeout occurs waiting for the response, it's not possible to know if the request reached the service.
+
+If the account has multiple regions available, the SDKs will also attempt a [cross-region retry](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol).
+
+Because of the nature of timeouts and connectivity failures, these might not appear in your [account metrics](../monitor-cosmos-db.md), as they only cover failures happening on the service side.
+
+It's recommended for applications to have their own retry policy for these scenarios and take into consideration how to resolve write timeouts. For example, retrying on a Create timeout can yield an HTTP 409 (Conflict) if the previous request did reach the service, but it would succeed if it didn't.
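One possible shape for such an application-side policy is sketched below; the `statusError` type is a hypothetical stand-in for the SDK's error type, and treating a 409 on a create retry as success assumes the conflicting item is the one this request tried to write:

```go
package main

import (
	"errors"
	"fmt"
)

// statusError is a hypothetical stand-in for the SDK error type;
// only the status code matters for this sketch.
type statusError struct{ code int }

func (e *statusError) Error() string { return fmt.Sprintf("status %d", e.code) }

// createWithRetry retries an ambiguous create after a timeout. If a
// retry comes back 409 Conflict, the first attempt actually reached
// the service, so the item exists and the create is treated as done.
func createWithRetry(create func() error, maxAttempts int) error {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		err := create()
		if err == nil {
			return nil
		}
		var se *statusError
		if errors.As(err, &se) {
			switch se.code {
			case 408: // timeout: outcome unknown, safe to try again
				lastErr = err
				continue
			case 409: // conflict on retry: the earlier create landed
				return nil
			}
		}
		return err // non-retriable failure
	}
	return lastErr
}

func main() {
	calls := 0
	// Simulate: the first call times out, the retry reports 409 Conflict.
	create := func() error {
		calls++
		if calls == 1 {
			return &statusError{code: 408}
		}
		return &statusError{code: 409}
	}
	fmt.Println(createWithRetry(create, 3)) // prints "<nil>": treated as success
}
```

The same pattern does not transfer to Replace or Delete, where a conflict or not-found on retry needs its own interpretation based on your application's semantics.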
+
+## Do retries affect my latency?
+
+From the client perspective, any retries will affect the end-to-end latency of an operation. When your application's P99 latency is affected, it's important to understand which retries are happening and how to address them.
+
+Azure Cosmos DB SDKs provide detailed information in their logs and diagnostics that can help identify which retries are taking place. For more information, see [how to collect .NET SDK diagnostics](troubleshoot-dot-net-sdk-slow-request.md#capture-diagnostics) and [how to collect Java SDK diagnostics](troubleshoot-java-sdk-v4-sql.md#capture-the-diagnostics).
+
+## What about regional outages?
+
+The Azure Cosmos DB SDKs cover regional availability and can perform retries on other regions of the account. Refer to the [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md) article to understand which scenarios involve other regions.
+
+## When to contact customer support
+
+Before contacting customer support, go through these steps:
+
+* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
+* Is the P99 latency affected?
+* Are the failures related to [error codes](#should-my-application-retry-on-errors) that my application should retry on and does the application cover such retries?
+* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
+* Have you gone through the related troubleshooting documents in the above table to rule out a problem on the application environment?
+
+If all the application instances are affected, or the percentage of affected operations is outside service SLAs, or affecting your own application SLAs and P99s, contact customer support.
+
+## Next steps
+
+* Learn about [multiregional environments retry scenarios and configurations](troubleshoot-sdk-availability.md)
+* Review the [Availability SLAs](../high-availability.md#slas)
+* Use the latest [.NET SDK](sql-api-sdk-dotnet-standard.md)
+* Use the latest [Java SDK](sql-api-sdk-java-v4.md)
+* Use the latest [Python SDK](sql-api-sdk-python.md)
+* Use the latest [Node SDK](sql-api-sdk-node.md)
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
> * [Spark v3 connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this article to install the .NET V4 (Azure.Cosmos) package and build an app. Then, try out the example code for basic create, read, update, and delete (CRUD) operations on the data stored in Azure Cosmos DB.
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet.md
> * [Spark v3 connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this doc to install the .NET package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
+
+ Title: 'Quickstart: Build a Go app using Azure Cosmos DB SQL API account'
+description: Gives a Go code sample you can use to connect to and query the Azure Cosmos DB SQL API
+++
+ms.devlang: golang
+ Last updated : 3/4/2021+++
+# Quickstart: Build a Go application using an Azure Cosmos DB SQL API account
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
++
+In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage a Cosmos DB SQL API account.
+
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](/azure/cosmos-db/introduction).
+
+## Prerequisites
+
+- A Cosmos DB account. Your options are:
+ * With an active Azure subscription:
+ * [Create an Azure free account](https://azure.microsoft.com/free) or use your existing subscription
+ * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
+ * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
+ * Without an active Azure subscription:
+ * [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/), a test environment that lasts for 30 days.
+ * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
+- [Go 1.16 or higher](https://golang.org/dl/)
+- [Azure CLI](/cli/azure/install-azure-cli)
++
+## Getting started
+
+For this quickstart, you'll need to create an Azure resource group and a Cosmos DB account.
+
+Run the following commands to create an Azure resource group:
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+```
+
+Next create a Cosmos DB account by running the following command:
+
+```azurecli
+az cosmosdb create --name my-cosmosdb-account --resource-group myResourceGroup
+```
+
+### Install the package
+
+Use the `go get` command to install the [azcosmos](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos) package.
+
+```bash
+go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
+```
+
+## Key concepts
+
+* A `Client` is a connection to an Azure Cosmos DB account.
+* Azure Cosmos DB accounts can have multiple `databases`. A `DatabaseClient` allows you to create, read, and delete databases.
+* A database within an Azure Cosmos DB account can have multiple `containers`. A `ContainerClient` allows you to create, read, update, and delete containers, and to modify throughput provisioning.
+* Information is stored as items inside containers, and the container client allows you to create, read, update, and delete items in containers.
+
+## Code examples
+
+**Authenticate the client**
+
+```go
+ var endpoint = "<azure_cosmos_uri>"
+ var key = "<azure_cosmos_primary_key>"
+
+ cred, err := azcosmos.NewKeyCredential(key)
+ if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+ }
+
+ // Create a CosmosDB client
+ client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+ if err != nil {
+ log.Fatal("Failed to create cosmos client: ", err)
+ }
+
+ // Create database client
+ databaseClient, err := client.NewDatabase("<databaseName>")
+ if err != nil {
+ log.Fatal("Failed to create database client: ", err)
+ }
+
+ // Create container client
+ containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
+ if err != nil {
+ log.Fatal("Failed to create a container client: ", err)
+ }
+```
+
+**Create a Cosmos database**
+
+```go
+databaseProperties := azcosmos.DatabaseProperties{ID: "<databaseName>"}
+
+databaseResp, err := client.CreateDatabase(context.TODO(), databaseProperties, nil)
+if err != nil {
+ panic(err)
+}
+```
+
+**Create a container**
+
+```go
+database, err := client.NewDatabase("<databaseName>") //returns struct that represents a database.
+if err != nil {
+ panic(err)
+}
+
+properties := azcosmos.ContainerProperties{
+ ID: "ToDoItems",
+ PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
+ Paths: []string{"/category"},
+ },
+}
+
+resp, err := database.CreateContainer(context.TODO(), properties, nil)
+if err != nil {
+ panic(err)
+}
+```
+
+**Create an item**
+
+```go
+container, err := client.NewContainer("<databaseName>", "<containerName>")
+if err != nil {
+ panic(err)
+}
+
+pk := azcosmos.NewPartitionKeyString("personal") //specifies the value of the partition key
+
+item := map[string]interface{}{
+ "id": "1",
+ "category": "personal",
+ "name": "groceries",
+ "description": "Pick up apples and strawberries",
+ "isComplete": false,
+}
+
+marshalled, err := json.Marshal(item)
+if err != nil {
+ panic(err)
+}
+
+itemResponse, err := container.CreateItem(context.TODO(), pk, marshalled, nil)
+if err != nil {
+ panic(err)
+}
+```
+
+**Read an item**
+
+```go
+getResponse, err := container.ReadItem(context.TODO(), pk, "1", nil)
+if err != nil {
+ panic(err)
+}
+
+var getResponseBody map[string]interface{}
+err = json.Unmarshal([]byte(getResponse.Value), &getResponseBody)
+if err != nil {
+ panic(err)
+}
+
+fmt.Println("Read item with Id 1:")
+
+for key, value := range getResponseBody {
+ fmt.Printf("%s: %v\n", key, value)
+}
+```
+
+**Delete an item**
+
+```go
+delResponse, err := container.DeleteItem(context.TODO(), pk, "1", nil)
+
+if err != nil {
+ panic(err)
+}
+```
+
+## Run the code
+
+To authenticate, you need to pass the Azure Cosmos account credentials to the application.
+
+Get your Azure Cosmos account credentials by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos account.
+
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You'll add these values to environment variables in the next step.
+
+After you've copied the **URI** and **PRIMARY KEY** of your account, save them to new environment variables on the local machine running the application.
+
+Use the values copied from the Azure portal to set the following environment variables:
+
+# [Bash](#tab/bash)
+
+```bash
+export AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
+export AZURE_COSMOS_PRIMARY_KEY=<Your_COSMOS_PRIMARY_KEY>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$env:AZURE_COSMOS_URL=<Your_AZURE_COSMOS_URI>
+$env:AZURE_COSMOS_PRIMARY_KEY=<Your_COSMOS_PRIMARY_KEY>
+```
+++
+Create a new Go module by running the following command:
+
+```bash
+go mod init azcosmos
+```
+
+Create a new file named `main.go` and copy the desired code from the sample sections above.
+
+Run the following command to execute the app:
+
+```bash
+go run main.go
+```
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, a database, a container, and an item. Now import more data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java.md
> * [Spark v3 connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
+ In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
> * [Spark v3 connector](create-sql-api-spark.md) > - [Node.js](create-sql-api-nodejs.md) > - [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
> * [Spark v3 connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and from Visual Studio Code with a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
> * [Spark 3 OLTP connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spring-data.md
> * [Spark v3 connector](create-sql-api-spark.md) > * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
+> * [Go](create-sql-api-go.md)
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-changefeed-functions.md
description: Common issues, workarounds, and diagnostic steps, when using the Az
Previously updated : 12/29/2020 Last updated : 03/28/2022
This scenario can have multiple causes and all of them should be checked:
If it's the latter, there could be some delay between the changes being stored and the Azure Function picking them up. This is because internally, when the trigger checks for changes in your Azure Cosmos container and finds none pending to be read, it will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds). 3. Your Azure Cosmos container might be [rate-limited](../request-units.md). 4. You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
+5. The speed at which your Trigger receives new changes is dictated by the speed at which you process them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md); if your Function is slow, it will take longer for your Trigger to receive new changes. If you see a recent increase in Duration, a recent code change might be the cause. If operations arrive on your Azure Cosmos container faster than your Trigger can process them, the Trigger will keep lagging behind. Investigate which operation in the Function's code is the most time consuming and optimize it.
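As a sketch of the `feedPollDelay` setting described in step 2, a Cosmos DB trigger binding in a function's `function.json` might look like the following (the connection, database, and container names are illustrative placeholders, not values from this article):

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "MyCosmosConnection",
      "databaseName": "mydb",
      "collectionName": "mycontainer",
      "leaseCollectionName": "leases",
      "feedPollDelay": 10000
    }
  ]
}
```

Here `feedPollDelay` is expressed in milliseconds, so `10000` raises the sleep between empty polls from the 5-second default to 10 seconds, trading change-detection latency for lower RU consumption.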
### Some changes are repeated in my Trigger
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v
### Check the portal metrics Checking the [portal metrics](../monitor-cosmos-db.md) will help determine if it's a client-side issue or if there is an issue with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429) which means the request is getting throttled then check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.
-## Retry Logic <a id="retry-logics"></a>
-Cosmos DB SDK on any IO failure will attempt to retry the failed operation if retry in the SDK is feasible. Having a retry in place for any failure is a good practice but specifically handling/retrying write failures is a must. It's recommended to use the latest SDK as retry logic is continuously being improved.
-
-1. Read and query IO failures will get retried by the SDK without surfacing them to the end user.
-2. Writes (Create, Upsert, Replace, Delete) are "not" idempotent and hence SDK cannot always blindly retry the failed write operations. It is required that user's application logic to handle the failure and retry.
-3. [Trouble shooting sdk availability](troubleshoot-sdk-availability.md) explains retries for multi-region Cosmos DB accounts.
-
-### Retry design
-
-The application should be designed to retry on any exception unless it is a known issue where retrying will not help. For example, the application should retry on 408 request timeouts, this timeout is possibly transient so a retry may result in success. The application should not retry on 400s, this typically means that there is an issue with the request that must first be resolved. Retrying on the 400 will not fix the issue and will result in the same failure if retried again. The table below shows known failures and which ones to retry on.
-
-## Common error status codes <a id="error-codes"></a>
-
-| Status Code | Retryable | Description |
-|-|-|-|
-| 400 | No | Bad request (i.e. invalid json, incorrect headers, incorrect partition key in header)|
-| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | No | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
-| 409 | No | Conflict failure is when the ID provided for a resource on a write operation has been taken by an existing resource. Use another ID for the resource to resolve this issue as ID must be unique within all documents with the same partition key value. |
-| 410 | Yes | Gone exceptions (transient failure that should not violate SLA) |
-| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an optimistic concurrency error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
-| 413 | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
-| 429 | Yes | It is safe to retry on a 429. This can be avoided by following the link for [too many requests](troubleshoot-request-rate-too-large.md).|
-| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
-| 500 | Yes | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
+## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
+See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) to learn how to design resilient applications and understand the retry semantics of the SDK.
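The status-code guidance in the table above can be condensed into a small helper that classifies failures; this is an illustrative sketch (the function name and structure are not part of any SDK):

```python
# Transient status codes that are generally safe to retry, per the table above.
RETRYABLE = {408, 410, 429, 449, 500, 503}
# Client-side errors where retrying will reproduce the same failure.
NON_RETRYABLE = {400, 401, 403, 404, 409, 412, 413}

def should_retry(status_code: int) -> bool:
    """Return True if a failed request with this HTTP status is worth retrying."""
    if status_code in RETRYABLE:
        return True
    if status_code in NON_RETRYABLE:
        return False
    # Unknown codes: be conservative and surface the error instead of retrying.
    return False
```

Note that a 412 precondition failure is still recoverable: the application should re-read the latest version of the resource and retry with the updated eTag, which is a new request rather than a blind retry.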
### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
filteredFamilies.byPage().toIterable().forEach(familyFeedResponse -> {
```
-## Retry Logic <a id="retry-logics"></a>
-Cosmos DB SDK on any IO failure will attempt to retry the failed operation if retry in the SDK is feasible. Having a retry in place for any failure is a good practice but specifically handling/retrying write failures is a must. It's recommended to use the latest SDK as retry logic is continuously being improved.
-
-1. Read and query IO failures will get retried by the SDK without surfacing them to the end user.
-2. Writes (Create, Upsert, Replace, Delete) are "not" idempotent and hence SDK cannot always blindly retry the failed write operations. It is required that user's application logic to handle the failure and retry.
-3. [Trouble shooting sdk availability](troubleshoot-sdk-availability.md) explains retries for multi-region Cosmos DB accounts.
-
-## Retry design
-
-The application should be designed to retry on any exception unless it is a known issue where retrying will not help. For example, the application should retry on 408 request timeouts, this timeout is possibly transient so a retry may result in success. The application should not retry on 400s, this typically means that there is an issue with the request that must first be resolved. Retrying on the 400 will not fix the issue and will result in the same failure if retried again. The table below shows known failures and which ones to retry on.
-
-## Common error status codes <a id="error-codes"></a>
-
-| Status Code | Retryable | Description |
-|-|-|-|
-| 400 | No | Bad request (i.e. invalid json, incorrect headers, incorrect partition key in header)|
-| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | No | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | Yes | [Request timed out](troubleshoot-request-timeout-java-sdk-v4-sql.md) |
-| 409 | No | Conflict failure is when the ID provided for a resource on a write operation has been taken by an existing resource. Use another ID for the resource to resolve this issue as ID must be unique within all documents with the same partition key value. |
-| 410 | Yes | Gone exceptions (transient failure that should not violate SLA) |
-| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an optimistic concurrency error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
-| 413 | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
-| 429 | Yes | It is safe to retry on a 429. This can be avoided by following the link for [too many requests](troubleshoot-request-rate-too-large.md).|
-| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
-| 500 | Yes | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable-java-sdk-v4-sql.md) |
+## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
+See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) to learn how to design resilient applications and understand the retry semantics of the SDK.
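For the transient failures listed above (408, 410, 429, 449, 503), a common application-side pattern is a retry loop with exponential backoff and jitter. The sketch below is illustrative only; `TransientError` is a stand-in for whatever exception type your SDK surfaces, not an SDK class:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient failure (e.g., HTTP 408/429/503)."""

def call_with_retries(operation, max_attempts=5, base_delay=0.1):
    """Invoke operation(), retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the failure to the caller.
            # Exponential backoff with jitter: base, 2x, 4x ... plus random noise
            # to avoid synchronized retries from many clients.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Keep in mind the caveat above: only reads and idempotent operations can be blindly wrapped like this; non-idempotent writes need application logic that checks whether the operation actually took effect before retrying.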
## <a name="common-issues-workarounds"></a>Common issues and workarounds
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
You can copy file from SharePoint Online by using **Web activity** to authentica
3. Chain with a **Copy activity** with HTTP connector as source to copy SharePoint Online file content:
    - HTTP linked service:
- - **Base URL**: `https://[site-url]/_api/web/GetFileByServerRelativeUrl('[relative-path-to-file]')/$value`. Replace the site URL and relative path to file. Sample relative path to file as `/sites/site2/Shared Documents/TestBook.xlsx`.
+ - **Base URL**: `https://[site-url]/_api/web/GetFileByServerRelativeUrl('[relative-path-to-file]')/$value`. Replace the site URL and relative path to file. Make sure to include the SharePoint site URL along with the Domain name, such as `https://[sharepoint-domain-name].sharepoint.com/sites/[sharepoint-site]/_api/web/GetFileByServerRelativeUrl('/sites/[sharepoint-site]/[relative-path-to-file]')/$value`.
    - **Authentication type:** Anonymous *(to use the Bearer token configured in copy activity source later)*
    - Dataset: choose the format you want. To copy file as-is, select "Binary" type.
    - Copy activity source:
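The Base URL construction described above can be sketched in shell; the domain, site, and file path below are illustrative placeholders, and the Bearer token would come from the Web activity step:

```shell
# Illustrative values: replace with your SharePoint domain, site, and file path.
SITE_URL="https://contoso.sharepoint.com/sites/site2"
FILE_PATH="/sites/site2/Shared Documents/TestBook.xlsx"

# Build the GetFileByServerRelativeUrl endpoint used as the HTTP Base URL.
BASE_URL="${SITE_URL}/_api/web/GetFileByServerRelativeUrl('${FILE_PATH}')/\$value"
echo "$BASE_URL"

# With a Bearer token from the Web activity, the file content could be
# fetched for a quick sanity check (token value is a placeholder):
# curl -H "Authorization: Bearer $TOKEN" "$BASE_URL" -o TestBook.xlsx
```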
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Property | Description | Allowed values | Required
dataflow | The reference to the Data Flow being executed | DataFlowReference | Yes
integrationRuntime | The compute environment the data flow runs on. If not specified, the auto-resolve Azure integration runtime will be used. | IntegrationRuntimeReference | No
compute.coreCount | The number of cores used in the spark cluster. Can only be specified if the auto-resolve Azure Integration runtime is used | 8, 16, 32, 48, 80, 144, 272 | No
-compute.computeType | The type of compute used in the spark cluster. Can only be specified if the auto-resolve Azure Integration runtime is used | "General", "ComputeOptimized", "MemoryOptimized" | No
+compute.computeType | The type of compute used in the spark cluster. Can only be specified if the auto-resolve Azure Integration runtime is used | "General", "MemoryOptimized" | No
staging.linkedService | If you're using an Azure Synapse Analytics source or sink, specify the storage account used for PolyBase staging.<br/><br/>If your Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage). Also learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.<br/> | LinkedServiceReference | Only if the data flow reads or writes to an Azure Synapse Analytics
staging.folderPath | If you're using an Azure Synapse Analytics source or sink, the folder path in blob storage account used for PolyBase staging | String | Only if the data flow reads or writes to Azure Synapse Analytics
traceLevel | Set logging level of your data flow activity execution | Fine, Coarse, None | No
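Putting the properties above together, an Execute Data Flow activity definition might look like the following sketch (reference names such as `MyDataFlow` and `MyStagingStorage` are placeholders):

```json
{
  "name": "MyExecuteDataFlowActivity",
  "type": "ExecuteDataFlow",
  "typeProperties": {
    "dataflow": {
      "referenceName": "MyDataFlow",
      "type": "DataFlowReference"
    },
    "compute": {
      "coreCount": 8,
      "computeType": "General"
    },
    "staging": {
      "linkedService": {
        "referenceName": "MyStagingStorage",
        "type": "LinkedServiceReference"
      },
      "folderPath": "staging"
    },
    "traceLevel": "Fine"
  }
}
```

Because `compute.coreCount` and `compute.computeType` are set here, this activity must run on the auto-resolve Azure integration runtime (no `integrationRuntime` reference is specified), per the table above.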
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
## Create a linked service and test the connection
-1. Go to the **Manage** tab and then go to the **Managed private endpoints** section.
+1. Go to the **Manage** tab and then go to the **Linked services** section.
2. Select + **New** under **Linked Service**.
3. Select the **SQL Server** tile from the list and select **Continue**.
databox-online Azure Stack Edge Pro R Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-connect.md
Previously updated : 02/22/2022 Last updated : 03/28/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
Before you configure and set up your Azure Stack Edge Pro R device, make sure th
1. Configure the Ethernet adapter on your computer to connect to the Azure Stack Edge Pro R device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
![Backplane of a cabled device](./media/azure-stack-edge-pro-r-deploy-install/backplane-cabled.png)
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
A standby region is set up for failover scenarios.
Azure Traffic Manager routes incoming requests to Application Gateway in one of the regions. During normal operations, it routes requests to Application Gateway in the active region. If that region becomes unavailable, Traffic Manager fails over to Application Gateway in the standby region.
-All traffic from the internet destined to the web application is routed to the [Application Gateway public IP address](../application-gateway/application-gateway-web-app-overview.md) via Traffic Manager. In this scenario, the app service (web app) itself is not directly externally facing and is protected by Application Gateway.
+All traffic from the internet destined to the web application is routed to the [Application Gateway public IP address](../application-gateway/configure-web-app.md) via Traffic Manager. In this scenario, the app service (web app) itself is not directly externally facing and is protected by Application Gateway.
We recommend that you configure the Application Gateway WAF SKU (prevent mode) to help protect against Layer 7 (HTTP/HTTPS/WebSocket) attacks. Additionally, web apps are configured to [accept only traffic from the Application Gateway](https://azure.microsoft.com/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/) IP address.
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Use the security recommendations described in this article to assess the machine
Microsoft Defender for Cloud includes two recommendations that check whether the configuration of Windows and Linux machines in your environment meet the Azure security baseline configurations:
-- For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6) compares the configuration with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md).
-- For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda) compares the configuration with the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md).
+- For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda) compares the configuration with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md).
+- For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6) compares the configuration with the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md).
These recommendations use the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Azure Security Benchmark](/security/benchmark/azure/overview).
To compare machines with the OS security baselines:
1. From Defender for Cloud's portal pages, open the **Recommendations** page.
1. Select the relevant recommendation:
- - For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6)
- - For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda)
+ - For **Windows** machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda)
+ - For **Linux** machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6)
:::image type="content" source="media/apply-security-baseline/recommendations-baseline.png" alt-text="The two recommendations for comparing the OS configuration of machines with the relevant Azure security baseline." lightbox="media/apply-security-baseline/recommendations-baseline.png":::
To compare machines with the OS security baselines:
## FAQ - Hardening an OS according to the security baseline
-- [How do I deploy the prerequisites for the security configuration recommendations?](#how-do-i-deploy-the-prerequisites-for-the-security-configuration-recommendations)
-- [Why is a machine shown as not applicable?](#why-is-a-machine-shown-as-not-applicable)
+- [Apply Azure security baselines to machines](#apply-azure-security-baselines-to-machines)
+ - [Availability](#availability)
+ - [What are the hardening recommendations?](#what-are-the-hardening-recommendations)
+ - [Compare machines in your subscriptions with the OS security baselines](#compare-machines-in-your-subscriptions-with-the-os-security-baselines)
+ - [FAQ - Hardening an OS according to the security baseline](#faqhardening-an-os-according-to-the-security-baseline)
+ - [How do I deploy the prerequisites for the security configuration recommendations?](#how-do-i-deploy-the-prerequisites-for-the-security-configuration-recommendations)
+ - [Why is a machine shown as not applicable?](#why-is-a-machine-shown-as-not-applicable)
+ - [Next steps](#next-steps)
### How do I deploy the prerequisites for the security configuration recommendations?
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers --++ zone_pivot_groups: k8s-host Previously updated : 03/15/2022 Last updated : 03/27/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
The following describes the components necessary in order to receive the full pr
### What are the options to enable the new plan at scale? We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
-### Does Microsoft Defender for Containers support AKS with virtual machines?
-No. If your cluster is deployed on an Azure Kubernetes Service (AKS) virtual machines, it's not recommended to enable the Microsoft Defender for Containers plan.
+### Does Microsoft Defender for Containers support AKS without virtual machine scale sets (default)?
+No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes are supported.
### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection? No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension is not needed and may result in additional charges.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Title: Using Microsoft Defender for Endpoint in Microsoft Defender for Cloud to protect native, on-premises, and AWS machines. description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multi-cloud machines.++ Last updated 03/22/2022
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
| Pricing: | Requires [Microsoft Defender for servers](defender-for-servers-introduction.md) | | Supported environments: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) | | Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts |
## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 03/15/2022 Last updated : 03/27/2022 zone_pivot_groups: connect-aws-accounts
If you have any existing connectors created with the classic cloud connectors ex
### Create a new connector
-Ensure that all relevant pre-requisites are enabled in order to use all of the available capabilities of Defender for servers on AWS
-Also, the Defender for Servers plan should be enabled on the subscription.
-
-Deploy Azure Arc on your EC2 instances to use as the vehicle to Azure. You can deploy Azure Arc on your EC2 instance in 3 different ways:
-- (Recommended) Use the Defender for Servers Arc auto-provisioning process. Azure Arc is enabled by default in the onboarding process. The process requires owner permissions on the subscription.
-- Manual installation through Arc for servers.
-- Through a recommendation, which will appear on the Microsoft Defender for Cloud's Recommendations page.
-
-Additional extensions should be enabled on Arc-connected machines. These extensions are currently configured on the subscription level, and will be applied to all the multi-cloud accounts, and projects (from both AWS and GCP)
- - Microsoft Defender for Endpoint
- - VA solution (TVM/ Qualys)
- - LA agent on Arc machines (Ensure that the selected workspace has the security solution installed)
- **To create a new connector**: 1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 03/14/2022 Last updated : 03/27/2022 zone_pivot_groups: connect-gcp-accounts
Microsoft Defender for Containers brings threat detection, and advanced defences
- Defender for Cloud recommendations, for per cluster installation, which will appear on the Microsoft Defender for Cloud's Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manual installation for [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md), and [extensions](../azure-arc/kubernetes/extensions.md).
-If you choose to disable all of available configuration options, no agents, or components will be deployed to your clusters. Learn more about the [features availability](supported-machines-endpoint-solutions-clouds-containers.md).
+> [!Note]
+> If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. Learn more about [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
**To configure the Containers plan**:
defender-for-cloud Security Center Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-planning-and-operations-guide.md
After initial configuration and application of Defender for Cloud recommendation
The Defender for Cloud Overview provides a unified view of security across all your Azure resources and any non-Azure resources you have connected. The example below shows an environment with many issues to be addressed:
-![dashboard.](./media/security-center-planning-and-operations-guide/security-center-planning-and-operations-guide-fig11.png)
+![dashboard.](./media/security-center-planning-and-operations-guide/microsoft-defender-for-cloud-planning-and-operations-guide-fig-11.png)
> [!NOTE] > Defender for Cloud will not interfere with your normal operational procedures, it will passively monitor your deployments and provide recommendations based on the security policies you enabled.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 03/24/2022 Last updated : 03/27/2022
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-The **tabs** below show the features of Microsoft Defender for Cloud that are available by environment.
+The **tabs** below show the features that are available, by environment, for Microsoft Defender for Containers.
## Supported features by environment
defender-for-iot Agent Based Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/agent-based-recommendations.md
Title: Agent based recommendations description: Learn about the concept of security recommendations and how they are used for Defender for IoT devices. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Security recommendations for IoT devices
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Investigate security recommendations](tutorial-investigate-security-recommendations.md).
+>
+
+Defender for IoT scans your Azure resources and IoT devices and provides security recommendations to reduce your attack surface. Security recommendations are actionable and aim to aid customers in complying with security best practices.
Device recommendations provide insights and suggestions to improve device securi
| Severity | Name | Data Source | Description |
|--|--|--|--|
-| Medium | Open Ports on device | Classic Defender-IoT-micro-agent| A listening endpoint was found on the device. |
-| Medium | Permissive firewall policy found in one of the chains. | Classic Defender-IoT-micro-agent| Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
-| Medium | Permissive firewall rule in the input chain was found | Classic Defender-IoT-micro-agent| A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Permissive firewall rule in the output chain was found | Classic Defender-IoT-micro-agent| A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Operation system baseline validation has failed | Classic Defender-IoT-micro-agent| Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
+| Medium | Open Ports on device | Legacy Defender-IoT-micro-agent| A listening endpoint was found on the device. |
+| Medium | Permissive firewall policy found in one of the chains. | Legacy Defender-IoT-micro-agent| Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
+| Medium | Permissive firewall rule in the input chain was found | Legacy Defender-IoT-micro-agent| A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Permissive firewall rule in the output chain was found | Legacy Defender-IoT-micro-agent| A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Operating system baseline validation has failed | Legacy Defender-IoT-micro-agent| Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
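The permissive-firewall recommendations above flag ACCEPT rules that match a wide range of addresses or ports. As a rough illustration only (this is not the agent's actual detection logic, and the port-range width threshold is hypothetical), a sketch that scans `iptables -S`-style rule strings for such patterns:

```python
def find_permissive_rules(iptables_rules):
    """Flag ACCEPT rules that match any source or a wide destination port range."""
    flagged = []
    for rule in iptables_rules:
        if "-j ACCEPT" not in rule:
            continue
        tokens = rule.split()
        # No source restriction, or an explicit any-source match, counts as permissive.
        wide_source = "-s" not in tokens or "0.0.0.0/0" in tokens
        wide_ports = False
        if "--dport" in tokens:
            dport = tokens[tokens.index("--dport") + 1]
            if ":" in dport:  # port range, e.g. 1024:65535
                low, high = map(int, dport.split(":"))
                wide_ports = (high - low) > 1000  # hypothetical width threshold
        if wide_source or wide_ports:
            flagged.append(rule)
    return flagged

rules = [
    "-A INPUT -s 10.0.0.0/24 -p tcp --dport 22 -j ACCEPT",   # scoped: not flagged
    "-A INPUT -p tcp --dport 1024:65535 -j ACCEPT",          # any source, wide range: flagged
]
print(find_permissive_rules(rules))
```

A default-deny policy with narrowly scoped allow rules, as the recommendation describes, would produce no flagged rules under this check.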
### Agent based operational recommendations
Operational recommendations provide insights and suggestions to improve security
| Severity | Name | Data Source | Description |
|--|--|--|--|
-| Low | Agent sends unutilized messages | Classic Defender-IoT-micro-agent | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
-| Low | Security twin configuration not optimal | Classic Defender-IoT-micro-agent | Security twin configuration is not optimal. |
-| Low | Security twin configuration conflict | Classic Defender-IoT-micro-agent | Conflicts were identified in the security twin configuration. |
+| Low | Agent sends unutilized messages | Legacy Defender-IoT-micro-agent | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
+| Low | Security twin configuration not optimal | Legacy Defender-IoT-micro-agent | Security twin configuration is not optimal. |
+| Low | Security twin configuration conflict | Legacy Defender-IoT-micro-agent | Conflicts were identified in the security twin configuration. |
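The "Agent sends unutilized messages" recommendation fires when 10% or more of security messages over 24 hours fall under 4 KB. A minimal sketch of that ratio check (an illustration of the stated threshold, not the service's implementation):

```python
def undersized_message_ratio(message_sizes_bytes, threshold_bytes=4 * 1024):
    """Return the fraction of messages smaller than the 4 KB threshold."""
    if not message_sizes_bytes:
        return 0.0
    small = sum(1 for size in message_sizes_bytes if size < threshold_bytes)
    return small / len(message_sizes_bytes)

sizes = [512, 8192, 2048, 16384]        # message sizes sent in the last 24 hours
ratio = undersized_message_ratio(sizes)  # 2 of 4 messages are under 4 KB -> 0.5
if ratio >= 0.10:
    print(f"{ratio:.0%} of messages were under 4 KB; consider batching events")
```

Batching several security events into one larger message keeps the ratio below the 10% trigger.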
## Next steps
defender-for-iot Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/agent-based-security-alerts.md
Title: Classic agent based security alerts
-description: Learn about the classic version of Defender for IoT's security alerts, and recommended remediation using Defender for IoT device's features, and service.
+ Title: Legacy agent based security alerts
+description: Learn about the legacy version of Defender for IoT's security alerts, and recommended remediation using Defender for IoT device's features, and service.
Last updated 11/09/2021
-# Classic Defender for IoT devices security alerts
+# Legacy Defender for IoT devices security alerts
+
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our newer micro-agent experience. For more information, see [Tutorial: Investigate security alerts](tutorial-investigate-security-alerts.md).
+>
+> As of **March 31, 2022**, the legacy agent is sunset and no new features are being developed. The legacy agent will be fully retired on **March 31, 2023**, at which point we will no longer provide bug fixes or other support for the legacy agent.
+>
+ Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity. In addition, you can create custom alerts based on your knowledge of expected device behavior.
For more information, see [customizable alerts](concept-customizable-security-al
| Name | Severity | Data Source | Description | Suggested remediation steps |
|--|--|--|--|--|
| **High** severity | | | | |
-| Binary Command Line | High | Classic Defender-IoT-micro-agent | LA Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
-| Disable firewall | High | Classic Defender-IoT-micro-agent | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
-| Port forwarding detection | High | Classic Defender-IoT-micro-agent | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible attempt to disable Auditd logging detected | High | Classic Defender-IoT-micro-agent | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalated the incident to your information security team. |
-| Reverse shells | High | Classic Defender-IoT-micro-agent | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Successful Bruteforce attempt | High | Classic Defender-IoT-micro-agent | Multiple unsuccessful login attempts were identified, followed by a successful login. Attempted Brute force attack may have succeeded on the device. | Review SSH Brute force alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out password reset for compromised accounts.<br> Investigate and remediate (if found) devices for malware. |
-| Successful local login | High | Classic Defender-IoT-micro-agent | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. |
-| Web shell | High | Classic Defender-IoT-micro-agent | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Binary Command Line | High | Legacy Defender-IoT-micro-agent | A Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
+| Disable firewall | High | Legacy Defender-IoT-micro-agent | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
+| Port forwarding detection | High | Legacy Defender-IoT-micro-agent | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible attempt to disable Auditd logging detected | High | Legacy Defender-IoT-micro-agent | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalate the incident to your information security team. |
+| Reverse shells | High | Legacy Defender-IoT-micro-agent | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Successful Bruteforce attempt | High | Legacy Defender-IoT-micro-agent | Multiple unsuccessful login attempts were identified, followed by a successful login. Attempted Brute force attack may have succeeded on the device. | Review SSH Brute force alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out password reset for compromised accounts.<br> Investigate and remediate (if found) devices for malware. |
+| Successful local login | High | Legacy Defender-IoT-micro-agent | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. |
+| Web shell | High | Legacy Defender-IoT-micro-agent | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
| **Medium** severity | | | | |
-| Behavior similar to common Linux bots detected | Medium | Classic Defender-IoT-micro-agent | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to Fairware ransomware detected | Medium | Classic Defender-IoT-micro-agent | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to ransomware detected | Medium | Classic Defender-IoT-micro-agent | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Crypto coin miner container image detected | Medium | Classic Defender-IoT-micro-agent | Container detecting running known digital currency mining images. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
-| Crypto coin miner image | Medium | Classic Defender-IoT-micro-agent | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the nohup command | Medium | Classic Defender-IoT-micro-agent | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the useradd command | Medium | Classic Defender-IoT-micro-agent | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Exposed Docker daemon by TCP socket | Medium | Classic Defender-IoT-micro-agent | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. Default Docker configuration enables full access to the Docker daemon, by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Failed local login | Medium | Classic Defender-IoT-micro-agent | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
-| File downloads from a known malicious source detected | Medium | Classic Defender-IoT-micro-agent | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| htaccess file access detected | Medium | Classic Defender-IoT-micro-agent | Analysis of host data detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
-| Known attack tool | Medium | Classic Defender-IoT-micro-agent | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| IoT agent attempted and failed to parse the module twin configuration | Medium | Classic Defender-IoT-micro-agent | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object | Validate your module twin configuration against the IoT agent configuration schema, fix all mismatches. |
-| Local host reconnaissance detected | Medium | Classic Defender-IoT-micro-agent | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
-| Mismatch between script interpreter and file extension | Medium | Classic Defender-IoT-micro-agent | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible backdoor detected | Medium | Classic Defender-IoT-micro-agent | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential loss of data detected | Medium | Classic Defender-IoT-micro-agent | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential overriding of common files | Medium | Classic Defender-IoT-micro-agent | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Privileged container detected | Medium | Classic Defender-IoT-micro-agent | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
-| Removal of system logs files detected | Medium | Classic Defender-IoT-micro-agent | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Space after filename | Medium | Classic Defender-IoT-micro-agent | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspected malicious credentials access tools detected | Medium | Classic Defender-IoT-micro-agent | Detection usage of a tool commonly associated with malicious attempts to access credentials. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious compilation detected | Medium | Classic Defender-IoT-micro-agent | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious file download followed by file run activity | Medium | Classic Defender-IoT-micro-agent | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious IP address communication | Medium | Classic Defender-IoT-micro-agent | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
+| Behavior similar to common Linux bots detected | Medium | Legacy Defender-IoT-micro-agent | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to Fairware ransomware detected | Medium | Legacy Defender-IoT-micro-agent | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to ransomware detected | Medium | Legacy Defender-IoT-micro-agent | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Crypto coin miner container image detected | Medium | Legacy Defender-IoT-micro-agent | A container was detected running known digital currency mining images. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
+| Crypto coin miner image | Medium | Legacy Defender-IoT-micro-agent | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the nohup command | Medium | Legacy Defender-IoT-micro-agent | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the useradd command | Medium | Legacy Defender-IoT-micro-agent | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Exposed Docker daemon by TCP socket | Medium | Legacy Defender-IoT-micro-agent | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration does not use encryption or authentication when a TCP socket is enabled. The default Docker configuration enables full access to the Docker daemon by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Failed local login | Medium | Legacy Defender-IoT-micro-agent | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
+| File downloads from a known malicious source detected | Medium | Legacy Defender-IoT-micro-agent | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| htaccess file access detected | Medium | Legacy Defender-IoT-micro-agent | Analysis of host data detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
+| Known attack tool | Medium | Legacy Defender-IoT-micro-agent | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| IoT agent attempted and failed to parse the module twin configuration | Medium | Legacy Defender-IoT-micro-agent | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object. | Validate your module twin configuration against the IoT agent configuration schema, and fix all mismatches. |
+| Local host reconnaissance detected | Medium | Legacy Defender-IoT-micro-agent | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
+| Mismatch between script interpreter and file extension | Medium | Legacy Defender-IoT-micro-agent | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible backdoor detected | Medium | Legacy Defender-IoT-micro-agent | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential loss of data detected | Medium | Legacy Defender-IoT-micro-agent | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential overriding of common files | Medium | Legacy Defender-IoT-micro-agent | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Privileged container detected | Medium | Legacy Defender-IoT-micro-agent | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
+| Removal of system logs files detected | Medium | Legacy Defender-IoT-micro-agent | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Space after filename | Medium | Legacy Defender-IoT-micro-agent | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspected malicious credentials access tools detected | Medium | Legacy Defender-IoT-micro-agent | Detected usage of a tool commonly associated with malicious attempts to access credentials. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious compilation detected | Medium | Legacy Defender-IoT-micro-agent | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious file download followed by file run activity | Medium | Legacy Defender-IoT-micro-agent | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious IP address communication | Medium | Legacy Defender-IoT-micro-agent | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
| **Low** severity | | | | |
-| Bash history cleared | Low | Classic Defender-IoT-micro-agent | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review with the user that ran the command that the activity in this alert to see if you recognize this as legitimate administrative activity. If not, escalate the alert to the information security team. |
-| Device silent | Low | Classic Defender-IoT-micro-agent | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
-| Failed Bruteforce attempt | Low | Classic Defender-IoT-micro-agent | Multiple unsuccessful login attempts identified. Potential Brute force attack attempt failed on the device. | Review SSH Brute force alerts and the activity on the device. No further action required. |
-| Local user added to one or more groups | Low | Classic Defender-IoT-micro-agent | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting extra permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deleted from one or more groups | Low | Classic Defender-IoT-micro-agent | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deletion detected | Low | Classic Defender-IoT-micro-agent | Deletion of a local user detected. Local user deletion is uncommon, a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Bash history cleared | Low | Legacy Defender-IoT-micro-agent | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review the activity in this alert with the user that ran the command to see if you recognize it as legitimate administrative activity. If not, escalate the alert to the information security team. |
+| Device silent | Low | Legacy Defender-IoT-micro-agent | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
+| Failed Bruteforce attempt | Low | Legacy Defender-IoT-micro-agent | Multiple unsuccessful login attempts identified. Potential Brute force attack attempt failed on the device. | Review SSH Brute force alerts and the activity on the device. No further action required. |
+| Local user added to one or more groups | Low | Legacy Defender-IoT-micro-agent | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting extra permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deleted from one or more groups | Low | Legacy Defender-IoT-micro-agent | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deletion detected | Low | Legacy Defender-IoT-micro-agent | Deletion of a local user detected. Local user deletion is uncommon, a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
## Next steps
defender-for-iot Agent Based Security Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/agent-based-security-custom-alerts.md
Title: Agent based security custom alerts description: Learn about customizable security alerts and recommended remediation using Defender for IoT device's features and service. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Defender for IoT devices custom security alerts
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Investigate security alerts](tutorial-investigate-security-alerts.md).
+>
++ Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity. We encourage you to create custom alerts based on your knowledge of expected device behavior to ensure alerts act as the most efficient indicators of potential compromise in your unique organizational deployment and landscape.
The following lists of Defender for IoT alerts are definable by you based on you
| Severity | Alert name | Data source | Description | Suggested remediation |
|--|--|--|--|--|
-| Low | Custom alert - The number of active connections is outside the allowed range | Classic Defender-IoT-micro-agent, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
-| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Classic Defender-IoT-micro-agent, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
-| Low | Custom alert - The number of failed local logins is outside the allowed range | Classic Defender-IoT-micro-agent, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
-| Low | Custom alert - The sign in of a user that is not on the allowed user list | Classic Defender-IoT-micro-agent, Azure RTOS | A local user outside your allowed user list, logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
-| Low | Custom alert - A process was executed that is not allowed | Classic Defender-IoT-micro-agent, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+| Low | Custom alert - The number of active connections is outside the allowed range | Legacy Defender-IoT-micro-agent, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
+| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Legacy Defender-IoT-micro-agent, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
+| Low | Custom alert - The number of failed local logins is outside the allowed range | Legacy Defender-IoT-micro-agent, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
+| Low | Custom alert - The sign in of a user that is not on the allowed user list | Legacy Defender-IoT-micro-agent, Azure RTOS | A local user outside your allowed user list logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+| Low | Custom alert - A process was executed that is not allowed | Legacy Defender-IoT-micro-agent, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
## Next steps
defender-for-iot Azure Iot Security Local Configuration C https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/azure-iot-security-local-configuration-c.md
Title: Security agent local configuration (C) description: Learn about Defender for agent local configurations for C. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Understanding the LocalConfiguration.json file - C agent
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Defender for IoT micro agent troubleshooting (Preview)](troubleshoot-defender-micro-agent.md).
+>
+ The Defender for IoT security agent uses configurations from a local configuration file. The security agent reads the configuration once, at agent start-up. The configuration found in the local configuration file contains authentication configuration and other agent-related configurations.
defender-for-iot Azure Iot Security Local Configuration Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/azure-iot-security-local-configuration-csharp.md
Title: Defender for IoT security agent local configuration (C#)
description: Learn more about the Defender for IoT security service, security agent local configuration file for C#. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Understanding the local configuration file (C# agent)
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Defender for IoT micro agent troubleshooting (Preview)](troubleshoot-defender-micro-agent.md).
+>
+ The Defender for IoT security agent uses configurations from a local configuration file. The security agent reads the configuration file once when the agent starts running. Configurations found in the local configuration file contain both authentication configuration and other agent-related configurations.
defender-for-iot Azure Rtos Security Module Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/azure-rtos-security-module-api.md
Title: Defender-IoT-micro-agent for Azure RTOS API
description: Reference API for the Defender-IoT-micro-agent for Azure RTOS. Last updated 11/09/2021-
defender-for-iot Concept Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-baseline.md
Title: Baseline and custom checks description: Learn about the concept of Microsoft Defender for IoT baseline. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Microsoft Defender for IoT baseline and custom checks
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Create custom alerts](quickstart-create-custom-alerts.md) and [Defender for IoT Hub custom security alerts](concept-customizable-security-alerts.md).
+>
++ This article explains Defender for IoT baseline, and summarizes all associated properties of baseline custom checks. ## Baseline
defender-for-iot Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-recommendations.md
Recommendation alerts provide insight and suggestions for actions to improve the
## Next steps -- Learn more about the [Classic Defender for IoT devices security alerts](agent-based-security-alerts.md)
+- Learn more about the [Legacy Defender for IoT devices security alerts](agent-based-security-alerts.md)
defender-for-iot Concept Security Agent Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-security-agent-authentication-methods.md
Title: Security agent authentication methods description: Learn about the different authentication methods available when using the Defender for IoT service. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Security agent authentication methods
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Authenticate the micro agent](tutorial-standalone-agent-binary-installation.md#authenticate-the-micro-agent).
+>
++ This article explains the different authentication methods you can use with the AzureIoTSecurity agent to authenticate with the IoT Hub. For each device onboarded to Defender for IoT in the IoT Hub, a Defender-IoT-micro-agent is required. To authenticate the device, Defender for IoT can use one of two methods. Choose the method that works best for your existing IoT solution.
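For the symmetric key method, the agent authenticates by presenting a shared access signature (SAS) token signed with the device key. The following sketch shows how such a token is constructed per Azure IoT Hub's documented SAS format (HMAC-SHA256 over the URL-encoded resource URI and expiry); the hub name, device ID, key, and expiry below are hypothetical values for illustration only:

```python
import base64
import hashlib
import hmac
import urllib.parse

def generate_sas_token(resource_uri: str, device_key_b64: str, expiry_epoch: int) -> str:
    """Build an Azure IoT SAS token: HMAC-SHA256 over '{encoded URI}\n{expiry}'."""
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    string_to_sign = f"{encoded_uri}\n{expiry_epoch}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(hmac.new(key, string_to_sign, hashlib.sha256).digest())
    return (
        "SharedAccessSignature "
        f"sr={encoded_uri}"
        f"&sig={urllib.parse.quote(signature, safe='')}"
        f"&se={expiry_epoch}"
    )

# Hypothetical hub, device, and key -- not real credentials.
token = generate_sas_token(
    "contoso-hub.azure-devices.net/devices/my-device",
    base64.b64encode(b"not-a-real-key").decode(),
    1700000000,
)
```

In a real deployment the device key comes from the device's identity in IoT Hub (or from a module identity), and the expiry is set a short time in the future so leaked tokens age out quickly.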
defender-for-iot Concept Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-security-module.md
Title: Defender-IoT-micro-agent and device twins description: Learn about the concept of Defender-IoT-micro-agent twins and how they are used in Defender for IoT. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Defender-IoT-micro-agent
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Create a DefenderIotMicroAgent module twin (Preview)](tutorial-create-micro-agent-module-twin.md).
+>
++ This article explains how Defender for IoT uses device twins and modules. ## Device twins
defender-for-iot Edge Security Module Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/edge-security-module-deprecation.md
This article describes Microsoft Defender for IoT features and support for diffe
The new micro agent will replace the current C, C#, and Edge Defender-IoT-micro-agent.
-The new micro agent development is based on the knowledge, and experience gathered from the classic security module development, customers, and feedback from partners with four important improvements:
+The new micro agent development is based on the knowledge, and experience gathered from the legacy security module development, customers, and feedback from partners with four important improvements:
- **Depth security value**: The new agent will run on the host level, which will provide more visibility to the underlying operations of the device, and to allow for better security coverage.
defender-for-iot Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/event-aggregation.md
Title: Defender-IoT-micro-agent classic event aggregation
+ Title: Defender-IoT-micro-agent legacy event aggregation
description: Learn about Defender for IoT event aggregation. Previously updated : 11/09/2021 Last updated : 03/28/2022
-# Defender-IoT-micro-agent classic event aggregation
+# Defender-IoT-micro-agent legacy event aggregation
+
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).
+>
+ Defender for IoT security agents collect data and system events from your local device and send this data to the Azure cloud for processing and analytics. The security agent collects many types of device events, including new process and new connection events. Both new process and new connection events may legitimately occur many times a second on a device. While these events are important for robust and comprehensive security, the number of messages the security agents are forced to send may quickly reach or exceed your IoT Hub quota and cost limits. However, these events contain highly valuable security information that is crucial to protecting your device.
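The aggregation idea described above can be sketched simply: instead of sending every raw event, the agent counts identical events within a collection window and emits one record per unique event with a hit count. This is a simplified illustration of the concept, not the agent's actual implementation or message schema:

```python
from collections import Counter

def aggregate_window(raw_events):
    """Collapse identical events in one collection window into (event, hit_count) records."""
    counts = Counter(raw_events)
    return [{"event": evt, "hit_count": n} for evt, n in counts.items()]

# A burst of 50 identical "new process" events and one connection event
# is reduced to just two aggregated records instead of 51 messages.
window = [("ProcessCreate", "/usr/bin/curl")] * 50 + [("ConnectionCreate", "10.0.0.5:443")]
aggregated = aggregate_window(window)
```

The trade-off is the one the article describes: aggregation cuts IoT Hub message volume (and cost) dramatically, at the price of losing per-occurrence detail such as individual timestamps within the window.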
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
Title: Configure security agents description: Learn how to configure Defender for IoT security agents for use with the Defender for IoT security service. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Tutorial: Configure security agents
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md).
+>
+ This article explains Defender for IoT security agents, and details how to change and configure them. > [!div class="checklist"]
defender-for-iot How To Deploy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-agent.md
Title: Select and deploy security agents description: Learn how to select and deploy Defender for IoT security agents on IoT devices. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Select and deploy a security agent on your IoT device
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Select and deploy a security agent on your IoT device](how-to-deploy-agent.md).
+>
++ Defender for IoT provides reference architectures for security agents that monitor and collect data from IoT devices. To learn more, see [Security agent reference architecture](security-agent-architecture.md).
defender-for-iot How To Deploy Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-edge.md
Title: Deploy IoT Edge security module description: Learn about how to deploy a Defender for IoT security agent on IoT Edge. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Deploy a security module on your IoT Edge device
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Install Defender for IoT micro agent for Edge (Preview)](how-to-install-micro-agent-for-edge.md).
+>
+ The **Defender for IoT** module provides a comprehensive security solution for your IoT Edge devices. The security module collects, aggregates, and analyzes raw security data from your Operating System and Container system into actionable security recommendations and alerts. To learn more, see [Security module for IoT Edge](security-edge-architecture.md). In this article, you'll learn how to deploy a security module on your IoT Edge device.
defender-for-iot How To Deploy Linux C https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md
Title: Install & deploy Linux C agent description: Learn how to install and deploy the Defender for IoT C-based security agent on Linux Previously updated : 11/09/2021 Last updated : 03/28/2022 # Deploy Defender for IoT C based security agent for Linux
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Standalone micro agent overview (Preview)](concept-standalone-micro-agent-overview.md).
+>
+ This guide explains how to install and deploy the Defender for IoT C-based security agent on Linux. - Install
defender-for-iot How To Deploy Linux Cs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md
Title: Install & deploy Linux C# agent description: Learn how to install and deploy the Defender for IoT C#-based security agent on Linux Previously updated : 11/09/2021 Last updated : 03/28/2022 # Deploy Defender for IoT C# based security agent for Linux
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Standalone micro agent overview (Preview)](concept-standalone-micro-agent-overview.md).
+>
++ This guide explains how to install and deploy the Defender for IoT C#-based security agent on Linux. In this guide, you learn how to:
This script performs the following actions:
- Adds a service user (with interactive sign in disabled). -- Installs the agent as a **Daemon** - assumes the device uses **systemd** for classic deployment model.
+- Installs the agent as a **Daemon** - assumes the device uses **systemd** for legacy deployment model.
- Configures **sudoers** to allow the agent to do certain tasks as root.
defender-for-iot How To Deploy Windows Cs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-windows-cs.md
Title: Install C# agent on Windows device description: Learn about how to install Defender for IoT agent on 32-bit or 64-bit Windows devices. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Deploy a Defender for IoT C#-based security agent for Windows
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Standalone micro agent overview (Preview)](concept-standalone-micro-agent-overview.md).
+>
++ This guide explains how to install the Defender for IoT C#-based security agent on Windows. In this guide, you learn how to:
defender-for-iot How To Investigate Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-investigate-device.md
Title: Investigate a suspicious device description: This how to guide explains how to use Defender for IoT to investigate a suspicious IoT device using Log Analytics. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Investigate a suspicious IoT device
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Investigate security recommendations](tutorial-investigate-security-recommendations.md).
+>
+ Defender for IoT service alerts provide clear indications when IoT devices are suspected of involvement in suspicious activities or when indications exist that a device is compromised. In this guide, use the investigation suggestions provided to help determine the potential risks to your organization, decide how to remediate, and discover the best ways to prevent similar attacks in the future.
defender-for-iot How To Security Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-security-data-access.md
Title: Access security & recommendation data description: Learn about how to access your security alert and recommendation data when using Defender for IoT. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Access your security data
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md).
+>
+ Defender for IoT stores security alerts, recommendations, and raw security data (if you choose to save it) in your Log Analytics workspace. ## Log Analytics
defender-for-iot How To Send Security Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-send-security-messages.md
Title: Send Defender for IoT device security messages description: Learn how to send your security messages using Defender for IoT. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Send security messages SDK
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Create custom alerts](quickstart-create-custom-alerts.md).
+>
+ This how-to guide explains the Defender for IoT service capabilities when you choose to collect and send your device security messages without using a Defender for IoT agent, and explains how to do so. In this guide, you learn how to:
defender-for-iot Overview Security Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview-security-agents.md
Title: Security agents description: Get started with understanding, configuring, deploying, and using Microsoft Defender for IoT security service agents on your IoT devices. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Get started with Microsoft Defender for IoT device micro agents
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Tutorial: Install the Defender for IoT micro agent (Preview)](tutorial-standalone-agent-binary-installation.md).
+>
++ Defender for IoT security agents offer enhanced security capabilities, such as monitoring operating system configuration best practices. Take control of your device field threat protection and security posture with a single service. The Defender for IoT security agents handle raw event collection from the device operating system, event aggregation to reduce cost, and configuration through a device module twin. Security messages are sent through your IoT Hub, into Defender for IoT analytics services.
defender-for-iot Quickstart Create Security Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-create-security-twin.md
Title: 'Quickstart: Create a security module twin' description: In this quickstart, learn how to create a Defender for IoT module twin for use with Microsoft Defender for IoT. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Quickstart: Create an azureiotsecurity module twin
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Configure a micro agent twin](how-to-configure-micro-agent-twin.md).
+>
+ This quickstart explains how to create individual _azureiotsecurity_ module twins for new devices, or batch create module twins for all devices in an IoT Hub. ## Prerequisites
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
A new device builder module is available. The module, referred to as a micro-age
- **Continuous, real-time IoT/OT threat detection** - detect threats such as botnets, brute force attempts, crypto miners, and suspicious network activity
-The deprecated Defender-IoT-micro-agent documentation will be moved to the *Agent-based solution for device builders>Classic* folder.
+The deprecated Defender-IoT-micro-agent documentation will be moved to the *Agent-based solution for device builders>Legacy* folder.
This feature set is available with the current public preview cloud release.
defender-for-iot Security Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/security-agent-architecture.md
Title: 'Quickstart: Security agents overview' description: In this quickstart, learn how to understand security agent architecture for the agents used in the Microsoft Defender for IoT service. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Quickstart: Security agent reference architecture
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Agent portfolio overview and OS support (Preview)](concept-agent-portfolio-overview-os-support.md).
+>
++ Microsoft Defender for IoT provides a reference architecture for security agents that log, process, aggregate, and send security data through IoT Hub. Security agents are designed to work in a constrained IoT environment, and are highly customizable in the value they provide relative to the resources they consume.
defender-for-iot Security Edge Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/security-edge-architecture.md
Title: Defender for IoT azureiotsecurity for IoT Edge description: Understand the architecture and capabilities of Microsoft Defender for IoT azureiotsecurity for IoT Edge. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Microsoft Defender for IoT Edge azureiotsecurity
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Install Defender for IoT micro agent for Edge (Preview)](how-to-install-micro-agent-for-edge.md).
+>
++ [Azure IoT Edge](../../iot-edge/index.yml) provides powerful capabilities to manage and perform business workflows at the edge. The key part that IoT Edge plays in IoT environments makes it particularly attractive for malicious actors. Defender for IoT azureiotsecurity provides a comprehensive security solution for your IoT Edge devices. The Defender for IoT module collects, aggregates, and analyzes raw security data from your Operating System and container system into actionable security recommendations and alerts.
defender-for-iot Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/troubleshoot-agent.md
Title: Troubleshoot security agent start-up (Linux) description: Troubleshoot working with Microsoft Defender for IoT security agents for Linux. Previously updated : 11/09/2021 Last updated : 03/28/2022 # Security agent troubleshoot guide (Linux)
+> [!NOTE]
+> The Microsoft Defender for IoT legacy agent has been replaced by our new micro-agent experience, and will not be supported after **March 31, 2023**. For more information, see [Defender for IoT micro agent troubleshooting (Preview)](troubleshoot-defender-micro-agent.md).
+>
+ This article explains how to solve potential problems in the security agent start-up process. The Microsoft Defender for IoT agent starts automatically immediately after installation. The agent start-up process includes reading the local configuration, connecting to Azure IoT Hub, and retrieving the remote twin configuration. Failure in any one of these steps may cause the security agent to fail.
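The three start-up steps above form a strict sequence: each must succeed before the next runs, and a failure at any step aborts the agent. A minimal illustration of that sequence for troubleshooting purposes (the step functions here are stand-ins, not the agent's real code):

```python
import logging

def agent_startup(read_local_config, connect_to_hub, fetch_twin_config):
    """Run the start-up steps in order; stop and report the first step that fails."""
    for step in (read_local_config, connect_to_hub, fetch_twin_config):
        try:
            step()
        except Exception as exc:
            logging.error("Agent start-up failed at %s: %s", step.__name__, exc)
            return False
    return True

# Example: a failure at the connection step stops start-up before the twin is fetched.
def read_local_config(): pass
def connect_to_hub(): raise ConnectionError("cannot reach IoT Hub")
def fetch_twin_config(): pass

started = agent_startup(read_local_config, connect_to_hub, fetch_twin_config)
```

Knowing which step failed narrows the diagnosis: a local-configuration failure points at the file on the device, a connection failure at networking or credentials, and a twin-retrieval failure at the module twin in IoT Hub.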
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
For more information, see [Connect via multi-cloud vendors](connect-sensors.md#c
If you are a customer with an existing production deployment, we recommend that you upgrade any legacy sensor versions to version 22.1.x.
-While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versions-and-support-dates), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
+While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
After migrating, you can remove any relevant IoT Hubs from your subscription as they'll no longer be required for your sensor connections.
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
Defender for IoT can detect the following protocols when identifying assets and
|**GE** | Bentley Nevada (System 1 / BN3500)<br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> SRTP (GE) | |**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA |
-|**IEC** | Codesys V3<br> ICCP TASE.2/IEC-60870<br> IEC60870-5 (IEC104/101)<br> IEC60870-5-103 (encapsulated serial)<br> IEC61850 GOOSE<br> IEC61850 MMS<br> IEC61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
+|**IEC** | IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-101 (encapsulated serial)<br> IEC 60870-5-103 (encapsulated serial)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> Codesys V3<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
|**IEEE** | LLC<br> STP<br> VLAN | |**IETF** | ARP<br> DHCP<br> DCE RPC<br> DNS<br> FTP (FTP_ADAT<br> FTP_DATA)<br> GSSAPI (RFC2743)<br> HTTP<br> ICMP<br> IPv4<br> IPv6<br> LLDP<br> MDNS<br> NBNS<br> NTLM (NTLMSSP Auth Protocol)<br> RPC<br> SMB / Browse / NBDGM<br> SMB / CIFS<br> SNMP<br> SPNEGO (RFC4178)<br> SSH<br> Syslog<br> TCP<br> Telnet<br> TFTP<br> TPKT<br> UDP | |**ISO** | CLNP (ISO 8473)<br> COTP (ISO 8073)<br> ISO Industrial Protocol<br> MQTT (IEC 20922) |
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
If you're an existing customer with a production deployment and sensors connecte
- **EventHub**: `*.servicebus.windows.net`
-While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versions-and-support-dates), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
+While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
## Next steps
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-identify-required-appliances.md
This section provides an overview of physical sensor models that are available.
- **About bringing your own appliance**: Review the supported models described below. After you've acquired your appliance, go to **Defender for IoT** > **Getting started** > **Sensor**. Under **Purchase an appliance and install software**, select **Download**.
- :::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Network sensors ISO.":::
+ :::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Screenshot of the sensor software download screen.":::
> [!NOTE] > <a name="anchortext"></a>For each model, bandwidth capacity can vary, depending on the distribution of protocols.
-For more information about each model, see [Appliance specifications](#appliance-specifications).
#### Corporate sensors
This section details additional appliances that were certified by Microsoft but
After you purchase the appliance, go to **Defender for IoT** > **Network Sensors ISO** > **Installation** to download the software. ## Next steps
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Note that:
## Set up SNMP monitoring 1. On the side menu, select **System Settings**.
-2. Expand **Sensor Management**, and select **SNMP MIB Monitoring** :
-3. Select **Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
-4. In **Authentication** section, select the SNMP version.
+1. Expand **Sensor Management**, and select **SNMP MIB Monitoring**:
+1. Select **Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
+1. In the **Authentication** section, select the SNMP version.
- If you select V2, type the string in **SNMP v2 Community String**. You can enter up to 32 characters, and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces aren't allowed. - If you select V3, specify the following:
Note that:
| **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces are not allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. | | **Password** | Enter a case-sensitive authentication password. The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). <br /> <br/>The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. | | **Auth Type** | Select MD5 or SHA-1. |
- | **Encryption** | Select DES (56 bit key size)[^1] or AES (AES 128 bits supported)[^2]. |
+ | **Encryption** | Select DES (56 bit key size)<sup>[1](#1)</sup> or AES (AES 128 bits supported)<sup>[2](#2)</sup>. |
| **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). |
-[^1] RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
+ <a name="1"></a><sup>1</sup> RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
-[^2] RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model
+ <a name="2"></a><sup>2</sup> RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model
-5. Select **Save**.
+1. Select **Save**.
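For background on the footnoted standards above, the SNMPv3 authentication options in the table build on the RFC 3414 password-to-key localization step. The sketch below is illustrative only; the `password_to_key` and `localize_key` helpers are hypothetical names, not part of the sensor, which performs key handling internally:

```python
import hashlib

def password_to_key(password: bytes, algo: str = "md5") -> bytes:
    # RFC 3414 appendix A.2: digest a 1,048,576-byte stream of the repeated password.
    stream = (password * (1048576 // len(password) + 1))[:1048576]
    return hashlib.new(algo, stream).digest()

def localize_key(key: bytes, engine_id: bytes, algo: str = "md5") -> bytes:
    # RFC 3414 appendix A.2: bind the derived key to a specific SNMP engine ID.
    return hashlib.new(algo, key + engine_id + key).digest()

# Known vector from RFC 3414 appendix A.3.1 (password "maplesyrup").
engine_id = bytes.fromhex("000000000000000000000002")
print(localize_key(password_to_key(b"maplesyrup"), engine_id).hex())
```

Localizing the key to the engine ID is why the same password yields different keys on different SNMP engines, which both sides must account for when the username and password are configured on the system and on the SNMP server.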
## Next steps
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
The following table describes the commands available to configure your network o
## Network capture filter configuration
-The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list.
+The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list. This command does not support the malware detection engine.
```azurecli-interactive network capture-filter
You're asked the following question:
Your options are: `all`, `dissector`, `collector`, `statistics-collector`, `rpc-parser`, or `smb-parser`.
-In most use-cases, select `all`.
+In most common use cases, we recommend that you select `all`. Selecting `all` does not include the malware detection engine, which is not supported by this command.
### Custom base capture filter
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Last updated 03/22/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article lists Defender for IoT's new features and enhancements for organizations from the last 6 months.
+This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last 6 months.
Features released earlier than 6 months ago are listed in [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md). Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Versioning and support for Defender for IoT
+## Versioning and support for on-premises software versions
-Listed below are the support, breaking change policies for Microsoft Defender for IoT, and the versions of Microsoft Defender for IoT that are currently available.
+The Defender for IoT architecture uses on-premises sensors and management servers. This section describes the servicing information and timelines for the available on-premises software versions.
-### Servicing information and timelines
+- Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console software is supported for nine months after release. Fixes and new functionality are applied to each new version and are not applied to older versions.
-Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console is supported for nine months after release. Fixes and new functionality will be applied to the current GA version that is currently supported and won't be applied to older GA versions.
+- Software update packages include new functionality and security patches. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter.
-The Defender for IoT sensor and on-premises management console update packages includes new functionality and security patches. Urgent, high-risk security updates will be applied to minor releases occurring during the quarter.
+For more information, see the [Microsoft Security Development Lifecycle practices](https://www.microsoft.com/en-us/securityengineering/sdl/), which describe Microsoft's SDL practices, including training, compliance, threat modeling, design requirements, tools such as Microsoft Component Governance, pen testing, and more.
-*Making changes to packages manually might have detrimental effects on the sensor and on-premises management console. In such cases, Microsoft is unable to provide support for your deployment.*
+> [!IMPORTANT]
+> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to packages.
+>
-### Versions and support dates
+**Current versions of the sensor and on-premises management console software include**:
| Version | Date released | End support date | |--|--|--|
For more information, see [Use Azure Monitor workbooks in Microsoft Defender for
### IoT OT Threat Monitoring with Defender for IoT solution GA
-The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. Use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
The **Device inventory** page in the Azure portal now supports the ability to ed
For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
-You can also delete devices from Defender for IoT, if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
+You can only delete devices from Defender for IoT if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
### Key state alert updates (Public preview)
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
Before you deploy your Enterprise IoT sensor, you will need to configure your se
| Tier | Requirements | |--|--|
- | **Minimum** | To support at least 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 8 GB RAM of DDR4 or better<br>- 250 GB HDD |
- | **Recommended** | To support up to 10 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32 GB RAM of DDR4 or better<br>- 500 GB HDD |
+ | **Minimum** | To support at least 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16 GB RAM of DDR4 or better<br>- 250 GB HDD |
+ | **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32 GB RAM of DDR4 or better<br>- 500 GB HDD |
Make sure that your server or VM also has:
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
+
+ Title: Create a lab in Azure DevTest Labs using Bicep
+description: Use Bicep to create a lab that has a virtual machine in Azure DevTest Labs.
++++ Last updated : 03/22/2022++
+# Quickstart: Use Bicep to create a lab in DevTest Labs
+
+This quickstart uses Bicep to create a lab in Azure DevTest Labs that has one Windows Server 2019 Datacenter virtual machine (VM) in it.
+
+In this quickstart, you take the following actions:
+
+> [!div class="checklist"]
+> * Review the Bicep file.
+> * Deploy the Bicep file to create a lab and VM.
+> * Verify the deployment.
+> * Clean up resources.
+
+## Prerequisites
+
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
++
+The Bicep file defines the following resource types:
+
+- [Microsoft.DevTestLab/labs](/azure/templates/microsoft.devtestlab/labs) creates the lab.
+- [Microsoft.DevTestLab/labs/virtualnetworks](/azure/templates/microsoft.devtestlab/labs/virtualnetworks) creates a virtual network.
+- [Microsoft.DevTestLab/labs/virtualmachines](/azure/templates/microsoft.devtestlab/labs/virtualmachines) creates the lab VM.
++
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters labName=<lab-name> vmName=<vm-name> userName=<user-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -labName "<lab-name>" -vmName "<vm-name>" -userName "<user-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<lab-name\>** with the name of the new lab instance. Replace **\<vm-name\>** with the name of the new VM. Replace **\<user-name\>** with the username of the local account that will be created on the new VM. You'll also be prompted to enter a password for the local account.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+> [!NOTE]
+> The deployment also creates a resource group for the VM. The resource group contains VM resources like the IP address, network interface, and disk. The resource group appears in your subscription's **Resource groups** list with the name **\<lab name>-\<vm name>-\<numerical string>**.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a lab that has a Windows VM. To learn how to connect to and manage lab VMs, see the next tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Work with lab VMs](tutorial-use-custom-lab.md)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
Azure Digital Twins comes equipped with control plane APIs, data plane APIs, and
The control plane APIs are [ARM](../azure-resource-manager/management/overview.md) APIs used to manage your Azure Digital Twins instance as a whole, so they cover operations like creating or deleting your entire instance. You'll also use these APIs to create and delete endpoints.
-The most current control plane API version is 2020-12-01.
- To use the control plane APIs:
-* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
+* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
* You can currently access SDKs for control APIs in...
- - [.NET (C#)](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins))
- - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins))
- - [JavaScript](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/arm-digitaltwins))
+ - [.NET (C#)](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins))
+ - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins))
+ - [JavaScript](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/arm-digitaltwins))
- [Python](https://pypi.org/project/azure-mgmt-digitaltwins/) ([source](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins))
- - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) ([source](https://github.com/Azure/azure-sdk-for-go/tree/master/services/digitaltwins/mgmt))
+ - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) ([source](https://github.com/Azure/azure-sdk-for-go/tree/main/services/digitaltwins/mgmt))
You can also exercise control plane APIs by interacting with Azure Digital Twins through the [Azure portal](https://portal.azure.com) and [CLI](/cli/azure/dt).
The data plane APIs are the Azure Digital Twins APIs used to manage the elements
* Query - The Query category lets developers [find sets of digital twins in the twin graph](how-to-query-graph.md) across relationships. * Event Routes - The Event Routes category contains APIs to [route data](concepts-route-events.md) through the system and to downstream services.
-The most current data plane API version is 2020-10-31.
- To use the data plane APIs: * You can call the APIs directly, by...
- - Referencing the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage.
+ - Referencing the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage.
- Viewing the [API reference documentation](/rest/api/azure-digitaltwins/). * You can use the .NET (C#) SDK. To use the .NET SDK... - You can view and add the package from NuGet: [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core). - You can view the [SDK reference documentation](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
- - You can find the SDK source, including a folder of samples, in GitHub: [Azure IoT Digital Twins client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Azure.DigitalTwins.Core).
+ - You can find the SDK source, including a folder of samples, in GitHub: [Azure IoT Digital Twins client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Azure.DigitalTwins.Core).
- You can see detailed information and usage examples by continuing to the [.NET (C#) SDK (data plane)](#net-c-sdk-data-plane) section of this article. * You can use the Java SDK. To use the Java SDK... - You can view and install the package from Maven: [`com.azure:azure-digitaltwins-core`](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar) - You can view the [SDK reference documentation](/java/api/overview/azure/digitaltwins/client?view=azure-java-stable&preserve-view=true)
- - You can find the SDK source in GitHub: [Azure IoT Digital Twins client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/digitaltwins/azure-digitaltwins-core)
+ - You can find the SDK source in GitHub: [Azure IoT Digital Twins client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core)
* You can use the JavaScript SDK. To use the JavaScript SDK... - You can view and install the package from npm: [Azure Digital Twins Core client library for JavaScript](https://www.npmjs.com/package/@azure/digital-twins-core). - You can view the [SDK reference documentation](/javascript/api/@azure/digital-twins-core/?view=azure-node-latest&preserve-view=true).
- - You can find the SDK source in GitHub: [Azure Azure Digital Twins Core client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core)
+ - You can find the SDK source in GitHub: [Azure Digital Twins Core client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/digital-twins-core)
* You can use the Python SDK. To use the Python SDK... - You can view and install the package from PyPi: [Azure Digital Twins Core client library for Python](https://pypi.org/project/azure-digitaltwins-core/). - You can view the [SDK reference documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core?view=azure-python&preserve-view=true).
- - You can find the SDK source in GitHub: [Azure Azure Digital Twins Core client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core)
+ - You can find the SDK source in GitHub: [Azure Digital Twins Core client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/digitaltwins/azure-digitaltwins-core)
You can also exercise data plane APIs by interacting with Azure Digital Twins through the [CLI](/cli/azure/dt).
The following list provides more detail and general guidelines for using the API
* All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see its [reference documentation](/dotnet/api/azure.requestfailedexception?view=azure-dotnet&preserve-view=true). * Most service methods return `Response<T>` or (`Task<Response<T>>` for the asynchronous calls), where `T` is the class of return object for the service call. The [Response](/dotnet/api/azure.response-1?view=azure-dotnet&preserve-view=true) class encapsulates the service return and presents return values in its `Value` field. * Service methods with paged results return `Pageable<T>` or `AsyncPageable<T>` as results. For more about the `Pageable<T>` class, see its [reference documentation](/dotnet/api/azure.pageable-1?view=azure-dotnet&preserve-view=true); for more about `AsyncPageable<T>`, see its [reference documentation](/dotnet/api/azure.asyncpageable-1?view=azure-dotnet&preserve-view=true).
-* You can iterate over paged results using an `await foreach` loop. For more about this process, see the [relevant documentation](/archive/msdn-magazine/2019/november/csharp-iterating-with-async-enumerables-in-csharp-8).
+* You can iterate over paged results using an `await foreach` loop. For more about this process, see [Iterating with Async Enumerables in C# 8](/archive/msdn-magazine/2019/november/csharp-iterating-with-async-enumerables-in-csharp-8).
* The underlying SDK is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure?view=azure-dotnet&preserve-view=true) for reference on the SDK infrastructure and types.
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md
# Mandatory fields. Title: Querying with Azure Data Explorer
+ Title: Querying with the Azure Data Explorer plugin
description: Learn about the Azure Digital Twins query plugin for Azure Data Explorer Previously updated : 03/01/2022 Last updated : 03/23/2022
For more information on using the plugin, see the [Kusto documentation for the a
To see example queries and complete a walkthrough with sample data, see [Azure Digital Twins query plugin for Azure Data Explorer: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries) in GitHub.
-## Using Azure Data Explorer IoT data with Azure Digital Twins
+## Ingesting Azure Digital Twins data into Azure Data Explorer
-There are various ways to ingest IoT data into Azure Data Explorer. Here are two of them that you might take advantage of when using Azure Data Explorer with Azure Digital Twins:
-* Create a history record of digital twin property values in Azure Data Explorer with an Azure function that handles twin change events and writes the twin data to Azure Data Explorer. This process is similar to the one used in [Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
-* [Ingest IoT data directly into your Azure Data Explorer cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub) or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/Azure Data Explorer queries. This path may be suitable for direct-ingestion workloads.
+Before querying with the plugin, you'll need to ingest your Azure Digital Twins data into Azure Data Explorer. There are two main ways you can do so: through the **data history (preview)** feature, or through direct ingestion. The following sections describe these options in more detail.
-### Mapping data across Azure Data Explorer and Azure Digital Twins
+### Ingesting with data history
-If you're ingesting time series data directly into Azure Data Explorer, you'll likely need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/Azure Data Explorer queries.
+The simplest way to ingest IoT data from Azure Digital Twins into Azure Data Explorer is to use the **data history (preview)** feature. This feature allows you to set up a connection between your Azure Digital Twins instance and an Azure Data Explorer cluster, and twin property updates are automatically historized to the cluster. This is a good choice if you're using telemetry data to bring your digital twins to life. For more information about this feature, see [Data history (with Azure Data Explorer) (preview)](concepts-data-history.md).
+
+### Direct ingestion
+
+You can also opt to [ingest IoT data directly into your Azure Data Explorer cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub), or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/Azure Data Explorer queries. This option is a good choice for direct-ingestion workloads. For more information about this process, continue through the rest of this section.
+
+#### Mapping data across Azure Data Explorer and Azure Digital Twins
+
+If you're ingesting time series data directly into Azure Data Explorer, you may need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/Azure Data Explorer queries.
An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in Azure Data Explorer allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
-You can use an update policy to enrich your raw time series data with the corresponding twin ID from Azure Digital Twins, and persist it to a target table. Using the twin ID, the target table can then be joined against the digital twins selected by the Azure Digital Twins plugin.
+If the sensor ID in your telemetry data differs from the corresponding twin ID in Azure Digital Twins, you can use an update policy to enrich your raw time series data with the twin ID and persist it to a target table. Using the twin ID, the target table can then be joined against the digital twins selected by the Azure Digital Twins plugin.
For example, say you created the following table to hold the raw time series data flowing into your Azure Data Explorer instance.
Lastly, create an update policy to call the function and update the target table
Once the target table is created, you can use the Azure Digital Twins plugin to select twins of interest and then join them against time series data in the target table.
-### Example schema
+#### Example schema
Here's an example of a schema that might be used to represent shared data.
-| timestamp | twinId | modelId | name | value | relationshipTarget | relationshipID |
+| `timestamp` | `twinId` | `modelId` | `name` | `value` | `relationshipTarget` | `relationshipID` |
| | | | | | | |
-| 2021-02-01 17:24 | ConfRoomTempSensor | dtmi:com:example:TemperatureSensor;1 | temperature | 301.0 | | |
+| 2021-02-01 17:24 | ConfRoomTempSensor | `dtmi:com:example:TemperatureSensor;1` | temperature | 301.0 | | |
Digital twin properties are stored as key-value pairs (`name, value`). `name` and `value` are stored as dynamic data types. The schema also supports storing properties for relationships, per the `relationshipTarget` and `relationshipID` fields. The key-value schema avoids the need to create a column for each twin property.
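A target table matching this schema could be created with a command along these lines. The table name `SampleTelemetryEnriched` is a placeholder for this sketch; per the note above, `name` and `value` use the dynamic type.

```kusto
// Target table for twin-enriched time series data (name is illustrative)
.create table SampleTelemetryEnriched (
    timestamp: datetime,
    twinId: string,
    modelId: string,
    name: dynamic,
    value: dynamic,
    relationshipTarget: string,
    relationshipID: string)
```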
-### Representing properties with multiple fields
+#### Representing properties with multiple fields
You may want to store a property in your schema with multiple fields. These properties are represented by storing a JSON object as `value` in your schema.
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
+
+# Mandatory fields.
+ Title: Data history (with Azure Data Explorer) (preview)
+
+description: Understand data history for Azure Digital Twins.
+ Last updated: 03/28/2022
+# Azure Digital Twins data history (with Azure Data Explorer) (preview)
+
+**Data history (preview)** is an integration feature of Azure Digital Twins. It allows you to connect an Azure Digital Twins instance to an [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) cluster so that digital twin property updates are automatically historized to Azure Data Explorer.
+
+Once twin property values are historized to Azure Data Explorer, you can run joint queries using the [Azure Digital Twins plugin for Azure Data Explorer](concepts-data-explorer-plugin.md) to reason across digital twins, their relationships, and time series data to gain insights into the behavior of modeled environments. You can also use these queries to power operational dashboards, enrich 2D and 3D web applications, and drive immersive augmented/mixed reality experiences to convey the current and historical state of assets, processes, and people modeled in Azure Digital Twins.
+
+## Required resources and data flow
+
+Data history requires the following resources:
+* Azure Digital Twins instance, with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) enabled
+* [Event Hubs](../event-hubs/event-hubs-about.md) namespace containing an event hub
+* [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) cluster containing a database
+
+These resources are connected into the following flow:
+Data moves through these resources in this order:
+1. A property of a digital twin in Azure Digital Twins is updated.
+1. Data history forwards a message containing the twin's updated property value and metadata to the event hub.
+1. The event hub forwards the message to the target Azure Data Explorer cluster.
+1. The Azure Data Explorer cluster maps the message fields to the data history schema, and stores the data as a timestamped record in a data history table.
+
+When working with data history, you'll also need to use the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the APIs.
+
+### Required permissions
+
+To set up a data history connection, your Azure Digital Twins instance must have the following permissions to access the Event Hubs and Azure Data Explorer resources. These roles enable Azure Digital Twins to configure the event hub and Azure Data Explorer database on your behalf (for example, by creating a table in the database). You can optionally remove these permissions after data history is set up.
+* Event Hubs resource: **Azure Event Hubs Data Owner**
+* Azure Data Explorer cluster: **Contributor** (scoped to either the entire cluster or specific database)
+* Azure Data Explorer database principal assignment: **Admin** (scoped to the database being used)
+
+While data history is in use, your Azure Digital Twins instance must have the **Azure Event Hubs Data Sender** permission on the Event Hubs resource (alternatively, you can keep the **Azure Event Hubs Data Owner** role from data history setup).
+
+## Creating a data history connection
+
+Once all the [required resources](#required-resources-and-data-flow) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command is part of the [az iot](/cli/azure/iot?view=azure-cli-latest&preserve-view=true) extension.
+
+For instructions on how to set up a data history connection, see [Use data history with Azure Data Explorer (preview)](how-to-use-data-history.md).
+
+## Data schema
+
+Time series data for twin property updates is stored in Azure Data Explorer with the following schema:
+
+| Attribute | Type | Description |
+| | | |
+| `TimeStamp` | DateTime | The date/time the property update message was processed by Azure Digital Twins. This field is set by the system and isn't writable by users. |
+| `SourceTimeStamp` | DateTime | An optional, writable property representing the timestamp when the property update was observed in the real world. This property can only be written using the **2021-06-30-preview** version of the [Azure Digital Twins APIs/SDKs](concepts-apis-sdks.md), and the value must comply with the ISO 8601 date and time format. For more information about how to update this property, see [Update a property's sourceTime](how-to-manage-twin.md#update-a-propertys-sourcetime). |
+| `ServiceId` | String | The service instance ID of the Azure IoT service logging the record |
+| `Id` | String | The twin ID |
+| `ModelId` | String | The DTDL model ID (DTMI) |
+| `Key` | String | The name of the updated property |
+| `Value` | Dynamic | The value of the updated property |
+| `RelationshipId` | String | For properties defined on relationships (as opposed to twins or devices), this column contains the ID of the relationship; otherwise, empty |
+| `RelationshipTarget` | String | For properties defined on relationships, this column defines the twin ID of the twin targeted by the relationship; otherwise, empty |
+
+Below is an example table of twin property updates stored to Azure Data Explorer.
+
+| `TimeStamp` | `SourceTimeStamp` | `ServiceId` | `Id` | `ModelId` | `Key` | `Value` | `RelationshipTarget` | `RelationshipId` |
+| | | | | | | | | |
+| 2021-06-30T20:23:29.8697482Z | 2021-06-30T20:22:14.3854859Z | myInstance.api.neu.digitaltwins.azure.net | solar_plant_3 | `dtmi:example:grid:plants:solarPlant;1` | Output | 130 | | |
+| 2021-06-30T20:23:39.3235925Z| 2021-06-30T20:22:26.5837559Z | myInstance.api.neu.digitaltwins.azure.net | solar_plant_3 | `dtmi:example:grid:plants:solarPlant;1` | Output | 140 | | |
+| 2021-06-30T20:23:47.078367Z | 2021-06-30T20:22:34.9375957Z | myInstance.api.neu.digitaltwins.azure.net | solar_plant_3 | `dtmi:example:grid:plants:solarPlant;1` | Output | 130 | | |
+| 2021-06-30T20:23:57.3794198Z | 2021-06-30T20:22:50.1028562Z | myInstance.api.neu.digitaltwins.azure.net | solar_plant_3 | `dtmi:example:grid:plants:solarPlant;1` | Output | 123 | | |
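For instance, to chart the average `Output` of the `solar_plant_3` twin from the example rows above, you could run a query along these lines against your data history table. The table name `AdtPropertyEvents` is a placeholder for whatever table name was chosen at setup.

```kusto
// Hourly average of the Output property for one twin (table name is illustrative)
AdtPropertyEvents
| where Id == "solar_plant_3" and Key == "Output"
| extend OutputValue = toreal(Value)   // Value is dynamic; convert for aggregation
| summarize avg(OutputValue) by bin(TimeStamp, 1h)
| render timechart
```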
+
+### Representing properties with multiple fields
+
+You may need to store a property with multiple fields. These properties are represented with a JSON object in the `Value` attribute of the schema.
+
+For instance, if you're representing a property with three fields for roll, pitch, and yaw, data history will store the following JSON object as the `Value`: `{"roll": 20, "pitch": 15, "yaw": 45}`.
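To work with the individual fields of such a property in Azure Data Explorer, you can index into the dynamic `Value` column. A sketch, assuming a data history table named `AdtPropertyEvents` and a property named `orientation` (both placeholders):

```kusto
// Expand a multi-field property stored as a JSON object in Value
AdtPropertyEvents
| where Key == "orientation"
| extend roll  = toreal(Value.roll),
         pitch = toreal(Value.pitch),
         yaw   = toreal(Value.yaw)
```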
+
+## Pricing
+
+Messages emitted by data history are metered under the [Message pricing dimension](https://azure.microsoft.com/pricing/details/digital-twins/#pricing).
+
+## End-to-end ingestion latency
+
+Azure Digital Twins data history builds on the existing ingestion mechanism provided by Azure Data Explorer. Azure Digital Twins ensures that property updates are made available to Azure Data Explorer in less than two seconds. Extra latency may be introduced by Azure Data Explorer while ingesting the data.
+
+There are two methods in Azure Data Explorer for ingesting data: [batch ingestion](#batch-ingestion-default) and [streaming ingestion](#streaming-ingestion). You can configure these ingestion methods for individual tables according to your needs and the specific data ingestion scenario.
+
+Streaming ingestion has the lowest latency. However, due to processing overhead, this mode should only be used if less than 4 GB of data is ingested every hour. Batch ingestion works best if high ingestion data rates are expected. Azure Data Explorer uses batch ingestion by default. The following table summarizes the expected worst-case end-to-end latency:
+
+| Azure Data Explorer configuration | Expected end-to-end latency | Recommended data rate |
+| | | |
+| Streaming ingestion | <12 sec (<3 sec typical) | <4 GB / hr |
+| Batch ingestion | Varies (12 sec to 15 min, depending on configuration) | >4 GB / hr |
+
+The rest of this section contains details for enabling each type of ingestion.
+
+### Batch ingestion (default)
+
+If not configured otherwise, Azure Data Explorer uses **batch ingestion**. With the default settings, data may become available for query only 5-10 minutes after a digital twin update. You can alter the ingestion policy so that batch processing occurs as often as every 10 seconds (the minimum) or as rarely as every 15 minutes (the maximum). To alter the ingestion policy, issue the following command in the Azure Data Explorer query view:
+
+```kusto
+.alter table <table_name> policy ingestionbatching @'{"MaximumBatchingTimeSpan":"00:00:10", "MaximumNumberOfItems": 500, "MaximumRawDataSizeMB": 1024}'
+```
+
+Replace `<table_name>` with the name of the table that was set up for you, and set `MaximumBatchingTimeSpan` to your preferred batching interval. It may take 5-10 minutes for the policy to take effect. For more information about ingestion batching, see [Kusto IngestionBatching policy management command](/azure/data-explorer/kusto/management/batching-policy).
+
+### Streaming ingestion
+
+Enabling **streaming ingestion** is a two-step process:
+1. Enable streaming ingestion for your cluster. This action only has to be done once, but note that it affects the amount of storage available for the hot cache and may introduce extra limitations. For instructions, see [Configure streaming ingestion on your Azure Data Explorer cluster](/azure/data-explorer/ingest-data-streaming?tabs=azure-portal%2Ccsharp).
+2. Add a streaming ingestion policy for the desired table. For more information about table ingestion policies, see the Azure Data Explorer documentation: [Kusto IngestionBatching policy management command](/azure/data-explorer/kusto/management/batching-policy).
+
+To enable streaming ingestion for your Azure Digital Twins data history table, issue the following command in the Azure Data Explorer query pane:
+
+```kusto
+.alter table <table_name> policy streamingingestion enable
+```
+
+Replace `<table_name>` with the name of the table that was set up for you. It may take 5-10 minutes for the policy to take effect.
+
+## Next steps
+
+Once twin data has been historized to Azure Data Explorer, you can use the Azure Digital Twins query plugin for Azure Data Explorer to run queries across the data. Read more about the plugin here: [Querying with the Azure Data Explorer plugin](concepts-data-explorer-plugin.md).
+
+Or, dive deeper into data history with an example scenario in this how-to: [Use data history with Azure Data Explorer](how-to-use-data-history.md).
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in
## Data egress services
-Azure Digital Twins can send data to connected endpoints. Supported endpoints can be:
+You may want to send Azure Digital Twins data to other downstream services for storage or additional processing.
+
+To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history (preview) connection](concepts-data-history.md) that automatically historizes digital twin property updates from your Azure Digital Twins instance to an Azure Data Explorer cluster. You can then query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
+
+To send data to other services, such as [Azure Maps](../azure-maps/about-azure-maps.md), [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), or [Azure Storage](../storage/common/storage-introduction.md), start by attaching the destination service to an *endpoint*.
+
+Endpoints can be instances of any of these Azure services:
* [Event Hubs](../event-hubs/event-hubs-about.md)
* [Event Grid](../event-grid/overview.md)
* [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md)
-Endpoints are attached to Azure Digital Twins using management APIs or the Azure portal. Learn more about how to attach an endpoint to Azure Digital Twins in [Manage endpoints and routes](how-to-manage-routes.md).
-
-There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
+The endpoint is attached to an Azure Digital Twins instance using management APIs or the Azure portal, and can carry data along from the instance to other listening services. For more information about Azure Digital Twins endpoints, see [Endpoints and event routes](concepts-route-events.md).
-For example, if you're also using Azure Maps and want to correlate location with your Azure Digital Twins graph, you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. For more information on integrating Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md). For information on routing data in a similar way to Time Series Insights, see [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
+For detailed instructions on how to send Azure Digital Twins data to Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md). For detailed instructions on how to send Azure Digital Twins data to Time Series Insights, see [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
However, if you have many models to upload, or if they have many interdependenc
### Model visualizer
-Once you have uploaded models into your Azure Digital Twins instance, you can view the models in your Azure Digital Twins instance, including any inheritance and model relationships, using the [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/AdtModelVisualizer). This sample is currently in a draft state. We encourage the digital twins development community to extend and contribute to the sample.
+Once you have uploaded models into your Azure Digital Twins instance, you can view the models in your Azure Digital Twins instance, including any inheritance and model relationships, using the [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer). This sample is currently in a draft state. We encourage the digital twins development community to extend and contribute to the sample.
## Next steps
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-convert.md
You can use this sample to see the conversion patterns in context, and to have a
### OWL2DTDL converter
-The [OWL2DTDL Converter](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/OWL2DTDL) is a sample that translates an OWL ontology into a set of DTDL interface declarations, which can be used with the Azure Digital Twins service. It also works for ontology networks, made of one root ontology reusing other ontologies through `owl:imports` declarations.
+The [OWL2DTDL Converter](https://github.com/Azure/opendigitaltwins-tools/tree/master/OWL2DTDL) is a sample that translates an OWL ontology into a set of DTDL interface declarations, which can be used with the Azure Digital Twins service. It also works for ontology networks, made of one root ontology reusing other ontologies through `owl:imports` declarations.
This converter was used to translate the [Real Estate Core Ontology](https://doc.realestatecore.io/3.1/full.html) to DTDL and can be used for any OWL-based ontology.
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
No matter which strategy you choose for integrating an ontology into Azure Digit
Reading this series of articles will guide you in how to use your models in your Azure Digital Twins instance.

>[!TIP]
-> You can visualize the models in your ontology using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) or [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/AdtModelVisualizer).
+> You can visualize the models in your ontology using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) or [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer).
## Next steps
digital-twins Concepts Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-query-language.md
When writing queries for Azure Digital Twins, keep the following considerations
[!INCLUDE [digital-twins-query-latency-note.md](../../includes/digital-twins-query-latency-note.md)]
+## Querying historized twin data over time
+
+The Azure Digital Twins query language is only for querying the **present** state of your digital twins and relationships.
+
+To run queries on historized digital twin data collected over time, use the [data history (preview)](concepts-data-history.md) feature to connect your Azure Digital Twins instance to an [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) cluster. This will automatically historize digital twin property updates to Azure Data Explorer, where they can be queried using the [Azure Digital Twins plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
+
## Next steps

Learn how to write queries and see client code examples in [Query the twin graph](how-to-query-graph.md).
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-route-events.md
This article covers *event routes* and how Azure Digital Twins uses them to send
There are two major cases for sending Azure Digital Twins data:
* Sending data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin based on the updated data.
-* Sending data to downstream data services for more storage or processing (also known as *data egress*). For instance,
- - A hospital may want to send Azure Digital Twins event data to [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), to record time series data of handwashing-related events for bulk analytics.
- - A business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries using their Azure Maps and Azure Digital Twins data together.
+* Sending data to downstream data services for more storage or processing (also known as *data egress*). For instance, a business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries using their Azure Maps and Azure Digital Twins data together.
Event routes are used for both of these scenarios.
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
The following list describes the levels at which you can scope access to Azure D
* Digital Twin relationship: The actions for this resource define control over CRUD operations on [relationships](concepts-twins-graph.md) between digital twins in the twin graph.
* Event route: The actions for this resource determine permissions to [route events](concepts-route-events.md) from Azure Digital Twins to an endpoint service like [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md).
-### Troubleshooting permissions
+### Troubleshoot permissions
-If a user attempts to perform an action not allowed by their role, they may receive an error from the service request reading `403 (Forbidden)`. For more information and troubleshooting steps, see [Troubleshooting failed service request: Error 403 (Forbidden)](troubleshoot-error-403.md).
+If a user attempts to perform an action not allowed by their role, they may receive an error from the service request reading `403 (Forbidden)`. For more information and troubleshooting steps, see [Troubleshoot failed service request: Error 403 (Forbidden)](troubleshoot-error-403.md).
## Managed identity for accessing other resources
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-twins-graph.md
When represented as a JSON object, a digital twin will display the following fie
| `$etag` | Standard HTTP field assigned by the web server |
| `$metadata.$model` | The ID of the model interface that characterizes this digital twin |
| `$metadata.<property-name>` | Other metadata information about properties of the digital twin |
+| `$metadata.<property-name>.lastUpdateTime` | The date/time the property update message was processed by Azure Digital Twins |
+| `$metadata.<property-name>.sourceTime` | An optional, writable property representing the timestamp when the property update was observed in the real world. This property can only be written using the **2021-06-30-preview** version of the [Azure Digital Twins APIs/SDKs](concepts-apis-sdks.md), and the value must comply with the ISO 8601 date and time format. For more information about how to update this property, see [Update a property's sourceTime](how-to-manage-twin.md#update-a-propertys-sourcetime). |
| `<property-name>` | The value of a property in JSON (`string`, number type, or object) |
| `$relationships` | The URL of the path to the relationships collection. This field is absent if the digital twin has no outgoing relationship edges. |
| `<component-name>` | A JSON object containing the component's property values and metadata, similar to those of the root object. This object exists even if the component has no properties. |
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
For a sample template that allows an Azure function to connect to Azure Digital
This template creates an Azure Digital Twins instance, a virtual network, an Azure function connected to the virtual network, and a Private Link connection to make the Azure Digital Twins instance accessible to the Azure function through a private endpoint.
-## Troubleshooting Private Link with Azure Digital Twins
+## Troubleshoot Private Link with Azure Digital Twins
Here are some common issues experienced with Private Link for Azure Digital Twins.
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
This pattern reads from the room twin directly, rather than the IoT device, whic
>[!NOTE]
>There is currently a known issue in Cloud Shell affecting these command groups: `az dt route`, `az dt model`, `az dt twin`.
>
- >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Troubleshooting: Known issues in Azure Digital Twins](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
+ >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Troubleshoot known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
```azurecli-interactive
az dt route create --dt-name <your-Azure-Digital-Twins-instance-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
In this article, you'll learn how to integrate Azure Digital Twins with [Azure Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md).
-The solution described in this article will allow you to gather and analyze historical data about your IoT solution. Azure Digital Twins is a great fit for feeding data into Time Series Insights, as it allows you to correlate multiple data streams and standardize your information before sending it to Time Series Insights.
+The solution described in this article uses Time Series Insights to collect and analyze historical data about your IoT solution. Azure Digital Twins is a good fit for feeding data into Time Series Insights, as it allows you to correlate multiple data streams and standardize your information before sending it to Time Series Insights.
+
+>[!TIP]
+>The simplest way to analyze historical twin data over time is to use the [data history (preview)](concepts-data-history.md) feature to connect an Azure Digital Twins instance to an Azure Data Explorer cluster, so that digital twin property updates are automatically historized to Azure Data Explorer. You can then query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md). If you don't need to use Time Series Insights specifically, you might consider this alternative for a simpler integration experience.
## Prerequisites
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
Here is the body of the basic query that will return a list of all digital twins
## Update a digital twin
-To update properties of a digital twin, you write the information you want to replace in [JSON Patch](http://jsonpatch.com/) format. In this way, you can replace multiple properties at once. You then pass the JSON Patch document into an `UpdateDigitalTwin()` method:
+To update properties of a digital twin, write the information you want to replace in [JSON Patch](http://jsonpatch.com/) format. For a full list of JSON Patch operations that can be used, including `replace`, `add`, and `remove`, see [Operations for JSON Patch](http://jsonpatch.com/#operations).
+
+After crafting the JSON Patch document containing update information, pass the document into the `UpdateDigitalTwin()` method:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="UpdateTwinCall":::
-A patch call can update as many properties on a single twin as you want (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
+A single patch call can update as many properties on a single twin as you want (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
> [!TIP]
> After creating or updating a twin, there may be a latency of up to 10 seconds before the changes will be reflected in [queries](how-to-query-graph.md). The `GetDigitalTwin` API (described [earlier in this article](#get-data-for-a-digital-twin)) does not experience this delay, so use the API call instead of querying to see your newly-updated twins if you need an instant response.
-Here is an example of JSON Patch code. This document replaces the *mass* and *radius* property values of the digital twin it is applied to.
+Here is an example of JSON Patch code. This document replaces the *mass* and *radius* property values of the digital twin it is applied to. This example shows the JSON Patch `replace` operation, which replaces the value of an existing property.
:::code language="json" source="~/digital-twins-docs-samples/models/patch.json":::
->[!NOTE]
-> This example shows the JSON Patch `replace` operation, which replaces the value of an existing property. For a full list of JSON Patch operations that can be used, including `add` and `remove`, see the [Operations for JSON Patch](http://jsonpatch.com/#operations).
-
-When updating a twin from a code project using the .NET SDK, you can create JSON patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example.
+When updating a twin from a code project using the .NET SDK, you can create JSON patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example of creating a JSON Patch document and using `UpdateDigitalTwin()` in project code.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
+> [!TIP]
+> You can maintain source timestamps on your digital twins by updating the `$metadata.<property-name>.sourceTime` field with the process described in this section. For more information on this field and other fields that are writable on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
+
### Update sub-properties in digital twin components

Recall that a model may contain components, allowing it to be made up of other models.
The patch for this situation needs to update both the model and the twin's tempe
:::code language="json" source="~/digital-twins-docs-samples/models/patch-model-2.json":::
+### Update a property's sourceTime
+
+You may optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
+
+This property can only be written using the latest version of the [Azure Digital Twins APIs/SDKs](concepts-apis-sdks.md). The value must comply with the ISO 8601 date and time format.
+
+Here's an example of a JSON Patch document that updates both the value and the `sourceTime` field of a `Temperature` property:
++
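+The shape of such a patch document is sketched below (the property value and timestamp are illustrative placeholders):
+
+```json
+[
+  {
+    "op": "replace",
+    "path": "/Temperature",
+    "value": 70
+  },
+  {
+    "op": "replace",
+    "path": "/$metadata/Temperature/sourceTime",
+    "value": "2022-03-01T10:00:00.000Z"
+  }
+]
+```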
+>[!TIP]
+>To update the `sourceTime` field on a property that's part of a component, include the component at the start of the path. In the previous example, this would mean changing the path from `/$metadata/Temperature/sourceTime` to `/myComponent/$metadata/Temperature/sourceTime`.
+### Handle conflicting update calls
+
+Azure Digital Twins ensures that all incoming requests are processed one after the other. This means that even if multiple functions try to update the same property on a twin at the same time, there's **no need** for you to write explicit locking code to handle the conflict.
digital-twins How To Monitor Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-resource-health.md
To check the health of your instance, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
-2. From your instance's menu, select **Resource health** under Support + troubleshooting. This will take you to the page for viewing resource health history.
+2. From your instance's menu, select **Resource health** under **Support + troubleshooting**. This will take you to the page for viewing resource health history.
:::image type="content" source="media/how-to-monitor-resource-health/resource-health.png" alt-text="Screenshot showing the 'Resource health' page. There is a 'Health history' section showing a daily report from the last nine days.":::
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-move-regions.md
Here are some questions to consider:
- Azure Event Grid, Azure Event Hubs, or Azure Service Bus
- Azure Functions
- Azure Logic Apps
+ - Azure Data Explorer
- Azure Time Series Insights
- Azure Maps
- Azure IoT Hub Device Provisioning Service
The exact resources you need to edit depend on your scenario, but here are some common ones:
* Azure Functions. If you have an Azure function whose code includes the host name of the original instance, you should update this value to the new instance's host name and republish the function.
* Event Grid, Event Hubs, or Service Bus.
* Logic Apps.
+* Azure Data Explorer.
* Time Series Insights.
* Azure Maps.
* IoT Hub Device Provisioning Service.
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
+
+Title: Use data history (preview) with Azure Data Explorer
+
+description: See how to set up and use data history for Azure Digital Twins, using the CLI or Azure portal.
+
+Last updated: 03/23/2022
++
+# Use Azure Digital Twins data history (preview)
+
+[Data history (preview)](concepts-data-history.md) is an Azure Digital Twins feature for automatically historizing twin property updates to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview). This data can be queried using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md) to gain insights about your environment over time.
+
+This article shows how to set up a working data history connection between Azure Digital Twins and Azure Data Explorer. It uses the [Azure CLI](/cli/azure/what-is-azure-cli) and the [Azure portal](https://portal.azure.com) to set up and connect the required data history resources, including:
+* an Azure Digital Twins instance
+* an [Event Hubs](../event-hubs/event-hubs-about.md) namespace containing an event hub
+* an [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) cluster containing a database
+
+It also contains a sample twin graph and telemetry scenario that you can use to see the historized twin updates in Azure Data Explorer.
+
+>[!NOTE]
+>You can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the REST APIs. That process isn't shown in this article.
+
+## Prerequisites
++
+>[!NOTE]
+> You can also use Azure Cloud Shell in the PowerShell environment instead of the Bash environment, if you prefer. The commands on this page are written for the Bash environment, so they may require some small adjustments to be run in PowerShell.
++
+### Set up local variables for CLI session
+
+This article provides CLI commands that you can use to create the data history resources. In order to make it easy to copy and run those commands later, you can set up local variables in your CLI session now, and then refer to those variables later in the CLI commands when creating your resources. Update the placeholders (identified with `<...>` brackets) in the commands below, and run these commands to create the variables. Make sure to follow the naming rules described in the comments. These values will be used later when creating the new resources.
+
+>[!NOTE]
+>These commands are written for the Bash environment. They can be adjusted for PowerShell if you prefer to use a PowerShell CLI environment.
+
+```azurecli-interactive
+## General Setup
+location="<your-resource-region>"
+resourcegroup="<your-resource-group-name>"
+
+## Azure Digital Twins Setup
+# Instance name can contain letters, numbers, and hyphens. It must start and end with a letter or number, and be between 4 and 62 characters long.
+dtname="<name-for-your-digital-twins-instance>"
+# Connection name can contain letters, numbers, and hyphens. It must contain at least one letter, and be between 3 and 50 characters long.
+connectionname="<name-for-your-data-history-connection>"
+
+## Event Hub Setup
+# Namespace can contain letters, numbers, and hyphens. It must start with a letter, end with a letter or number, and be between 6 and 50 characters long.
+eventhubnamespace="<name-for-your-event-hub-namespace>"
+# Event hub name can contain only letters, numbers, periods, hyphens and underscores. It must start and end with a letter or number.
+eventhub="<name-for-your-event-hub>"
+
+## Azure Data Explorer Setup
+# Cluster name can contain only lowercase alphanumeric characters. It must start with a letter, and be between 4 and 22 characters long.
+clustername="<name-for-your-cluster>"
+# Database name can contain only alphanumeric, spaces, dash and dot characters, and be up to 260 characters in length.
+databasename="<name-for-your-database>"
+```
+
+## Create an Azure Digital Twins instance with a managed identity
+
+If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-route-with-managed-identity.md#add-a-system-managed-identity-to-an-existing-instance) for it.
+
+If you don't have an Azure Digital Twins instance, set one up using the instructions in this section.
+
+# [CLI](#tab/cli)
+
+Use the following command to create a new instance with a system-managed identity. The command uses three local variables (`$dtname`, `$resourcegroup`, and `$location`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+
+```azurecli-interactive
+az dt create --dt-name $dtname --resource-group $resourcegroup --location $location --assign-identity
+```
+
+Next, use the following command to grant yourself the *Azure Digital Twins Data Owner* role on the instance. The command has one placeholder, `<owneruser@microsoft.com>`, that you should replace with your own Azure account information, and uses a local variable (`$dtname`) that was created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+
+```azurecli-interactive
+az dt role-assignment create --dt-name $dtname --assignee "<owneruser@microsoft.com>" --role "Azure Digital Twins Data Owner"
+```
+
+>[!NOTE]
+>It may take up to five minutes for this RBAC change to apply.
+
+# [Portal](#tab/portal)
+
+Follow the instructions in [Set up an Azure Digital Twins instance and authentication](how-to-set-up-instance-portal.md) to create an instance, making sure to enable a **system-managed identity** in the [Advanced](how-to-set-up-instance-portal.md#additional-setup-options) tab during setup. Then, continue through the article's instructions to set up user access permissions so that you have the Azure Digital Twins Data Owner role on the instance.
+
+Remember the name you give to your instance so you can use it later.
+++
+## Create an Event Hubs namespace and event hub
+
+The next step is to create an Event Hubs namespace and an event hub. This hub will receive digital twin property update notifications from the Azure Digital Twins instance and then forward the messages to the target Azure Data Explorer cluster.
+
+As part of the [data history connection setup](#set-up-data-history-connection) later, you'll grant the Azure Digital Twins instance the *Azure Event Hubs Data Owner* role on the event hub resource.
+
+For more information about Event Hubs and their capabilities, see the [Event Hubs documentation](../event-hubs/event-hubs-about.md).
+
+# [CLI](#tab/cli)
+
+Use the following CLI commands to create the required resources. The commands use several local variables (`$location`, `$resourcegroup`, `$eventhubnamespace`, and `$eventhub`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+
+Create an Event Hubs namespace:
+
+```azurecli-interactive
+az eventhubs namespace create --name $eventhubnamespace --resource-group $resourcegroup --location $location
+```
+
+Create an event hub in your namespace:
+
+```azurecli-interactive
+az eventhubs eventhub create --name $eventhub --resource-group $resourcegroup --namespace-name $eventhubnamespace
+```
+
+# [Portal](#tab/portal)
+
+Follow the instructions in [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md) to create an Event Hubs namespace and an event hub. (The article also contains instructions on how to create a new resource group. You can create a new resource group for the Event Hubs resources, or skip that step and use an existing resource group for your new Event Hubs resources.)
+
+Remember the names you give to these resources so you can use them later.
+++
+## Create a Kusto (Azure Data Explorer) cluster and database
+
+Next, create a Kusto (Azure Data Explorer) cluster and database to receive the data from Azure Digital Twins.
+
+As part of the [data history connection setup](#set-up-data-history-connection) later, you'll grant the Azure Digital Twins instance the *Contributor* role on at least the database (it can also be scoped to the cluster), and the *Admin* role on the database.
+
+# [CLI](#tab/cli)
+
+Use the following CLI commands to create the required resources. The commands use several local variables (`$location`, `$resourcegroup`, `$clustername`, and `$databasename`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+
+Start by adding the Kusto extension to your CLI session, if you don't have it already.
+
+```azurecli-interactive
+az extension add --name kusto
+```
+
+Next, create the Kusto cluster. The command below requires 5-10 minutes to execute, and will create an E2a v4 cluster in the developer tier. This type of cluster has a single node for the engine and data-management cluster, and is applicable for development and test scenarios. For more information about the tiers in Azure Data Explorer and how to select the right options for your production workload, see [Select the correct compute SKU for your Azure Data Explorer cluster](/azure/data-explorer/manage-cluster-choose-sku) and [Azure Data Explorer Pricing](https://azure.microsoft.com/pricing/details/data-explorer).
+
+```azurecli-interactive
+az kusto cluster create --cluster-name $clustername --sku name="Dev(No SLA)_Standard_E2a_v4" tier="Basic" --resource-group $resourcegroup --location $location --type SystemAssigned
+```
+
+Create a database in your new Kusto cluster (using the cluster name from above and in the same location). This database will be used to store contextualized Azure Digital Twins data. The command below creates a database with a soft delete period of 365 days, and a hot cache period of 31 days. For more information about the options available for this command, see [az kusto database create](/cli/azure/kusto/database?view=azure-cli-latest&preserve-view=true#az_kusto_database_create).
+
+```azurecli-interactive
+az kusto database create --cluster-name $clustername --database-name $databasename --resource-group $resourcegroup --read-write-database soft-delete-period=P365D hot-cache-period=P31D location=$location
+```
+
+# [Portal](#tab/portal)
+
+Follow the instructions in [Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-database-portal?tabs=one-click-create-database) to create an Azure Data Explorer cluster and a database in the cluster.
+
+Remember the names you give to these resources so you can use them later.
+++
+## Set up data history connection
+
+Now that you've created the required resources, use the command below to create a data history connection between the Azure Digital Twins instance, the event hub, and the Azure Data Explorer cluster.
+
+# [CLI](#tab/cli)
+
+Use the following command to create a data history connection. By default, this command assumes all resources are in the same resource group as the Azure Digital Twins instance. You can also specify resources that are in different resource groups using the parameter options for this command, which can be displayed by running `az dt data-history connection create adx -h`.
+The command uses several local variables (`$connectionname`, `$dtname`, `$clustername`, `$databasename`, `$eventhub`, and `$eventhubnamespace`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+
+```azurecli-interactive
+az dt data-history connection create adx --cn $connectionname --dt-name $dtname --adx-cluster-name $clustername --adx-database-name $databasename --eventhub $eventhub --eventhub-namespace $eventhubnamespace
+```
+
+When executing the above command, you'll be given the option of having the permissions required for the data history connection assigned on your behalf (if you've already assigned the necessary permissions, you can skip these prompts). These permissions are granted to the managed identity of your Azure Digital Twins instance. The minimum required roles are:
+* Azure Event Hubs Data Owner on the event hub
+* Contributor scoped at least to the specified database (it can also be scoped to the cluster)
+* Database principal assignment with role Admin (for table creation / management) scoped to the specified database
+
+For regular data plane operation, these roles can be reduced to a single Azure Event Hubs Data Sender role, if desired.
+
+>[!NOTE]
+> If you encounter the error "Could not create Azure Digital Twins instance connection. Unable to create table and mapping rule in database. Check your permissions for the Azure Database Explorer and run `az login` to refresh your credentials," resolve the error by adding yourself as an *AllDatabasesAdmin* under Permissions in your Azure Data Explorer cluster.
+>
+>If you're using the Cloud Shell and encounter the error "Failed to connect to MSI. Please make sure MSI is configured correctly," try running the command with a local Azure CLI installation instead.
+
+# [Portal](#tab/portal)
+
+Start by navigating to your Azure Digital Twins instance in the Azure portal (you can find the instance by entering its name into the portal search bar). Then complete the following steps.
+
+1. Select **Data history** from the Connect Outputs section of the instance's menu.
+ :::image type="content" source="media/how-to-use-data-history/select-data-history.png" alt-text="Screenshot of the Azure portal showing the data history option in the menu for an Azure Digital Twins instance." lightbox="media/how-to-use-data-history/select-data-history.png":::
+
+ Select **Create a connection**. Doing so will begin the process of creating a data history connection.
+
+2. **(SOME USERS)** If you **don't** already have a [managed identity enabled for your Azure Digital Twins instance](how-to-route-with-managed-identity.md), you'll see this page first, asking you to turn on Identity for the instance as the first step for the data history connection.
+
+ :::image type="content" source="media/how-to-use-data-history/authentication.png" alt-text="Screenshot of the Azure portal showing the first step in the data history connection setup, Authentication." lightbox="media/how-to-use-data-history/authentication.png":::
+
+ If you **do** already have a managed identity enabled, your setup will **skip this step** and you'll see the next page immediately.
+
+3. On the **Send** page, enter the details of the [Event Hubs resources](#create-an-event-hubs-namespace-and-event-hub) that you created earlier.
+ :::image type="content" source="media/how-to-use-data-history/send.png" alt-text="Screenshot of the Azure portal showing the Send step in the data history connection setup." lightbox="media/how-to-use-data-history/send.png":::
+
+ Select **Next**.
+
+4. On the **Store** page, enter the details of the [Azure Data Explorer resources](#create-a-kusto-azure-data-explorer-cluster-and-database) that you created earlier and choose a name for your database table.
+ :::image type="content" source="media/how-to-use-data-history/store.png" alt-text="Screenshot of the Azure portal showing the Store step in the data history connection setup." lightbox="media/how-to-use-data-history/store.png":::
+
+ Select **Next**.
+
+5. On the **Permission** page, select all of the checkboxes to give your Azure Digital Twins instance permission to connect to the Event Hubs and Azure Data Explorer resources. If you already have equal or higher permissions in place, you can skip this step.
+ :::image type="content" source="media/how-to-use-data-history/permission.png" alt-text="Screenshot of the Azure portal showing the Permission step in the data history connection setup." lightbox="media/how-to-use-data-history/permission.png":::
+
+ Select **Next**.
+
+6. On the **Review + create** page, review the details of your resources and select **Create connection**.
+ :::image type="content" source="media/how-to-use-data-history/review-create.png" alt-text="Screenshot of the Azure portal showing the Review and Create step in the data history connection setup." lightbox="media/how-to-use-data-history/review-create.png":::
+
+When the connection is finished creating, you'll be taken back to the **Data history** page for the Azure Digital Twins instance, which now shows details of the data history connection you've created.
++++
+After setting up the data history connection, you can optionally remove the roles granted to your Azure Digital Twins instance for accessing the Event Hubs and Azure Data Explorer resources. In order to use data history, the only role the instance needs going forward is *Azure Event Hubs Data Sender* (or a higher role that includes these permissions, such as *Azure Event Hubs Data Owner*) on the Event Hubs resource.
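+As a sketch, reducing the instance's access to the single sender role from the CLI might look like the following. This assumes the `$dtname` and `$resourcegroup` variables from earlier in this article, and `<event-hub-resource-id>` is a placeholder for your event hub's full Azure resource ID:
+
+```azurecli-interactive
+# Look up the principal ID of the instance's system-managed identity
+principalid=$(az dt show --dt-name $dtname --resource-group $resourcegroup --query identity.principalId --output tsv)
+
+# Grant only the Azure Event Hubs Data Sender role on the event hub
+az role assignment create --assignee $principalid --role "Azure Event Hubs Data Sender" --scope "<event-hub-resource-id>"
+```
+
+After this assignment is in place, any broader roles granted during setup can be removed from the event hub and Azure Data Explorer resources.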
+
+>[!NOTE]
+>Once the connection is set up, the default settings on your Azure Data Explorer cluster will result in an ingestion latency of approximately 10 minutes or less. You can reduce this latency by enabling [streaming ingestion](/azure/data-explorer/ingest-data-streaming) (less than 10 seconds of latency) or an [ingestion batching policy](/azure/data-explorer/kusto/management/batchingpolicy). For more information about Azure Data Explorer ingestion latency, see [End-to-end ingestion latency](concepts-data-history.md#end-to-end-ingestion-latency).
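+If you opt for lower latency, a sketch of enabling streaming ingestion on the data history table is shown below. This assumes streaming ingestion is also enabled at the cluster level, and `<table-name>` is a placeholder for your table's name:
+
+```kusto
+.alter table <table-name> policy streamingingestion enable
+```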
+
+## Verify with a sample twin graph
+
+Now that your data history connection is set up, you can test it with data from your digital twins.
+
+If you already have twins in your Azure Digital Twins instance that are receiving telemetry updates, you can skip this section and visualize the results using your own resources.
+
+Otherwise, continue through this section to set up a sample graph containing twins that can receive telemetry updates.
+
+You can set up a sample graph for this scenario using the **Azure Digital Twins Data Simulator**. The Azure Digital Twins Data Simulator continuously pushes telemetry to several twins in an Azure Digital Twins instance.
+
+### Create a sample graph
+
+This simulator provisions a sample twin graph and pushes telemetry data to it. The twin graph created here models pasteurization processes for a dairy company.
+
+Start by opening the [Azure Digital Twins Data Simulator](https://explorer.digitaltwins.azure.net/tools/data-pusher) web application in your browser.
++
+Enter the host name of your Azure Digital Twins instance in the Instance URL field. The host name can be found in the [portal](https://portal.azure.com) page for your instance, and has a format like `<Azure-Digital-Twins-instance-name>.api.<region-code>.digitaltwins.azure.net`. Select **Generate Environment**.
+
+You'll see confirmation messages on the screen as models, twins, and relationships are created in your environment. When the simulation is ready, the **Start simulation** button will become enabled. Select **Start simulation** to push simulated data to your Azure Digital Twins instance. To continuously update the twins in your Azure Digital Twins instance, keep this browser window in the foreground on your desktop (and complete other browser actions in a separate window).
+
+To verify that data is flowing through the data history pipeline, navigate to the [Azure portal](https://portal.azure.com) and open the Event Hubs namespace resource you created. You should see charts showing the flow of messages into and out of the namespace, indicating the flow of incoming messages from Azure Digital Twins and outgoing messages to Azure Data Explorer.
++
+### View the historized twin updates in Azure Data Explorer
+
+In this section, you'll view the historized twin updates being stored in Azure Data Explorer.
+
+Start in the [Azure portal](https://portal.azure.com) and navigate to the Azure Data Explorer cluster you created earlier. Choose the **Databases** pane from the left menu to open the database view. Find the database you created for this article and select the checkbox next to it, then select **Query**.
++
+Next, expand the cluster and database in the left pane to see the name of the table. You'll use this name to run queries on the table.
++
+Copy the command below. The command changes the table's ingestion to [batched mode](concepts-data-history.md#batch-ingestion-default) with a 10-second batch interval.
+
+```kusto
+.alter table <table-name> policy ingestionbatching @'{"MaximumBatchingTimeSpan":"00:00:10", "MaximumNumberOfItems": 500, "MaximumRawDataSizeMB": 1024}'
+```
+
+Paste the command into the query window, replacing the `<table-name>` placeholder with the name of your table. Select the **Run** button.
++
+Next, add the following command to the query window, and run it again to verify that Azure Data Explorer has ingested twin updates into the table.
+
+>[!NOTE]
+> It may take up to 5 minutes for the first batch of ingested data to appear.
+
+```kusto
+<table_name>
+| count
+```
+
+You should see in the results that the count of items in the table is greater than 0.
+
+You can also add and run the following command to view 100 records in the table:
+
+```kusto
+<table_name>
+| limit 100
+```
+
+Next, run a query based on the data of your twins to see the contextualized time series data.
+
+Use the query below to chart the outflow of all salt machine twins in the Oslo dairy. This Kusto query uses the Azure Digital Twins plugin to select the twins of interest, joins those twins against the data history time series in Azure Data Explorer, and then charts the results. Make sure to replace the `<ADT-instance>` placeholder with the URL of your instance, in the format `https://<instance-host-name>`.
+
+```kusto
+let ADTendpoint = "<ADT-instance>";
+let ADTquery = ```SELECT SALT_MACHINE.$dtId as tid
+FROM DIGITALTWINS FACTORY
+JOIN SALT_MACHINE RELATED FACTORY.contains
+WHERE FACTORY.$dtId = 'OsloFactory'
+AND IS_OF_MODEL(SALT_MACHINE , 'dtmi:assetGen:SaltMachine;1')```;
+evaluate azure_digital_twins_query_request(ADTendpoint, ADTquery)
+| extend Id = tostring(tid)
+| join kind=inner (<table_name>) on Id
+| extend val_double = todouble(Value)
+| where Key == "OutFlow"
+| render timechart with (ycolumns = val_double)
+```
+
+The results should show the outflow numbers changing over time.
++
+## Next steps
+
+To keep exploring the dairy scenario, you can view [more sample queries on GitHub](https://github.com/Azure-Samples/azure-digital-twins-getting-started/blob/main/adt-adx-queries/Dairy_operation_with_data_history/ContosoDairyDataHistoryQueries.kql) that show how you can monitor the performance of the dairy operation based on machine type, factory, maintenance technician, and various combinations of these parameters.
+
+To build Grafana dashboards that visualize the performance of the dairy operation, read [Creating dashboards with Azure Digital Twins, Azure Data Explorer, and Grafana](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries/Dairy_operation_with_data_history/Visualize_with_Grafana).
+
+For more information on using the Azure Digital Twins query plugin for Azure Data Explorer, see [Querying with the Azure Data Explorer plugin](concepts-data-explorer-plugin.md) and [this blog post](https://techcommunity.microsoft.com/t5/internet-of-things/adding-context-to-iot-data-just-became-easier/ba-p/2459987).
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman.md
The first step in importing the API set is to download a collection. Choose the
There are currently two Azure Digital Twins data plane collections available for you to choose from:
* [Azure Digital Twins Postman Collection](https://github.com/microsoft/azure-digital-twins-postman-samples): This collection provides a simple getting started experience for Azure Digital Twins in Postman. The requests include sample data, so you can run them with minimal edits required. Choose this collection if you want a digestible set of key API requests containing sample information.
    - To find the collection, navigate to the repo link and open the file named *postman_collection.json*.
-* [Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself.
- - To find the collection, navigate to the repo link and choose the folder for the latest spec version. From here, open the file called *digitaltwins.json*.
+* [Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains complete Swagger files for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself. You should also use this collection if you need a specific version of the APIs (like one that supports a preview feature, such as [data history](concepts-data-history.md)).
+ - To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.
# [Control plane](#tab/control-plane)
-The collection currently available for control plane is the [Azure Digital Twins control plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request.
+The collection currently available for control plane is the [Azure Digital Twins control plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request.
-To find the collection, navigate to the repo link and choose the folder for the latest spec version. From here, open the file called *digitaltwins.json*.
+To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
description: Overview of Azure Digital Twins, what the service comprises, and how it can be used in a wider cloud solution.
Previously updated: 02/24/2022
Last updated: 03/24/2022
You can also extract insights from the live execution environment, using Azure D
To keep the live execution environment of Azure Digital Twins up to date with the real world, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model.
-You can create a new IoT Hub for this purpose with Azure Digital Twins, or connect an existing IoT Hub along with the devices it already manages.
+You can create a new IoT Hub for this purpose with Azure Digital Twins, or [connect an existing IoT Hub](how-to-ingest-iot-hub-data.md) along with the devices it already manages.
You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).

### Output data for storage and analytics
-The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage. This functionality is provided through *event routes*, which use [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your data flows.
+The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage.
-Some things you can do with event routes include:
-* Sending digital twin data to Azure Data Explorer for querying with the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md)
-* [Connecting Azure Digital Twins to Time Series Insights](how-to-integrate-time-series-insights.md) to track time series history of each twin
-* Aligning a Time Series Model in Time Series Insights with a source in Azure Digital Twins
-* Storing Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md)
-* Analyzing Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools
-* Integrating larger workflows with Logic Apps
+To send digital twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), you can take advantage of Azure Digital Twin's [data history (preview)](concepts-data-history.md) feature, which connects an Azure Digital Twins instance to an Azure Data Explorer cluster so that digital twin property updates are automatically historized to Azure Data Explorer. You can then query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
-This option is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
+To send digital twin data to other Azure services or ultimately outside of Azure, you can create *event routes*, which utilize [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to send data through custom flows.
+
+Here are some things you can do with event routes in Azure Digital Twins:
+* [Connect Azure Digital Twins to Time Series Insights](how-to-integrate-time-series-insights.md) to track time series history of each twin
+* Store Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md)
+* Analyze Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools
+* Integrate larger workflows with [Logic Apps](../logic-apps/logic-apps-overview.md)
+* Send data to custom applications for flexible and customized actions
+
+Flexible egress of data is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
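The bulleted scenarios above all follow the same filter-then-forward shape. Here's a minimal Python sketch of that pattern (illustrative only — real event routes are configured on the Azure Digital Twins service side with endpoints such as Event Hubs or Service Bus; the notification `type` strings mirror real Azure Digital Twins notification types, but the lists standing in for endpoints and the dispatch code are invented for this sketch):

```python
# Illustrative sketch only: event routes in Azure Digital Twins are configured
# on the service, not in application code. This stand-in shows the
# filter-then-forward pattern; plain lists take the place of Event Hubs or
# Service Bus endpoints.

TWIN_UPDATE = "Microsoft.DigitalTwins.Twin.Update"
TWIN_CREATE = "Microsoft.DigitalTwins.Twin.Create"

def make_route(event_type, endpoint):
    """Return a route that forwards only events matching event_type."""
    def route(event):
        if event["type"] == event_type:
            endpoint.append(event)
            return True
        return False
    return route

analytics_endpoint, audit_endpoint = [], []
routes = [
    make_route(TWIN_UPDATE, analytics_endpoint),  # e.g. toward analytics
    make_route(TWIN_CREATE, audit_endpoint),      # e.g. toward cold storage
]

events = [
    {"type": TWIN_UPDATE, "twin": "thermostat-01", "patch": {"temp": 22.5}},
    {"type": TWIN_CREATE, "twin": "thermostat-02"},
    {"type": TWIN_UPDATE, "twin": "thermostat-02", "patch": {"temp": 19.0}},
]

for event in events:
    for route in routes:
        route(event)

print(len(analytics_endpoint), len(audit_endpoint))
```

In the real service, the filter is an expression on the route (for example, matching the notification type), and each matching event is delivered to the configured endpoint.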
## Azure Digital Twins in a solution context Azure Digital Twins is commonly used in combination with other Azure services as part of a larger IoT solution.
-A sample architecture of a complete solution using Azure Digital Twins may look like the following:
+A sample architecture of a complete solution using Azure Digital Twins may contain the following components:
* The Azure Digital Twins service instance. This service stores your twin models and your twin graph with its state, and orchestrates event processing. * One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph. * One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md). * An IoT hub to provide device management and IoT data stream capabilities.
-* Downstream services to handle tasks such as workflow integration (like [Logic Apps](../logic-apps/logic-apps-overview.md), cold storage, Azure Data Explorer, time series integration, or analytics).
+* Downstream services to provide things like workflow integration (like Logic Apps), cold storage (like Azure Data Lake), or analytics (like Azure Data Explorer or Time Series Insights).
The following diagram shows where Azure Digital Twins lies in the context of a larger Azure IoT solution.
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
The rest of this section walks you through these steps.
4. Select **Review + Create** to finish creating your instance.
- :::image type="content" source= "media/quickstart-azure-digital-twins-explorer/create-azure-digital-twins-basics.png" alt-text="Screenshot of the Create Resource process for Azure Digital Twins in the Azure portal. The described values are filled in.":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/create-azure-digital-twins-basics.png" alt-text="Screenshot of the Create Resource process for Azure Digital Twins in the Azure portal. The described values are filled in.":::
5. You will see a summary page showing the details you've entered. Confirm and create the instance by selecting **Create**.
digital-twins Troubleshoot Error 403 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-403.md
Title: "Troubleshooting failed service request: Error 403 (Forbidden)"
+ Title: "Troubleshoot failed service request: Error 403 (Forbidden)"
description: Learn how to diagnose and resolve error 403 (Forbidden) status responses from Azure Digital Twins.
Last updated 02/24/2022
-# Troubleshooting failed service request: Error 403 (Forbidden)
+# Troubleshoot failed service request: Error 403 (Forbidden)
This article describes causes and resolution steps for receiving a 403 error from service requests to Azure Digital Twins.
digital-twins Troubleshoot Error 404 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-404.md
Title: "Troubleshooting failed service request: Error 404 (Sub-Domain not found)"
+ Title: "Troubleshoot failed service request: Error 404 (Sub-Domain not found)"
description: Learn how to diagnose and resolve error 404 (Sub-Domain not found) status responses from Azure Digital Twins.
Last updated 02/22/2022
-# Troubleshooting failed service request: Error 404 (Sub-Domain not found)
+# Troubleshoot failed service request: Error 404 (Sub-Domain not found)
This article describes causes and resolution steps for receiving a 404 error from service requests to Azure Digital Twins.
digital-twins Troubleshoot Error Azure Digital Twins Explorer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-azure-digital-twins-explorer-authentication.md
Title: "Troubleshooting Azure Digital Twins Explorer: Authentication error"
+ Title: "Troubleshoot Azure Digital Twins Explorer: Authentication error"
description: Learn how to diagnose and resolve authentication errors in Azure Digital Twins Explorer. Previously updated : 02/23/2022 Last updated : 03/28/2022
-# Troubleshooting Azure Digital Twins Explorer: Authentication error
+# Troubleshoot Azure Digital Twins Explorer: Authentication errors
-This article describes causes and resolution steps for receiving an 'Authentication failed' error while running the [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/) sample on your local machine.
+This article describes causes and resolution steps for receiving authentication errors while running [Azure Digital Twins Explorer](/samples/azure-samples/digital-twins-explorer/digital-twins-explorer/).
## Symptoms
-When setting up and running the Azure Digital Twins Explorer application, attempts to authenticate with the app are met with the following error message:
+When running Azure Digital Twins Explorer, you encounter the following error message:
+
+If you are running the code locally, you might see this error message instead:
+ ## Causes ### Cause #1
-This error might occur if your Azure account does not have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. In order to access data in your instance, you must have the *Azure Digital Twins Data Reader* or *Azure Digital Twins Data Owner* role on the instance you are trying to read or manage, respectively.
+You will see these errors if your Azure account doesn't have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. In order to access data in your instance, you must have the *Azure Digital Twins Data Reader* or *Azure Digital Twins Data Owner* role on the instance you are trying to read or manage, respectively.
For more information about security and roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md).
Note that this role is different from...
* the *Owner* role on the entire Azure subscription. *Azure Digital Twins Data Owner* is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance. * the *Owner* role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and *Azure Digital Twins Data Owner* is the role that should be used for management.
- If you do not have this role, set it up to resolve the issue.
+If you do not have this role, set it up to resolve the issue.
#### Check current setup
Note that this role is different from...
If you do not have this role assignment, someone with an Owner role in your Azure subscription should run the following command to give your Azure user the appropriate role on the Azure Digital Twins instance.
-If you're an Owner on the subscription, you can run this command yourself. If you're not, contact an Owner to run this command on your behalf. The role name is either *Azure Digital Twins Data Owner* for edit access or *Azure Digital Twins Data Reader* for read access.
+If you're an Owner on the subscription, you can run this command yourself. If you're not, contact an Owner to run this command on your behalf. The role name is *Azure Digital Twins Data Owner* for edit access, or *Azure Digital Twins Data Reader* for read access.
```azurecli-interactive az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<your-Azure-AD-email>" --role "<role-name>"
digital-twins Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-known-issues.md
Title: "Troubleshooting: Known issues"
+ Title: "Troubleshoot known issues"
description: Get help recognizing and mitigating known issues with Azure Digital Twins.
Last updated 02/28/2022
-# Troubleshooting Azure Digital Twins: Known issues
+# Troubleshoot Azure Digital Twins known issues
This article provides information about known issues associated with Azure Digital Twins.
digital-twins Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md
# Mandatory fields. Title: "Troubleshooting performance"
+ Title: "Troubleshoot performance"
description: Tips for troubleshooting performance of an Azure Digital Twins instance.
#
-# Troubleshooting Azure Digital Twins performance
+# Troubleshoot Azure Digital Twins performance
If you're experiencing delays or other performance issues when working with Azure Digital Twins, use the tips in this article to help you troubleshoot.
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
To keep the program from crashing, you can add exception code around the model u
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Model_try_catch":::
-Now, if you run the program with `dotnet run` in your command window now, you'll see that you get an error code back. The output from the model creation code shows this error:
-
+Run the program again with `dotnet run` in your command window. You'll see that you get back more details about the model upload issue, including an error code of `ModelIdAlreadyExists`.
From this point forward, the tutorial will wrap all calls to service methods in try/catch handlers.
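The tutorial itself uses the C# SDK, but the wrap-every-service-call-in-a-handler pattern is language-agnostic. Here is a hedged Python sketch of the same idea — `ModelIdAlreadyExists` is the actual error code the service returns for a duplicate model upload, while the client and exception classes below are stand-ins invented for this sketch:

```python
# Stand-in for the Azure Digital Twins client used in the tutorial (which is
# the C# SDK). Only the error-handling pattern is the point here:
# ModelIdAlreadyExists is the error code returned when you upload a model
# whose ID is already registered on the instance.

class RequestFailedError(Exception):
    """Stand-in for the SDK's service-request exception."""
    def __init__(self, error_code):
        super().__init__(error_code)
        self.error_code = error_code

class FakeDigitalTwinsClient:
    def __init__(self):
        self._models = set()

    def create_models(self, model_ids):
        for model_id in model_ids:
            if model_id in self._models:
                raise RequestFailedError("ModelIdAlreadyExists")
            self._models.add(model_id)

client = FakeDigitalTwinsClient()
model_id = "dtmi:example:SampleModel;1"

for attempt in range(2):  # the second upload hits the duplicate-ID error
    try:
        client.create_models([model_id])
        print("Upload succeeded")
    except RequestFailedError as exc:
        print(f"Upload failed: {exc.error_code}")
```

Catching the service exception instead of letting it propagate is what keeps the program from crashing when a call fails for a recoverable reason.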
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
Otherwise, follow the instructions in [Set up an instance and authentication](ho
After you set up your Azure Digital Twins instance, make a note of the following values that you'll need to connect to the instance later: * The instance's **host name**
-* The **Azure subscription** that you used to create the instance.
-
-You can get both of these values for your instance in the output of the following Azure CLI command:
-
-```azurecli-interactive
-az dt show --dt-name <Azure-Digital-Twins-instance-name>
-```
-
+* The **Azure subscription** that you used to create the instance
+
+>[!TIP]
+>If you know the name of your instance, you can use the following CLI command to get the host name and subscription values:
+>
+>```azurecli-interactive
+>az dt show --dt-name <Azure-Digital-Twins-instance-name>
+>```
+>
+>They'll appear in the output like this:
+>:::image type="content" source="media/tutorial-command-line/cli/instance-details.png" alt-text="Screenshot of Cloud Shell browser window showing the output of the az dt show command. The hostName field and subscription ID are highlighted.":::
## Model a physical environment with DTDL
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
We currently recommend creating a minimum of two custom roles for the APP ID, on
} ```
-The json above must be stored in three text files, and you can use either the AzureRM, AZ PowerShell cmdlets, or Azure CLI to create the roles using either **New-AzureRmRoleDefinition (AzureRM)** or **New-AzRoleDefinition (AZ)**.
+The JSON above must be stored in two text files, and you can use the AzureRM or AZ PowerShell cmdlets, or the Azure CLI, to create the roles using either **New-AzureRmRoleDefinition (AzureRM)** or **New-AzRoleDefinition (AZ)**.
For more information, see the article [Azure custom roles](../role-based-access-control/custom-roles.md).
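As a sketch of the "two text files" step, the snippet below writes one JSON file per role before you pass each file to `New-AzRoleDefinition` (or `az role definition create`). The file names and role contents here are placeholders, not the article's actual role definitions:

```python
import json
import pathlib
import tempfile

# Placeholder role definitions: the real documents are the JSON bodies shown
# in the article. The point is only the mechanics of writing one file per
# custom role before creating each role with PowerShell or the Azure CLI.
roles = {
    "dms-sqlmi-role.json": {
        "Name": "DMS SQL MI custom role (placeholder)",
        "IsCustom": True,
        "Actions": [],           # fill in from the article's JSON
        "AssignableScopes": [],  # fill in with your subscription scope
    },
    "dms-storage-role.json": {
        "Name": "DMS storage custom role (placeholder)",
        "IsCustom": True,
        "Actions": [],
        "AssignableScopes": [],
    },
}

out_dir = pathlib.Path(tempfile.mkdtemp())
for file_name, definition in roles.items():
    (out_dir / file_name).write_text(json.dumps(definition, indent=2))

written = sorted(p.name for p in out_dir.glob("*.json"))
print(written)
```

You would then run, for example, `New-AzRoleDefinition -InputFile ./dms-sqlmi-role.json` once per file.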
event-grid Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema.md
To learn about the properties in the data object, see the event source:
* [Blob storage](event-schema-blob-storage.md) * [Event Hubs](event-schema-event-hubs.md) * [IoT Hub](event-schema-iot-hub.md)
-* [Media Services](../media-services/latest/media-services-event-schemas.md?toc=%2fazure%2fevent-grid%2ftoc.json)
+* [Media Services](/azure/media-services/latest/monitoring/media-services-event-schemas?toc=%2fazure%2fevent-grid%2ftoc.json)
* [Resource groups (management operations)](event-schema-resource-groups.md) * [Service Bus](event-schema-service-bus.md) * [Azure SignalR](event-schema-azure-signalr.md)
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
blobStorageAccountKey=$(az storage account keys list -g $resourceGroupName -n $b
storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName --name $blobStorageAccount --query connectionString --output tsv)
-outputFileName="resized-image.png"
-
-az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName --settings FUNCTIONS_EXTENSION_VERSION=~2 BLOB_CONTAINER_NAME=thumbnails AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString OUT_BLOB_NAME=$outputFileName FUNCTIONS_WORKER_RUNTIME=node WEBSITE_NODE_DEFAULT_VERSION=~10
+az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName --settings FUNCTIONS_EXTENSION_VERSION=~2 BLOB_CONTAINER_NAME=thumbnails AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString FUNCTIONS_WORKER_RUNTIME=node WEBSITE_NODE_DEFAULT_VERSION=~10
```
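The effect of the change above is easier to see when the command-line settings are laid out as a dictionary. This is a hypothetical rendering with placeholder values, not output from the command — the notable difference in the updated command is that `OUT_BLOB_NAME` (previously `resized-image.png`) is no longer set:

```python
# Hypothetical rendering of the app settings passed to
# `az functionapp config appsettings set` above; the angle-bracket values are
# placeholders for the shell variables. OUT_BLOB_NAME was removed from the
# updated command.
app_settings = {
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "BLOB_CONTAINER_NAME": "thumbnails",
    "AZURE_STORAGE_ACCOUNT_NAME": "<blobStorageAccount>",
    "AZURE_STORAGE_ACCOUNT_ACCESS_KEY": "<blobStorageAccountKey>",
    "AZURE_STORAGE_CONNECTION_STRING": "<storageConnectionString>",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WEBSITE_NODE_DEFAULT_VERSION": "~10",
}

assert "OUT_BLOB_NAME" not in app_settings
print(sorted(app_settings))
```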
expressroute Circuit Placement Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/circuit-placement-api.md
The ExpressRoute partner circuit placement API allows ExpressRoute partners to p
This API uses the expressRouteCrossConnection resource type. For more information, see [ExpressRoute CrossConnection API development and integration](cross-connections-api-development.md).
+## Register provider subscription to the expressRouteProviderPort resource type
+To use the circuit placement API, you first need to enroll your subscription to access the port resource type.
+
+1. Sign in to Azure and select the subscription you wish to enroll.
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+
+ Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
+ ```
+
+1. Register your subscription.
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -FeatureName AllowExpressRoutePorts -ProviderNamespace Microsoft.Network
+ ```
+
+Once enrolled, verify that the **Microsoft.Network** resource provider is registered to your subscription. Registering a resource provider configures your subscription to work with the resource provider.
+ ## Workflow 1. ExpressRoute customers share the service key of the target ExpressRoute circuit.
The ExpressRoute partner can list all port pairs within the target provider subs
### To get a list of all port pairs for a provider ```rest
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?api-version={api-version}
{ "parameters": {
- "api-version": "2020-03-01",
+ "api-version": "2021-12-01",
"subscriptionId": "subid" }, "responses": {
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.
### To get a list of all port pairs by location ```rest
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?location={locationName}
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?location={locationName}&api-version={api-version}
{ "parameters": {
- "api-version": "2020-03-01",
+ "api-version": "2021-12-01",
"locationName": "SiliconValley", "subscriptionId": "subid" },
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.
### To get a specific port pair using the port pair descriptor ID. ```rest
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts/{portPairDescriptor}
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts/{portPairDescriptor}?api-version={api-version}
{ "parameters": {
- "api-version": "2020-03-01",
+ "api-version": "2021-12-01",
"portPairDescriptor": " bvtazureixpportpair1", "subscriptionId": "subid" },
Currently this API is used by providers to update provisioning state of circuit.
Currently the primaryAzurePort and secondaryAzurePort are read-only properties. Now we've disabled the read-only properties for these ports. ```rest
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCrossConnections/{crossConnectionName}?api-version=2021-02-01
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCrossConnections/{crossConnectionName}?api-version={api-version}
{ "parameters": {
- "api-version": "2020-03-01",
+ "api-version": "2021-12-01",
"crossConnectionName": "The name of the cross connection", "subscriptionId": "subid" }
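The common thread in the corrected requests above is that `api-version` now travels as a query parameter on the request URL. A small Python sketch of assembling those URLs (illustrative; the path segments and values come from the examples above):

```python
from urllib.parse import urlencode

BASE = "https://management.azure.com"

def provider_ports_url(subscription_id, api_version, location=None):
    """Build the expressRouteProviderPorts list URL, carrying api-version
    as a query parameter as in the corrected examples above."""
    path = (f"{BASE}/subscriptions/{subscription_id}"
            "/providers/Microsoft.Network/expressRouteProviderPorts")
    query = {"api-version": api_version}
    if location is not None:  # optional filter, as in the by-location example
        query["location"] = location
    return f"{path}?{urlencode(query)}"

url = provider_ports_url("subid", "2021-12-01", location="SiliconValley")
print(url)
```

Omitting `location` yields the list-all-port-pairs form; a port pair descriptor would instead be appended as a path segment before the query string.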
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
+
+ Title: 'Quickstart: Create an Azure ExpressRoute circuit using Bicep'
+description: This quickstart shows you how to create an ExpressRoute circuit using Bicep.
+++ Last updated : 03/24/2022+++++
+# Quickstart: Create an ExpressRoute circuit with private peering using Bicep
+
+This quickstart describes how to use Bicep to create an ExpressRoute circuit with private peering.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/expressroute-private-peering-vnet).
+
+In this quickstart, you'll create an ExpressRoute circuit with *Equinix* as the service provider. The circuit will use the *Premium SKU*, with a bandwidth of *50 Mbps* and a peering location of *Washington DC*. Private peering will be enabled with a primary and secondary subnet of *192.168.10.16/30* and *192.168.10.20/30*, respectively. A virtual network will also be created along with a *HighPerformance ExpressRoute gateway*.
++
+Multiple Azure resources have been defined in the Bicep file:
+
+* [**Microsoft.Network/expressRouteCircuits**](/azure/templates/microsoft.network/expressRouteCircuits)
+* [**Microsoft.Network/expressRouteCircuits/peerings**](/azure/templates/microsoft.network/expressRouteCircuits/peerings) (Used to enable private peering on the circuit)
+* [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networkSecurityGroups) (Network security group applied to the subnets in the virtual network)
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks)
+* [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicIPAddresses) (Public IP is used by the ExpressRoute gateway)
+* [**Microsoft.Network/virtualNetworkGateways**](/azure/templates/microsoft.network/virtualNetworkGateways) (ExpressRoute gateway is used to link VNet to the circuit)
+
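For reference, the configuration described above can be collected into a single parameters object. This is a hypothetical rendering — the parameter names below are illustrative, not the quickstart template's actual parameter names (consult the Bicep file itself for those):

```python
# Hypothetical parameter names; the values come from the quickstart's
# description of the deployment above. Check the actual Bicep file for the
# template's real parameter names before deploying.
circuit_parameters = {
    "serviceProviderName": "Equinix",
    "peeringLocation": "Washington DC",
    "bandwidthInMbps": 50,
    "skuTier": "Premium",
    "primaryPeerAddressPrefix": "192.168.10.16/30",
    "secondaryPeerAddressPrefix": "192.168.10.20/30",
    "gatewaySku": "HighPerformance",
}

# Both private-peering prefixes must be /30 networks.
for key in ("primaryPeerAddressPrefix", "secondaryPeerAddressPrefix"):
    assert circuit_parameters[key].endswith("/30")

print(circuit_parameters["skuTier"], circuit_parameters["bandwidthInMbps"])
```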
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+> [!NOTE]
+> You will need to call the provider to complete the provisioning process before you can link the virtual network to the circuit.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of the resources it contains.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created:
+
+* An ExpressRoute circuit
+* A virtual network
+* An ExpressRoute gateway
+* A public IP address
+* A network security group
+
+To learn how to link a virtual network to a circuit, continue to the ExpressRoute tutorials.
+
+> [!div class="nextstepaction"]
+> [ExpressRoute tutorials](expressroute-howto-linkvnet-portal-resource-manager.md)
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Previously updated : 03/22/2022 Last updated : 03/28/2022
Usage example:
`Transform-Policy -PolicyId /subscriptions/XXXXX-XXXXXX-XXXXX/resourceGroups/some-resource-group/providers/Microsoft.Network/firewallPolicies/policy-name` > [!IMPORTANT]
-> The script doesn't migrate Threat Intelligence settings. You'll need to note those settings before proceeding and migrate them manually.
+> The script doesn't migrate Threat Intelligence and SNAT private ranges settings. You'll need to note those settings before proceeding and migrate them manually. Otherwise, you might encounter inconsistent traffic filtering with your new upgraded firewall.
This script requires the latest Azure PowerShell. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
+
+ Title: 'Quickstart: Create an Azure Front Door profile - Azure portal'
+description: This quickstart shows how to use Azure Front Door service for your highly available and high-performance global web application by using the Azure portal.
+++++
+ na
+ Last updated : 03/22/2022++
+#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
++
+# Quickstart: Create an Azure Front Door profile - Azure portal
++
+In this quickstart, you'll learn how to create an Azure Front Door profile using the Azure portal. You can create an Azure Front Door profile through *Quick Create* with basic configurations, or through *Custom create*, which allows a more advanced configuration. With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origins. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname.
+
+## Prerequisites
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create Front Door profile - Quick Create
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door and CDN profiles*. Then select **Create**.
+
+1. On the **Compare offerings** page, select **Quick create**. Then select **Continue to create a Front Door**.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-quick-create.png" alt-text="Screenshot of compare offerings.":::
+
+1. On the **Create a Front Door profile** page, enter, or select the following settings.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-quick-create-2.png" alt-text="Screenshot of Front Door quick create page.":::
+
+ | Settings | Description |
+ | | |
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *myAFDResourceGroup* in the text box.|
+ | **Name** | Give your profile a name. This example uses **myAzureFrontDoor**. |
+ | **Tier** | Select either Standard or Premium tier. Standard tier is content delivery optimized. Premium tier builds on Standard tier and is focused on security. See [Tier Comparison](standard-premium/tier-comparison.md). |
+ | **Endpoint name** | Enter a globally unique name for your endpoint. |
+ | **Origin type** | Select the type of resource for your origin. In this example, we select an App service as the origin that has Private Link enabled. |
+ | **Origin host name** | Enter the hostname for your origin. |
 | **Private link** | Enable private link service if you want to have a private connection between your Azure Front Door and your origin. Only internal load balancers, Storage Blobs and App services are supported. For more information, see [Private Link service with Azure Front Door](private-link.md). |
+ | **Caching** | Select the check box if you want to cache contents closer to your users globally using Azure Front Door's edge POPs and the Microsoft network. |
+ | **WAF policy** | Select **Create new** or select an existing WAF policy from the dropdown if you want to enable this feature. |
+
+ > [!NOTE]
+ > When creating an Azure Front Door profile, you must select an origin from the same subscription the Front Door is created in.
+ >
+
+1. Select **Review + Create** and then select **Create** to deploy your Azure Front Door profile.
+
+ > [!NOTE]
+ > * It may take a few minutes for the Azure Front Door configuration to be propagated to all edge POPs.
+ > * If you enabled Private Link, go to the origin's resource page. Select **Networking** > **Configure Private Link**. Then select the pending request from Azure Front Door, and select **Approve**. After a few seconds, your origin will be accessible through Azure Front Door in a secured manner.
+
+## Create Front Door profile - Custom Create
+
+### Create two Web App instances
+
+If you already have services to use as an origin, skip to [create a Front Door for your application](#create-a-front-door-for-your-application).
+
+In this example, we create two Web App instances that are deployed in two different Azure regions. Both web application instances will run in *Active/Active* mode, so either one can service incoming traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover.
+
+Use the following steps to create two Web Apps used in this example.
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+
+1. On the top left-hand side of the portal, select **+ Create a resource**. Then search for **Web App**. Select **Create** to begin configuring the first Web App.
+
+1. On the **Basics** tab of **Create Web App** page, enter, or select the following information.
+
 :::image type="content" source="./media/create-front-door-portal/create-web-app.png" alt-text="Screenshot of the Basics tab of the Create Web App page in the Azure portal.":::
+
+ | Setting | Description |
+ |--|--|
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *myAppResourceGroup* in the text box. |
+ | **Name** | Enter a unique **Name** for your web app. This example uses *webapp-contoso-001*. |
+ | **Publish** | Select **Code**. |
+ | **Runtime stack** | Select **.NET Core 3.1 (LTS)**. |
+ | **Operating System** | Select **Windows**. |
+ | **Region** | Select **Central US**. |
+ | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
+ | **Sku and size** | Select **Standard S1 100 total ACU, 1.75-GB memory**. |
+
+1. Select **Review + create**, review the summary, and then select **Create**. Deployment of the Web App can take up to a minute.
+
+1. After you create the first Web App, create a second Web App. Use the same settings as above, except for the following settings:
+
+ | Setting | Description |
+ |--|--|
+ | **Resource group** | Select **Create new** and enter *myAppResourceGroup2*. |
+ | **Name** | Enter a unique name for your Web App, in this example, *webapp-contoso-002*. |
+ | **Region** | A different region, in this example, *South Central US* |
+ | **App Service plan** > **Windows Plan** | Select **New** and enter *myAppServicePlanSouthCentralUS*, and then select **OK**. |
+
+### Create a Front Door for your application
+
+Configure Azure Front Door to direct user traffic based on lowest latency between the two Web Apps origins. You will also secure your Azure Front Door with a Web Application Firewall (WAF) policy.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door and CDN profiles*. Then select **Create**.
+
+1. On the **Compare offerings** page, select **Custom create**. Then select **Continue to create a Front Door**.
+
+1. On the **Basics** tab, enter or select the following information, and then select **Next: Secret**.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-custom-create-2.png" alt-text="Create Front Door profile":::
+
+ | Setting | Value |
+ | | |
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *myAFDResourceGroup* into the text box. |
+ | **Resource group location** | Select **East US**. |
+ | **Name** | Enter a unique name in this subscription. This example uses **Webapp-Contoso-AFD**. |
+ | **Tier** | Select **Premium**. |
+
+1. *Optional*: **Secrets**. If you plan to use managed certificates, this step is optional. If you have an existing Key Vault in Azure that you plan to use to bring your own certificate for a custom domain, select **Add a certificate**. You can also add a certificate in the management experience after creation.
+
+ > [!NOTE]
+ > As a user, you need the right permissions to add a certificate from Azure Key Vault.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-custom-create-secret.png" alt-text="Screenshot of add a secret in custom create.":::
+
+1. In the **Endpoint** tab, select **Add an endpoint** and give your endpoint a globally unique name. You can create more endpoints in your Azure Front Door profile after you complete the deployment. This example uses *contoso-frontend*. Select **Add** to add the endpoint.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-custom-create-add-endpoint.png" alt-text="Screenshot of add an endpoint.":::
+
+1. Next, select **+ Add a route** to configure routing to your Web App origin.
+
+ :::image type="content" source="./media/create-front-door-portal/add-route.png" alt-text="Screenshot of add a route from the endpoint page." lightbox="./media/create-front-door-portal/add-route-expanded.png":::
+
+1. On the **Add a route** page, enter or select the following information, and then select **Add** to add the route to the endpoint configuration.
+
+ :::image type="content" source="./media/create-front-door-portal/add-route-page.png" alt-text="Screenshot of add a route configuration page." lightbox="./media/create-front-door-portal/add-route-page-expanded.png":::
+
+ | Setting | Description |
+ |--|--|
+ | Name | Enter a name to identify the mapping between domains and origin group. |
+ | Domains | A domain name has been auto-generated for you to use. If you want to add a custom domain, select **Add a new domain**. This example will use the default. |
+ | Patterns to match | Set all the URLs this route will accept. This example will use the default, and accept all URL paths. |
+ | Accepted protocols | Select the protocol the route will accept. This example will accept both HTTP and HTTPS requests. |
+ | Redirect | Enable this setting to redirect all HTTP traffic to the HTTPS endpoint. |
+ | Origin group | Select **Add a new origin group**. For the origin group name, enter **myOriginGroup**. Then select **+ Add an origin**. For the first origin, enter **WebApp1** for the *Name* and then for the *Origin Type* select **App services**. In the *Host name*, select **webapp-contoso-001.azurewebsites.net**. Select **Add** to add the origin to the origin group. Repeat the steps to add the second Web App as an origin. For the origin *Name*, enter **WebApp2**. The *Host name* is **webapp-contoso-002.azurewebsites.net**. Once both Web App origins have been added, select **Add** to save the origin group configuration. |
+ | Origin path | Leave blank. |
+ | Forwarding protocol | Select the protocol that will be forwarded to the origin group. This example matches the protocol of the incoming requests. |
+ | Caching | Select the check box if you want to cache contents closer to your users globally using Azure Front Door's edge POPs and the Microsoft network. |
+ | Rules | Once you've deployed the Azure Front Door profile, you can configure Rules to apply to your route. |
+
+1. Select **+ Add a policy** to apply a Web Application Firewall (WAF) policy to one or more domains in the Azure Front Door profile.
+
+ :::image type="content" source="./media/create-front-door-portal/add-policy.png" alt-text="Screenshot of add a policy from endpoint page." lightbox="./media/create-front-door-portal/add-policy-expanded.png":::
+
+1. On the **Add security policy** page, enter a name to identify this security policy. Then select domains you want to associate the policy with. For *WAF Policy*, you can select a previously created policy or select **Create New** to create a new policy. Select **Save** to add the security policy to the endpoint configuration.
+
+ :::image type="content" source="./media/create-front-door-portal/add-security-policy.png" alt-text="Screenshot of add security policy page.":::
+
+1. Select **Review + Create**, and then **Create** to deploy the Azure Front Door profile. It will take a few minutes for configurations to be propagated to all edge locations.
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, enter the endpoint hostname, for example `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
+
+If you created these apps in this quickstart, you'll see an information page.
+
+To test instant global failover, do the following steps:
+
+1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
+
+1. In the Azure portal, search for and select *App services*. Scroll down to find one of your Web Apps, **WebApp-Contoso-001** in this example.
+
+1. Select your web app, select **Stop**, and then select **Yes** to confirm.
+
+1. Refresh your browser. You should see the same information page.
+
+ > [!TIP]
+ > There may be a delay before traffic is directed to the second Web App. You may need to refresh the page again.
+
+1. Go to the second Web app, and stop that one as well.
+
+1. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="./media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+
+## Clean up resources
+
+After you're done, you can remove all the items you created. Deleting a resource group also deletes its contents. If you don't intend to use this Azure Front Door, you should remove these resources to avoid unnecessary charges.
+
+1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
+
+1. Filter or scroll down to find a resource group, such as **myAFDResourceGroup**, **myAppResourceGroup** or **myAppResourceGroup2**.
+
+1. Select the resource group, then select **Delete resource group**.
+
+ > [!WARNING]
+ > Once a resource group has been deleted, there is no way to recover the resources.
+
+1. Type the resource group name to verify, and then select **Delete**.
+
+1. Repeat the procedure for the other two resource groups.
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+
+> [!div class="nextstepaction"]
+> [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
+
+ Title: 'End-to-end TLS with Azure Front Door'
+description: Learn about end-to-end TLS encryption when using Azure Front Door.
+ Last updated : 03/14/2022
+# End-to-end TLS with Azure Front Door
+
+Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and the web browser remain private and encrypted.
+
+To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP, it's highly recommended that you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend.
+
+## End-to-end TLS encryption
+
+End-to-end TLS allows you to secure sensitive data while in transit to the backend while benefiting from Azure Front Door features like global load balancing and caching. Some of the features also include URL-based routing, TCP split, caching on edge location closest to the clients, and customizing HTTP requests at the edge.
+
+Azure Front Door offloads the TLS sessions at the edge and decrypts client requests. It then applies the configured routing rules to route the requests to the appropriate backend in the backend pool. Azure Front Door then starts a new TLS connection to the backend and re-encrypts all data using the backend's certificate before transmitting the request to the backend. Any response from the backend is encrypted through the same process back to the end user. You can configure your Azure Front Door to use HTTPS as the forwarding protocol to enable end-to-end TLS.
+
+## Supported TLS versions
+
+Azure Front Door supports three versions of the TLS protocol: TLS versions 1.0, 1.1, and 1.2. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+
+Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in RFC 5246, Azure Front Door doesn't currently support client/mutual authentication.
+
+You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. Specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. When Azure Front Door initiates TLS traffic to the backend, it will attempt to negotiate the best TLS version that the backend can reliably and consistently accept.
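To illustrate what a "minimum TLS version" setting enforces, here's a client-side sketch using Python's standard `ssl` module. This is illustrative only, not Azure Front Door configuration: the context below refuses any handshake that can't negotiate at least TLS 1.2, which is the same contract Front Door applies to incoming client connections.

```python
import ssl

# Sketch only: Python's ssl module illustrates the effect of a minimum
# TLS version; this is not how Azure Front Door itself is configured.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake that can only negotiate TLS 1.0 or 1.1 will now fail,
# while TLS 1.2 and 1.3 remain acceptable.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

A connection opened with this context against a server limited to TLS 1.0/1.1 raises an `ssl.SSLError` during the handshake, mirroring how a client below the configured minimum is rejected.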
+
+## Supported certificates
+
+When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed Certificate Authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected.
+
+Certificates from internal CAs or self-signed certificates aren't allowed.
+
+## Online Certificate Status Protocol (OCSP) stapling
+
+OCSP stapling is supported by default in Azure Front Door and no configuration is required.
+
+## Backend TLS connection (Azure Front Door to backend)
+
+For HTTPS connections, Azure Front Door expects that your backend presents a certificate from a valid Certificate Authority (CA) with subject name(s) matching the backend *hostname*. As an example, if your backend hostname is set to `myapp-centralus.contosonews.net` and the certificate that your backend presents during the TLS handshake doesn't have `myapp-centralus.contosonews.net` or `*.contosonews.net` in the subject name, then Azure Front Door will refuse the connection, resulting in an error.
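The subject-name matching described above (exact match, or a wildcard covering one label) can be sketched in a few lines of Python. This is an illustrative simplification, not the actual Azure Front Door implementation:

```python
def subject_matches(backend_hostname: str, cert_names: list[str]) -> bool:
    """Illustrative sketch of the subject-name check described above;
    not the actual Azure Front Door validation code."""
    host = backend_hostname.lower()
    for name in cert_names:
        name = name.lower()
        if name.startswith("*."):
            suffix = name[1:]                # "*.contosonews.net" -> ".contosonews.net"
            if host.endswith(suffix):
                stem = host[: -len(suffix)]
                if stem and "." not in stem:  # wildcard covers exactly one label
                    return True
        elif name == host:
            return True
    return False

# The example from the text: a wildcard certificate covers the backend host.
assert subject_matches("myapp-centralus.contosonews.net", ["*.contosonews.net"])
# A certificate for an unrelated domain would cause the connection to be refused.
assert not subject_matches("myapp-centralus.contosonews.net", ["www.fabrikam.example"])
```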
+
+> [!NOTE]
+> The certificate must have a complete certificate chain with leaf and intermediate certificates. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without a complete chain is presented, requests that involve that certificate aren't guaranteed to work as expected.
+
+From a security standpoint, Microsoft doesn't recommend disabling the certificate subject name check. In certain use cases, such as testing, you can disable the check as a workaround for failing HTTPS connections. Note that the origin still needs to present a certificate with a valid trusted chain, but it doesn't have to match the origin host name. The option to disable this feature is different for each Azure Front Door tier:
+
+* Azure Front Door Standard and Premium - it is present in the origin settings.
+* Azure Front Door (classic) - it is present under the Azure Front Door settings in the Azure portal and in the BackendPoolsSettings in the Azure Front Door API.
+
+## Frontend TLS connection (Client to Front Door)
+
+To enable the HTTPS protocol for secure delivery of contents on an Azure Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
+
+* Azure Front Door managed certificate provides a standard TLS/SSL certificate via DigiCert and is stored in Azure Front Door's Key Vault.
+
+* If you choose to use your own certificate, you can onboard a certificate from a supported CA that can be a standard TLS, extended validation certificate, or even a wildcard certificate.
+
+* Self-signed certificates aren't supported. LearnΓÇ»[how to enable HTTPS for a custom domain](front-door-custom-domain-https.md).
+
+### Certificate autorotation
+
+For the Azure Front Door managed certificate option, certificates are managed by Azure Front Door and auto-rotated within 90 days of expiry. For the Azure Front Door Standard/Premium managed certificate option, certificates are managed and auto-rotated within 45 days of expiry. If you're using an Azure Front Door managed certificate and the certificate expiry date is less than 60 days away (or 30 days for the Standard/Premium SKU), file a support ticket.
+
+For your own custom TLS/SSL certificate:
+
+1. Set the secret version to 'Latest' for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your key vault. For custom certificates, the certificate gets auto-rotated within 1-2 days of a newer certificate version becoming available, regardless of the certificate's expiration time.
+
+1. If a specific version is selected, autorotation isn't supported. You'll have to reselect the new version manually to rotate the certificate. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
+
+ You'll need to ensure that the service principal for Front Door has access to the key vault. Refer to how to grant access to your key vault. The updated certificate rollout operation by Azure Front Door won't cause any production downtime, provided the subject name or subject alternative name (SAN) for the certificate didn't change.
+
+## Supported cipher suites
+
+For TLS 1.2, the following cipher suites are supported:
+
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+
+> [!NOTE]
+> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. Windows 8.1, 8, and 7 aren't compatible with these ECDHE cipher suites. The DHE cipher suites have been provided for compatibility with those operating systems.
+
+When using custom domains with TLS 1.0/1.1 enabled, the following cipher suites are supported:
+
+* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+* TLS_RSA_WITH_AES_256_GCM_SHA384
+* TLS_RSA_WITH_AES_128_GCM_SHA256
+* TLS_RSA_WITH_AES_256_CBC_SHA256
+* TLS_RSA_WITH_AES_128_CBC_SHA256
+* TLS_RSA_WITH_AES_256_CBC_SHA
+* TLS_RSA_WITH_AES_128_CBC_SHA
+* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+
+Azure Front Door doesn't support configuring specific cipher suites. You can get your own custom TLS/SSL certificate from a Certificate Authority (for example, Verisign, Entrust, or DigiCert) and have specific cipher suites marked on the certificate when you generate it.
+
+## Next steps
+
+* [Configure a custom domain](front-door-custom-domain.md) for Azure Front Door.
+* [Enable HTTPS for a custom domain](front-door-custom-domain-https.md).
frontdoor Front Door Backend Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-backend-pool.md
- Title: Backends and backend pools in Azure Front Door | Microsoft Docs
-description: This article describes what backends and backend pools are in Front Door configuration.
- Previously updated : 09/28/2020
-# Backends and backend pools in Azure Front Door
-This article describes concepts about how to map your web application deployment with Azure Front Door. It also explains the different terminologies used in the Front Door configuration around the application backends.
-
-## Backends
-A backend refers to a web application deployment in a region. Front Door supports both Azure and non-Azure resources in the backend pool. The application can either be in your on-premises datacenter or located in another cloud provider.
-
-A Front Door backend refers to the host name or public IP of your application that serves your client requests. Backends shouldn't be confused with your database tier, storage tier, and so on. Backends should be viewed as the public endpoint for your application backend. When you add a backend to a Front Door backend pool, you must also add the following:
-- **Backend host type**. The type of resource you want to add. Front Door supports autodiscovery of your app backends from App Service, Cloud Service, or Storage. If you want a different resource in Azure or even a non-Azure backend, select **Custom host**.
- >[!IMPORTANT]
- >During configuration, APIs don't validate if the backend is inaccessible from Front Door environments. Make sure that Front Door can reach your backend.
-- **Subscription and Backend host name**. If you haven't selected **Custom host** for backend host type, select your backend by choosing the appropriate subscription and the corresponding backend host name in the UI.
-- **Backend host header**. The host header value sent to the backend for each request. For more information, see [Backend host header](#hostheader).
-- **Priority**. Assign priorities to your different backends when you want to use a primary service backend for all traffic, and provide backups if the primary or the backup backends are unavailable. For more information, see [Priority](front-door-routing-methods.md#priority).
-- **Weight**. Assign weights to your different backends to distribute traffic across a set of backends, either evenly or according to weight coefficients. For more information, see [Weights](front-door-routing-methods.md#weighted).
-### <a name = "hostheader"></a>Backend host header
-
-Requests forwarded by Front Door to a backend include a host header field that the backend uses to retrieve the targeted resource. The value for this field typically comes from the backend URI that has the host header and port.
-
-For example, a request made for `www.contoso.com` will have the host header www.contoso.com. If you use Azure portal to configure your backend, the default value for this field is the host name of the backend. If your backend is contoso-westus.azurewebsites.net, in the Azure portal, the autopopulated value for the backend host header will be contoso-westus.azurewebsites.net. However, if you use Azure Resource Manager templates or another method without explicitly setting this field, Front Door will send the incoming host name as the value for the host header. If the request was made for www\.contoso.com, and your backend is contoso-westus.azurewebsites.net that has an empty header field, Front Door will set the host header as www\.contoso.com.
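The fallback rule described above — use the configured backend host header if set, otherwise pass through the incoming host name — can be sketched in a couple of lines. This is an illustrative simplification, not Azure source code:

```python
from typing import Optional

def effective_host_header(incoming_host: str, configured_header: Optional[str]) -> str:
    """Sketch of the rule described above: an empty backend host header
    field means the incoming request's host name is forwarded."""
    return configured_header if configured_header else incoming_host

# Portal default: the field is auto-populated with the backend's own host name.
assert effective_host_header("www.contoso.com", "contoso-westus.azurewebsites.net") \
    == "contoso-westus.azurewebsites.net"
# Field left empty (e.g. an ARM template that never set it): pass-through.
assert effective_host_header("www.contoso.com", None) == "www.contoso.com"
```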
-
-Most app backends (Azure Web Apps, Blob storage, and Cloud Services) require the host header to match the domain of the backend. However, the frontend host that routes to your backend will use a different hostname such as `www.contoso.net`.
-
-If your backend requires the host header to match the backend host name, make sure that the backend host header includes the host name of the backend.
-
-#### Configuring the backend host header for the backend
-
-To configure the **backend host header** field for a backend in the backend pool section:
-
-1. Open your Front Door resource and select the backend pool with the backend to configure.
-
-2. Add a backend if you haven't done so, or edit an existing one.
-
-3. Set the backend host header field to a custom value or leave it blank. The hostname for the incoming request will be used as the host header value.
-
-## Backend pools
-A backend pool in Front Door refers to the set of backends that receive similar traffic for their app. In other words, it's a logical grouping of your app instances across the world that receive the same traffic and respond with expected behavior. These backends can be deployed across different regions or within the same region, and all backends can run in an Active/Active deployment mode or an Active/Passive configuration.
-
-A backend pool defines how the different backends should be evaluated via health probes. It also defines how load balancing occurs between them.
-
-### Health probes
-Front Door sends periodic HTTP/HTTPS probe requests to each of your configured backends. Probe requests determine the proximity and health of each backend to load balance your end-user requests. Health probe settings for a backend pool define how we poll the health status of app backends. The following settings are available for load-balancing configuration:
-- **Path**: The URL used for probe requests for all the backends in the backend pool. For example, if one of your backends is contoso-westus.azurewebsites.net and the path is set to /probe/test.aspx, then Front Door environments, assuming the protocol is set to HTTP, will send health probe requests to http\://contoso-westus.azurewebsites.net/probe/test.aspx. The health probe path is case sensitive.
-- **Protocol**: Defines whether to send the health probe requests from Front Door to your backends with HTTP or HTTPS protocol.
-- **Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
- > [!NOTE]
- > For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
-- **Interval (seconds)**: Defines the frequency of health probes to your backends, or the interval at which each of the Front Door environments sends a probe.
- >[!NOTE]
- >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your backends receive. For example, if the interval is set to 30 seconds with say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
-
-For more information, see [Health probes](front-door-health-probes.md).
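The probe-volume arithmetic from the note above (interval, POP count, and resulting probes per minute) can be sketched as a small helper. This is a back-of-the-envelope illustration, not part of the service:

```python
def probes_per_minute(pop_count: int, interval_seconds: int) -> float:
    """Approximate probe requests per backend per minute: each POP sends
    one probe per interval, so volume = POPs * (60 / interval)."""
    return pop_count * (60 / interval_seconds)

# ~100 Front Door POPs probing every 30 seconds => about 200 probes/minute.
assert probes_per_minute(100, 30) == 200
# Halving the interval for faster failover doubles the probe load.
assert probes_per_minute(100, 15) == 400
```

This makes the trade-off explicit: lowering the interval speeds up failure detection but linearly increases probe traffic to your backends.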
-
-### Load-balancing settings
-Load-balancing settings for the backend pool define how we evaluate health probes. These settings determine if the backend is healthy or unhealthy. They also check how to load-balance traffic between different backends in the backend pool. The following settings are available for load-balancing configuration:
-- **Sample size**. Identifies how many samples of health probes we need to consider for backend health evaluation.
-- **Successful sample size**. Defines the number of successful samples, out of the sample size mentioned previously, needed to call the backend healthy. For example, assume a Front Door health probe interval is 30 seconds, sample size is 5, and successful sample size is 3. Each time we evaluate the health probes for your backend, we look at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the backend as healthy.
-- **Latency sensitivity (additional latency)**. Defines whether you want Front Door to send the request to backends within the latency measurement sensitivity range or forward the request to the closest backend.
-For more information, see [Least latency based routing method](front-door-routing-methods.md#latency).
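The sample-size evaluation described above can be sketched as a simple predicate over the most recent probe results. This is an illustrative model, not the service's actual implementation:

```python
from collections import deque

def is_healthy(samples, sample_size, successful_sample_size):
    """Sketch of sample-based health evaluation: look at the most recent
    `sample_size` probe results (True = success) and require at least
    `successful_sample_size` successes to call the backend healthy."""
    recent = deque(samples, maxlen=sample_size)  # keeps only the last N samples
    return sum(recent) >= successful_sample_size

# Sample size 5, successful sample size 3: three successes in the last
# five probes keep the backend healthy.
assert is_healthy([True, False, True, False, True], 5, 3)
assert not is_healthy([True, False, False, False, True], 5, 3)
```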
-
-## Next steps
-- [Create a Front Door profile](quickstart-create-front-door.md)
-- [How Front Door works](front-door-routing-architecture.md)
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Previously updated : 03/08/2022 Last updated : 03/22/2022
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-In this article, you'll learn how Front Door Standard/Premium (Preview) Routes and Rule set behaves when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
+In this article, you'll learn how Azure Front Door Standard and Premium tier routes and rule sets behave when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
## Request methods
Only the GET request method can generate cached content in Azure Front Door. All
::: zone pivot="front-door-classic"
-The following document specifies behaviors for Front Door with routing rules that have enabled caching. Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing, it also supports caching behaviors just like any other CDN.
+The following document specifies behaviors for Azure Front Door (classic) with routing rules that have caching enabled. Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing, and it also supports caching behaviors just like any other CDN.
::: zone-end
Refer to [improve performance by compressing files](standard-premium/how-to-comp
::: zone pivot="front-door-classic"
-Front Door can dynamically compress content on the edge, resulting in a smaller and faster response time to your clients. In order for a file to be eligible for compression, caching must be enabled and the file must be of a MIME type to be eligible for compression. Currently, Front Door doesn't allow this list to be changed. The current list is:
+Azure Front Door (classic) can dynamically compress content on the edge, resulting in a smaller and faster response time to your clients. In order for a file to be eligible for compression, caching must be enabled and the file must be of an eligible MIME type. Currently, Front Door (classic) doesn't allow this list to be changed. The current list is:
- "application/eot" - "application/font" - "application/font-sfnt"
These profiles support the following compression encodings:
- [Brotli](https://en.wikipedia.org/wiki/Brotli)

If a request supports gzip and Brotli compression, Brotli compression takes precedence.
-When a request for an asset specifies compression and the request results in a cache miss, Front Door does compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache. The resulting item is returned with a transfer-encoding: chunked.
-
-> [!NOTE]
-> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+When a request for an asset specifies compression and the request results in a cache miss, Azure Front Door (classic) does compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache. The resulting item is returned with a transfer-encoding: chunked.
::: zone-end
+> [!NOTE]
+> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on the Origin or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+ ## Query string behavior
-With Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request then their order doesn't matter.
+With Azure Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request then their order doesn't matter.
-* **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+* **Ignore query strings**: In this mode, Azure Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the backend for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing requests with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
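The two modes above can be sketched as two ways of constructing a cache key from the request URL. This is an illustrative simplification (it doesn't normalize key-value ordering, for instance), not the actual Front Door implementation:

```python
from urllib.parse import urlsplit

def cache_key(url: str, ignore_query_strings: bool) -> str:
    """Illustrative cache-key construction for the two query string
    modes described above; not Front Door's real key format."""
    parts = urlsplit(url)
    key = parts.netloc + parts.path
    if not ignore_query_strings and parts.query:
        key += "?" + parts.query
    return key

# "Ignore query strings": both requests map to the same cached asset.
assert cache_key("http://www.example.com/a.ashx?q=test1", True) == \
       cache_key("http://www.example.com/a.ashx?q=test2", True)
# "Cache every unique URL": each query string gets its own cache entry.
assert cache_key("http://www.example.com/a.ashx?q=test1", False) != \
       cache_key("http://www.example.com/a.ashx?q=test2", False)
```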
With Front Door, you can control how files are cached for a web request that con
::: zone pivot="front-door-standard-premium"
-See [Cache purging in Azure Front Door Standard/Premium (Preview)](standard-premium/how-to-cache-purge.md) to learn how to configure cache purge.
+See [Cache purging in Azure Front Door](standard-premium/how-to-cache-purge.md) to learn how to configure cache purge.
::: zone-end

::: zone pivot="front-door-classic"
-Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
+Azure Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. Front Door will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets, for example after updates to your web application or to quickly correct assets that contain incorrect information.
Cache behavior and duration can be configured in both the Front Door designer routing rule and in the Rules engine.
* When *Use cache default duration* is set to **No**, Azure Front Door (classic) will always override with the *cache duration* (required fields), meaning that it will cache the contents for the cache duration, ignoring the values from origin response directives.

> [!NOTE]
-> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the backend has a greater TTL than the override value.
> * Azure Front Door (classic) makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Azure Front Door (classic) might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your backends are offline.
+> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the backend has a greater TTL than the override value.
>

::: zone-end
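The "minimum cache duration" note above means the routing rule's cache duration acts as a floor: a larger TTL from the origin's `Cache-Control` header wins over the override. A minimal sketch of that rule, assuming durations are expressed in seconds (not Front Door source code):

```python
# Sketch of the "minimum cache duration" behavior described in the note above:
# the routing rule's cache duration is a floor, so a larger origin
# Cache-Control max-age takes precedence over the configured override.
from typing import Optional

def effective_ttl_seconds(origin_max_age: Optional[int],
                          rule_cache_duration: int) -> int:
    if origin_max_age is None:
        # No caching directive from the origin: the rule's duration applies.
        return rule_cache_duration
    # The override only takes effect when the origin TTL is smaller.
    return max(origin_max_age, rule_cache_duration)

print(effective_ttl_seconds(3600, 600))  # origin TTL is greater: 3600
print(effective_ttl_seconds(60, 600))    # override acts as the minimum: 600
```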
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
Title: Monitoring metrics and logs in Azure Front Door| Microsoft Docs
-description: This article describes the different metrics and access logs that Azure Front Door supports
+ Title: Monitoring metrics and logs in Azure Front Door (classic)
+description: This article describes the different metrics and access logs that Azure Front Door (classic) supports
documentationcenter: ''
na
- Previously updated : 11/23/2020
+ Last updated : 03/22/2022
-# Monitoring metrics and logs in Azure Front Door
+# Monitoring metrics and logs in Azure Front Door (classic)
-By using Azure Front Door, you can monitor resources in the following ways:
+When using Azure Front Door (classic), you can monitor resources in the following ways:
- **Metrics**. Azure Front Door currently has eight metrics to view performance counters.
- **Logs**. Activity and diagnostic logs allow performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
Metrics are a feature for certain Azure resources that allow you to view performance counters in the portal.
## <a name="activity-log"></a>Activity logs
-Activity logs provide information about the operations done on Front Door. They also determine the what, who, and when for any write operations (put, post, or delete) taken on Front Door.
+Activity logs provide information about the operations done on an Azure Front Door (classic) profile. They also determine the what, who, and when for any write operations (put, post, or delete) done against an Azure Front Door (classic) profile.
>[!NOTE]
>Activity logs don't include read (get) operations. They also don't include operations that you perform by using either the Azure portal or the original Management API.
Activity logs provide insights into the operations done on Azure resources. Diagnostic logs provide insight into the operation of those resources.
:::image type="content" source="./media/front-door-diagnostics/diagnostic-log.png" alt-text="Diagnostic logs":::
-To configure diagnostic logs for your Front Door:
+To configure diagnostic logs for your Azure Front Door (classic):
-1. Select your Azure Front Door.
+1. Select your Azure Front Door (classic) profile.
2. Choose **Diagnostic settings**.
If the value is false, then it means the request is served from the origin shield cache.
| Routing rule with caching enabled. Cache misses at both edge and parent cache POP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Edge POP code</br>2. Parent cache POP code | 1. True</br>2. False | 1. MISS</br>2. MISS |

> [!NOTE]
-> For caching scenarios, the value for Cache Status will be partial_hit when some of the bytes for a request get served from Front Door edge or origin shield cache while some of the bytes get served from the origin for large objects.
+> For caching scenarios, the value for Cache Status will be partial_hit when some of the bytes for a request get served from the Azure Front Door edge or origin shield cache while some of the bytes get served from the origin for large objects.
-Front Door uses a technique called object chunking. When a large file is requested, the Front Door retrieves smaller pieces of the file from the origin. After the Front Door POP server receives a full or byte-ranges of the file requested, the Front Door edge server requests the file from the origin in chunks of 8 MB.
+Azure Front Door uses a technique called object chunking. When a large file is requested, Azure Front Door retrieves smaller pieces of the file from the origin. After the Azure Front Door POP server receives a full file request or a byte-range request, the Azure Front Door edge server requests the file from the origin in chunks of 8 MB.
-After the chunk arrives at the Front Door edge, it's cached and immediately served to the user. The Front Door then prefetches the next chunk in parallel. This prefetch ensures the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested), all byte ranges are available (if requested), or the client closes the connection. For more information on the byte-range request, see RFC 7233. The Front Door caches any chunks as they're received. The entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the Front Door cache. If not all the chunks are cached on the Front Door, prefetch is used to request chunks from the origin. This optimization relies on the ability of the origin server to support byte-range requests. If the origin server doesn't support byte-range requests, this optimization isn't effective.
+After the chunk arrives at the Azure Front Door edge, it's cached and immediately served to the user. The Azure Front Door then prefetches the next chunk in parallel. This prefetch ensures the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested), all byte ranges are available (if requested), or the client closes the connection. For more information on the byte-range request, see RFC 7233. The Azure Front Door caches any chunks as they're received. The entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the Azure Front Door cache. If not all the chunks are cached on the Azure Front Door, prefetch is used to request chunks from the origin. This optimization relies on the ability of the origin server to support byte-range requests. If the origin server doesn't support byte-range requests, this optimization isn't effective.
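The chunking strategy above can be simulated with plain byte-range arithmetic. This sketch stands in for the real behavior — the origin here is an in-memory byte string rather than a server answering RFC 7233 `Range` requests, and the "prefetch" is modeled as simply requesting the next chunk after caching the current one:

```python
# Minimal simulation of object chunking: fetch a large object in fixed-size
# chunks via byte-range semantics (inclusive start-end, as in RFC 7233),
# caching each chunk as it arrives. Not Front Door's implementation.
CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB, the chunk size mentioned in the article

def origin_get_range(origin: bytes, start: int, end: int) -> bytes:
    """Simulate an origin answering 'Range: bytes=start-end' (end inclusive)."""
    return origin[start:end + 1]

def fetch_in_chunks(origin: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Retrieve the whole object chunk by chunk, as an edge cache would."""
    cache = []
    offset = 0
    while offset < len(origin):
        end = min(offset + chunk_size, len(origin)) - 1
        cache.append(origin_get_range(origin, offset, end))  # cache this chunk
        offset = end + 1  # then "prefetch" the next chunk
    return cache

data = bytes(range(256)) * 100                      # 25,600-byte pretend object
chunks = fetch_in_chunks(data, chunk_size=10_000)   # small chunks for the demo
assert b"".join(chunks) == data                     # reassembles to the original
print(len(chunks))                                  # → 3
```

Because each chunk is cached independently, later requests for a byte range can be served from whichever chunks are already cached, with only the missing chunks fetched from the origin — which is why the optimization depends on the origin supporting byte-range requests.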
## Next steps

-- [Create a Front Door profile](quickstart-create-front-door.md)
-- [How Front Door works](front-door-routing-architecture.md)
+- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md)
+- Learn [how Azure Front Door (classic) works](front-door-routing-architecture.md)
frontdoor Front Door Lb With Azure App Delivery Suite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-lb-with-azure-app-delivery-suite.md
- Title: Azure Front Door - Load Balancing with Azure's application delivery suite | Microsoft Docs
-description: This article helps you learn about how Azure recommends load balancing with its application delivery suite
----- Previously updated : 05/16/2021---
-# Load-balancing with Azure's application delivery suite
-
-## Introduction
-Microsoft Azure provides various global and regional services for managing how your network traffic is distributed and load balanced:
-
-* Application Gateway
-* Front Door
-* Load Balancer
-* Traffic Manager
-
-Along with Azure's many regions and zonal architecture, using these services together can enable you to build robust, scalable, and high-performance applications.
-
-
-These services are broken into two categories:
-1. **Global load-balancing services** such as Traffic Manager and Front Door distribute traffic from your end users across your regional backends, across clouds and even your hybrid on-premises services. Global load balancing routes your traffic to your closest service backend and reacts to changes in service reliability to maintain always-on availability and high performance for your users.
-1. **Regional load-balancing services** such as Load Balancers and Application Gateways provide the ability to distribute traffic to virtual machines (VMs) within a virtual network (VNETs) or service endpoints within a region.
-
-When you combine these global and regional services, your application will benefit from reliable and secured end-to-end traffic that gets sent from your end users to your IaaS, PaaS, or on-premises services. In the next section, we describe each of these services.
-
-## Global load balancing
-**Traffic Manager** provides global DNS load balancing. It looks at incoming DNS requests and responds with a healthy backend, following the routing policy the customer has selected. Options for routing methods are:
-- **Performance routing** sends requests to the closest backend with the least latency.
-- **Priority routing** directs all traffic to a backend, with other backends as backup.
-- **Weighted round-robin routing** distributes traffic based on the weighting that is assigned to each backend.
-- **Geographic routing** ensures requests that get sourced from specific geographical regions get handled by backends mapped for those regions. (For example, all requests from Spain should be directed to the France Central Azure region.)
-- **Subnet routing** allows you to map IP address ranges to backends, so that incoming requests for those IPs will be sent to the specific backend. (For example, any users that connect from your corporate HQ's IP address range should get different web content than the general users.)
-
-The client connects directly to that backend. Azure Traffic Manager detects when a backend is unhealthy and then redirects the clients to another healthy instance. Refer to [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) documentation to learn more about the service.
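Two of the DNS routing methods described above — priority and weighted round-robin — can be sketched as simple selection functions. The backend names and tuple shapes below are invented for illustration; real Traffic Manager performs this selection at the DNS layer with its own health checks.

```python
# Hedged sketch of two Traffic Manager routing methods described above.
# Backend names and data shapes are invented; not Traffic Manager internals.
import random

def priority_route(backends):
    """backends: (name, priority, healthy) tuples. The healthy backend with
    the lowest priority value receives all traffic; others act as backup."""
    healthy = [b for b in backends if b[2]]
    return min(healthy, key=lambda b: b[1])[0]

def weighted_route(backends, rng=random.Random(0)):
    """backends: (name, weight) tuples. Pick a backend with probability
    proportional to its assigned weight."""
    names, weights = zip(*backends)
    return rng.choices(names, weights=weights, k=1)[0]

# Primary is unhealthy, so traffic fails over to the backup.
print(priority_route([("primary", 1, False), ("backup", 2, True)]))  # → backup
```

When the primary becomes healthy again, `priority_route` returns to sending all traffic there — mirroring how Traffic Manager redirects clients to another healthy instance when a backend fails its probes.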
-
-**Azure Front Door** provides dynamic website acceleration (DSA) including global HTTP load balancing. It looks at incoming HTTP requests and routes them to the closest service backend / region for the specified hostname, URL path, and configured rules.
-Front Door terminates HTTP requests at the edge of Microsoft's network and actively probes to detect application or infrastructure health or latency changes. Front Door then always routes traffic to the fastest and available (healthy) backend. Refer to Front Door's [routing architecture](front-door-routing-architecture.md) details and [traffic routing methods](front-door-routing-methods.md) to learn more about the service.
-
-## Regional load balancing
-Application Gateway provides application delivery controller (ADC) as a service, offering various Layer 7 load-balancing capabilities for your application. It allows customers to optimize web farm productivity by offloading CPU-intensive TLS termination to the application gateway. Other additional Layer 7 routing capabilities also include round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and the ability to host multiple websites behind a single application gateway.
-Application Gateway can be configured as an Internet-facing endpoint, an internal-only endpoint, or a combination of both. Application Gateway is fully Azure managed, providing you with scalability and high availability. It provides a rich set of diagnostics and logging capabilities for better manageability.
-
-Load Balancers are an integral part of the Azure SDN stack, which provides you with high-performance, low-latency Layer 4 load-balancing services for all UDP and TCP protocols. You can configure public or internal load-balanced endpoints by defining rules that map inbound connections to back-end pools. With health-probing monitoring using TCP or HTTPS, it can help you manage your service availability.
-
-## Choosing a global load balancer
-When choosing a global load balancer between Traffic Manager and Azure Front Door for global routing, you should consider what's similar and what's different about the two services. Both services provide
-- **Multi-geo redundancy:** If one region goes out of service, traffic seamlessly routes to the closest region without any intervention from the application owner.
-- **Closest region routing:** Traffic is automatically routed to the closest region.
-
-</br>The following table describes the differences between Traffic Manager and Azure Front Door:</br>
-
-| Traffic Manager | Azure Front Door |
-| | |
-|**Any protocol:** Since Traffic Manager works at the DNS layer, you can route any type of network traffic; HTTP, TCP, UDP, and so on. | **HTTP acceleration:** With Front Door, traffic is proxied at the edge of the Microsoft network. HTTP/S requests will see latency and throughput improvements, which reduce latency for TLS negotiation.|
-|**On-premises routing:** With routing at a DNS layer, traffic always goes from point to point. Routing from your branch office to your on premises datacenter can take a direct path; even on your own network using Traffic Manager. | **Independent scalability:** Since Front Door works with the HTTP request, requests to different URL paths can be routed to different backend / regional service pools (microservices) based on rules and the health of each application microservice.|
-|**Billing format:** DNS-based billing scales with your users and for services with more users, plateaus to reduce cost at higher usage. |**Inline security:** Front Door enables rules such as rate limiting and IP ACL-ing to let you protect your backends before traffic reaches your application.
-
-</br>We recommend Front Door for HTTP workloads because of the performance, operability, and security benefits it provides. Traffic Manager and Front Door can be used in parallel to serve all traffic for your application.
-
-## Building with Azure's application delivery suite
-We recommend that all websites, APIs, and services be geographically redundant so they can deliver traffic to users from the nearest location whenever possible. Combining multiple load-balancing services enables you to build geographical and zonal redundancy to maximize reliability and performance.
-
-In the following diagram, we describe an example architecture that uses a combination of all these services to build a global web service. The architecture uses Traffic Manager to route traffic to global backends for file and object delivery, Front Door to globally route URL paths that match the pattern /store/* to their service that they've migrated to App Service, and all other requests to regional Application Gateways.
-
-In each region of IaaS service, the application developer has decided that any URLs that match the pattern /images/* get served from a dedicated pool of VMs. This pool of VMs is different from the rest of the web farm.
-
-Additionally, the default VM pool serving the dynamic content needs to talk to a back-end database that is hosted on a high-availability cluster. The entire deployment is configured through Azure Resource Manager.
-
-The following diagram shows the architecture of this scenario:
--
-> [!NOTE]
-> This example is only one of many possible configurations of the load-balancing services that Azure offers. Traffic Manager, Front Door, Application Gateway, and Load Balancer can be mixed and matched to best suit your load-balancing needs. For example, if TLS/SSL offload or Layer 7 processing is not necessary, Load Balancer can be used in place of Application Gateway.
-
-## Next Steps
-- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Previously updated : 01/27/2022
Last updated : 03/18/2022
# Customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
# What is Azure Front Door?
-> [!IMPORTANT]
-> This documentation is for Azure Front Door. Looking for information on Azure Front Door Standard/Premium (Preview)? View [here](standard-premium/overview.md).
+Whether you're delivering content and files or building global apps and APIs, Azure Front Door can help you deliver higher availability, lower latency, greater scale, and more secure experiences to your users wherever they are.
-Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. With Front Door, you can transform your global consumer and enterprise applications into robust, high-performing personalized modern applications with contents that reach a global audience through Azure.
+Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. Azure Front Door delivers your content using Microsoft's global edge network with hundreds of [global and local POPs](edge-locations-by-region.md) distributed around the world close to both your enterprise and consumer end users.
-<p align="center">
- <img src="./media/front-door-overview/front-door-visual-diagram.png" alt="Front Door architecture" width="600" title="Azure Front Door">
-</p>
-Front Door works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method you can ensure that Front Door will route your client requests to the fastest and most available application backend. An application backend is any Internet-facing service hosted inside or outside of Azure. Front Door provides a range of [traffic-routing methods](front-door-routing-methods.md) and [backend health monitoring options](front-door-health-probes.md) to suit different application needs and automatic failover scenarios. Similar to [Traffic Manager](../traffic-manager/traffic-manager-overview.md), Front Door is resilient to failures, including failures to an entire Azure region.
-
->[!NOTE]
-> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
-> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
-> * If you want to load balance between your servers in a region at the application layer, review [Application Gateway](../application-gateway/overview.md).
-> * To do network layer load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md).
->
-> Your end-to-end scenarios may benefit from combining these solutions as needed.
-> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
## Why use Azure Front Door?
-With Front Door you can build, operate, and scale out your dynamic web application and static content. Front Door enables you to define, manage, and monitor the global routing for your web traffic by optimizing for top-tier end-user performance and reliability through quick global failover.
+Azure Front Door enables internet-facing applications to:
+
+* **Build and operate modern internet-first architectures** that have dynamic, high-quality digital experiences with highly automated, secure, and reliable platforms.
+
+* **Accelerate and deliver your app and content globally** at scale to your users wherever they are, creating opportunities for you to compete, weather change, and quickly adapt to new demand and markets.
+
+* **Intelligently secure your digital estate** against known and new threats with intelligent security that embraces a **_Zero Trust_** framework.
+
+## Key Benefits
+
+### Global delivery scale using Microsoft's network
+
+Scale out and improve performance of your applications and content using Microsoft's global Cloud CDN and WAN.
+
+* Leverage over [118 edge locations](edge-locations-by-region.md) across 100 metro cities connected to Azure using a private enterprise-grade WAN and improve latency for apps by up to 3 times.
+
+* Accelerate application performance by using Front Door's [anycast](front-door-traffic-acceleration.md#select-the-front-door-edge-location-for-the-request-anycast) network and [split TCP](front-door-traffic-acceleration.md#connect-to-the-front-door-edge-location-split-tcp) connections.
+
+* Terminate SSL offload at the edge and use integrated [certificate management](standard-premium/how-to-configure-https-custom-domain.md).
+
+* Natively support end-to-end IPv6 connectivity and the HTTP/2 protocol.
+
+### Deliver modern apps and architectures
+
+Modernize your internet-first applications on Azure with cloud-native experiences.
+
+* Integrate with DevOps friendly command line tools across SDKs of different languages, Bicep, ARM templates, CLI and PowerShell.
+
+* Define your own [custom domain](standard-premium/how-to-add-custom-domain.md) with flexible domain validation.
+
+* Load balance and route traffic across [origins](origin.md) and use intelligent [health probe](health-probes.md) monitoring across apps or content hosted in Azure or anywhere.
+
+* Integrate with other Azure services such as DNS, Web Apps, Storage and many more for domain and origin management.
+
+* Move your routing business logic to the edge with [enhanced rules engine](front-door-rules-engine.md) capabilities including regular expressions and server variables.
+
+* Analyze [built-in reports](standard-premium/how-to-reports.md) with an all-in-one dashboard for both Front Door and security patterns.
+
+* [Monitor your Front Door traffic in real time](standard-premium/how-to-monitor-metrics.md), and configure alerts that integrate with Azure Monitor.
+
+* [Log each Front Door request](standard-premium/how-to-logs.md) and failed health probes.
+
+### Simple and cost-effective
-Key features included with Front Door:
+* Unified static and dynamic delivery offered in a single tier to accelerate and scale your application through caching, SSL offload, and layer 3-4 DDoS protection.
-* Accelerated application performance by using **[split TCP](front-door-traffic-acceleration.md?pivots=front-door-classic#splittcp)**-based **[anycast protocol](front-door-traffic-acceleration.md?pivots=front-door-classic#anycast)**.
+* Free, autorotated, managed SSL certificates that save time and quickly secure apps and content.
-* Intelligent **[health probe](front-door-health-probes.md)** monitoring for backend resources.
+* Low entry fee and a simplified cost model that reduces billing complexity, with fewer meters to plan for.
-* **[URL-path based](front-door-route-matching.md)** routing for requests.
+* Azure to Front Door integrated egress pricing that removes the separate egress charge from Azure regions to Azure Front Door. Refer to [Azure Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor/) for more details.
-* Enables hosting of multiple websites for efficient application infrastructure.
+### Intelligent secure internet perimeter
-* Cookie-based **[session affinity](front-door-routing-methods.md#affinity)**.
+* Secure applications with built-in layer 3-4 DDoS protection, seamlessly attached [Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md), and [Azure DNS to protect your domains](how-to-configure-endpoints.md).
-* **[SSL offloading](front-door-custom-domain-https.md)** and certificate management.
+* Protect your apps from malicious actors with Bot manager rules based on Microsoft's own Threat Intelligence.
-* Define your own **[custom domain](front-door-custom-domain.md)**.
+* Privately connect to your backend behind Azure Front Door with [Private Link](private-link.md) and embrace a zero-trust access model.
-* Application security with integrated **[Web Application Firewall (WAF)](../web-application-firewall/overview.md)**.
+* Provide a centralized security experience for your application via Azure Policy and Azure Advisor that ensures consistent security features across apps.
-* Redirect HTTP traffic to HTTPS with **[URL redirect](front-door-url-redirect.md)**.
-* Custom forwarding path with **[URL rewrite](front-door-url-rewrite.md)**.
+## How to choose between Azure Front Door tiers?
-* Native support of end-to-end IPv6 connectivity and **[HTTP/2 protocol](front-door-http2.md)**.
+For a comparison of supported features in Azure Front Door, see [Tier comparison](standard-premium/tier-comparison.md).
## Pricing
-For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). See [SLA for Azure Front Door](https://azure.microsoft.com/support/legal/sla/frontdoor/v1_0/).
+For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). For information about service-level agreements, see [SLA for Azure Front Door](https://azure.microsoft.com/support/legal/sla/frontdoor/v1_0/).
## What's new?
Subscribe to the RSS feed and view the latest Azure Front Door feature updates on the Azure Updates page.
## Next steps

-- [Quickstart: Create a Front Door](quickstart-create-front-door.md).
-- [Learn module: Introduction to Azure Front Door](/learn/modules/intro-to-azure-front-door/).
-- Learn [how Front Door works](front-door-routing-architecture.md).
+* Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md)
+* Learn how to [create an Azure Front Door profile](create-front-door-portal.md).
+* [Learn module: Introduction to Azure Front Door](/learn/modules/intro-to-azure-front-door/).
frontdoor Front Door Route Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md
Title: Azure Front Door - Routing rule matching
-description: This article helps you understand how Azure Front Door match incoming requests to a routing rule.
+description: This article helps you understand how Azure Front Door matches incoming requests to a routing rule.
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-In Azure Front Door Standard/Premium tier a route defines how the traffic is handled when the incoming request arrives at the Azure Front Door environment. Through the route settings, an association is defined between a domain and a backend origin group. By turning on advance features such as Pattern to Match and Rule set, you can have a more granular control over traffic to your backend resources.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+In Azure Front Door a route defines how the traffic gets handled when the incoming request arrives at the Azure Front Door environment. Through the route settings, an association is defined between a domain and a backend origin group. By turning on advanced features such as Pattern to Match and Rule set, you can have more granular control over traffic to your backend resources.
> [!NOTE]
-> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override the origin group](front-door-rules-engine-actions.md#origin-group-override) for a request. The origin group set by the rules engine overrides the routing process described in this article.
+> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override the origin group](front-door-rules-engine-actions.md#RouteConfigurationOverride) for a request. The origin group set by the rules engine overrides the routing process described in this article.
::: zone-end

::: zone pivot="front-door-classic"
-After establishing a connection and completing a TLS handshake, when a request lands on a Front Door environment one of the first things that Front Door does is determine which particular routing rule to match the request to and then take the defined action in the configuration. The following document explains how Front Door determines which Route configuration to use when processing an HTTP request.
+After a connection is established and the TLS handshake is complete, when a request lands on the Azure Front Door (classic) environment, one of the first things that Front Door does is determine which routing rule to match the request to, and then take the action defined in the configuration. The following document explains how Front Door determines which route configuration to use when processing an HTTP request.
::: zone-end
Given that configuration, the following example matching table would result:
Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the request gets served back to the client.
-Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](front-door-rules-engine.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](front-door-rules-engine-actions.md#origin-group-override), forcing traffic to a specific origin group.
+Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](front-door-rules-engine.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](front-door-rules-engine-actions.md#RouteConfigurationOverride), forcing traffic to a specific origin group.
::: zone-end ::: zone pivot="front-door-classic"
-After you have matched to a single Front Door routing rule, choose how to process the request. If Front Door has a cached response available for the matched routing rule, the cached response is served back to the client. If Front Door doesn't have a cached response for the matched routing rule, what's evaluated next is whether you have configured [URL rewrite (a custom forwarding path)](front-door-url-rewrite.md) for the matched routing rule. If no custom forwarding path is defined, the request is forwarded to the appropriate backend in the configured backend pool as-is. If a custom forwarding path has been defined, the request path is updated per the defined [custom forwarding path](front-door-url-rewrite.md) and then forwarded to the backend.
+After the request matches a single Front Door routing rule, Front Door chooses how to process it. If Front Door has a cached response available for the matched routing rule, the cached response is served back to the client. If Front Door doesn't have a cached response for the matched routing rule, it next evaluates whether you have configured [URL rewrite](front-door-url-rewrite.md) for the matched routing rule. If there's no custom forwarding path, the request gets forwarded to the appropriate backend in the configured backend pool as-is. If a custom forwarding path has been defined, the request path gets updated per the defined [custom forwarding path](front-door-url-rewrite.md) and then forwarded to the backend.
::: zone-end
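The custom forwarding path behavior described above can be sketched as follows. This is a minimal illustration assuming a wildcard route pattern; the function name and the way the matched prefix is computed are assumptions for the example, not the service's implementation:

```python
def rewrite_path(request_path, matched_pattern, custom_forwarding_path):
    """Sketch of classic Front Door URL rewrite: the part of the request
    path that matched the route's wildcard is appended to the custom
    forwarding path. Illustrative only."""
    if custom_forwarding_path is None:
        # No custom forwarding path: forward the request path as-is.
        return request_path
    prefix = matched_pattern.rstrip("*")   # e.g. "/files/*" -> "/files/"
    remainder = request_path[len(prefix):]
    return custom_forwarding_path.rstrip("/") + "/" + remainder
```

For example, with a route pattern of `/files/*` and a custom forwarding path of `/assets/`, a request for `/files/img/a.png` would be forwarded to the backend as `/assets/img/a.png`.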
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-architecture.md
For more information about how requests are made to Front Door, see [Front Door
## Match request to a Front Door profile
-When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door profile. If the request is using a [custom domain name](standard-premium/how-to-add-custom-domain.md), the domain name must be registered with Front Door to enable requests to be matched to your profile.
+When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door profile. If the request is using a [custom domain name](standard-premium/how-to-add-custom-domain.md), the domain name must be registered with Front Door to enable requests to get matched to your profile.
::: zone-end
When Front Door receives an HTTP request, it uses the request's `Host` header to
## Match request to a front door
-When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door instance. If the request is using a [custom domain name](front-door-custom-domain.md), the domain name must be registered with Front Door to enable requests to be matched to your front door.
+When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door instance. If the request is using a [custom domain name](front-door-custom-domain.md), the domain name must be registered with Front Door to enable requests to get matched to your front door.
::: zone-end
The route specifies the [backend pool](front-door-backend-pool.md) that the requ
## Evaluate rule sets
-If you have defined [rule sets](front-door-rules-engine.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](front-door-rules-engine-actions.md#origin-group-override) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
+If you have defined [rule sets](front-door-rules-engine.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](front-door-rules-engine-actions.md#RouteConfigurationOverride) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
::: zone-end
Front Door selects an origin to use within the origin group. Origin selection is
- The health of each origin, which Front Door monitors by using [health probes](front-door-health-probes.md). - The [routing method](front-door-routing-methods.md) for your origin group.-- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
+- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
## Forward request to origin
Front Door selects a backend to use within the backend pool. Backend selection i
- The health of each backend, which Front Door monitors by using [health probes](front-door-health-probes.md). - The [routing method](front-door-routing-methods.md) for your backend pool.-- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
+- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
## Forward request to backend
frontdoor Front Door Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-methods.md
- Title: Azure Front Door - traffic routing methods | Microsoft Docs
-description: This article helps you understand the different traffic routing methods used by Front Door
----- Previously updated : 07/14/2021---
-# Front Door routing methods
-
-Azure Front Door supports different kinds of traffic-routing methods to determine how to route your HTTP/HTTPS traffic to different backends. When client requests reach Front Door, the configured routing method gets applied to ensure the requests are forwarded to the best backend.
-
-There are four traffic routing methods available in Front Door:
-
-* **[Latency](#latency):** Latency-based routing ensures that requests are sent to the lowest-latency backends within an acceptable sensitivity range. In other words, user requests are sent to the "closest" set of backends with respect to network latency.
-* **[Priority](#priority):** You can assign priorities to your backends when you want to configure a primary backend to service all traffic. The secondary backend can be a backup in case the primary backend becomes unavailable.
-* **[Weighted](#weighted):** You can assign weights to your backends when you want to distribute traffic across a set of backends evenly or according to the weight coefficients. Traffic is distributed as per weights if the latencies of the backends are within the acceptable latency sensitivity range in the backend pool.
-* **[Session Affinity](#affinity):** You can configure session affinity for your frontend hosts or domains to ensure that requests from the same end user get sent to the same backend.
-
-All Front Door configurations include monitoring of backend health and automated instant global failover. For more information, see [Front Door Backend Monitoring](front-door-health-probes.md). Your Front Door can work based on a single routing method. Depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
-
-> [!NOTE]
-> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) for a request. The backend pool set by the rules engine overrides the routing process described in this article.
-
-## <a name = "latency"></a>Lowest latencies based traffic-routing
-
-Deploying backends in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that's 'closest' to your end users. The default traffic-routing method for your Front Door configuration forwards requests from your end users to the closest backend of the Front Door environment that received the request. Combined with the Anycast architecture of Azure Front Door, this approach ensures that each of your end users gets maximum performance personalized based on their location.
-
-The 'closest' backend isn't necessarily closest as measured by geographic distance. Instead, Front Door determines the closest backends by measuring network latency. Read more about [Front Door's routing architecture](front-door-routing-architecture.md).
-
-Below is the overall decision flow:
-
-| Available backends | Priority | Latency signal (based on health probe) | Weights |
-|-| -- | -- | -- |
-| First, select all backends that are enabled and returned healthy (200 OK) for the health probe. For example, if there are six backends A, B, C, D, E, and F, and among them C is unhealthy and E is disabled, the list of available backends is A, B, D, and F. | Next, the top-priority backends among the available ones are selected. If backends A, B, and D have priority 1 and backend F has priority 2, then the selected backends are A, B, and D. | Select the backends within the latency range (least latency & latency sensitivity in ms specified). If backend A is 15 ms, B is 30 ms, and D is 60 ms away from the Front Door environment where the request landed, and the latency sensitivity is 30 ms, then the lowest-latency pool consists of backends A and B, because D is more than 30 ms away from the closest backend, which is A. | Lastly, Front Door round-robins the traffic among the final selected pool of backends in the ratio of the weights specified. For example, if backend A has a weight of 5 and backend B has a weight of 8, the traffic is distributed in the ratio 5:8 between backends A and B. |
-
->[!NOTE]
-> By default, the latency sensitivity property is set to 0 ms, that is, the request is always forwarded to the fastest available backend, and weights on the backends don't take effect unless two backends have the same network latency.
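The decision flow in the table above can be sketched as follows. This is an illustrative sketch only; the dictionary fields (`enabled`, `healthy`, `priority`, `latency_ms`, `weight`) are assumptions for the example, not the service's API:

```python
import random

def select_backend(backends, latency_sensitivity_ms=0):
    """Illustrative sketch of the four-step selection flow."""
    # 1. Available = enabled backends that passed the health probe.
    available = [b for b in backends if b["enabled"] and b["healthy"]]
    # 2. Keep only the top-priority (lowest value) backends.
    top = min(b["priority"] for b in available)
    candidates = [b for b in available if b["priority"] == top]
    # 3. Keep backends within the latency sensitivity of the fastest one.
    fastest = min(b["latency_ms"] for b in candidates)
    pool = [b for b in candidates
            if b["latency_ms"] <= fastest + latency_sensitivity_ms]
    # 4. Round-robin across the pool in the ratio of the weights.
    return random.choices(pool, weights=[b["weight"] for b in pool])[0]
```

With the default sensitivity of 0 ms, only the single fastest available backend remains in the pool, which matches the note above: weights only matter when two backends tie on latency.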
-
-## <a name = "priority"></a>Priority-based traffic-routing
-
-Often an organization wants to provide high availability for their services by deploying more than one backup service in case the primary one goes down. Across the industry, this topology is also referred to as Active/Standby or Active/Passive deployment topology. The 'Priority' traffic-routing method allows Azure customers to easily implement this failover pattern.
-
-Your default Front Door contains an equal priority list of backends. By default, Front Door sends traffic only to the top priority backends (lowest value for priority) that is, the primary set of backends. If the primary backends aren't available, Front Door routes the traffic to the secondary set of backends (second lowest value for priority). If both the primary and secondary backends aren't available, the traffic goes to the third, and so on. Availability of the backend is based on the configured status (enabled or disabled) and the ongoing backend health status as determined by the health probes.
-
-### Configuring priority for backends
-
-Each backend in the backend pool of your Front Door configuration has a property called 'Priority', which can be a number between 1 and 5. You configure the priority explicitly for each backend. Lower values represent a higher priority. Backends can share priority values.
-
-## <a name = "weighted"></a>Weighted traffic-routing method
-The 'Weighted' traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting.
-
-In the Weighted traffic-routing method, you assign a weight to each backend in the Front Door configuration of your backend pool. The weight is an integer from 1 to 1000. This parameter uses a default weight of '50'.
-
-With the list of available backends that are within the acceptable latency sensitivity range, the traffic gets distributed with a round-robin mechanism using the ratio of weights specified. If the latency sensitivity is set to 0 milliseconds, the weights don't take effect unless two backends have the same network latency.
-
-The weighted method enables some useful scenarios:
-
-* **Gradual application upgrade**: Route a percentage of traffic to a new backend, and gradually increase that traffic over time to bring it on par with other backends.
-* **Application migration to Azure**: Create a backend pool with both Azure and external backends. Adjust the weight of the backends to prefer the new backends. You can set this up gradually: start with the new backends disabled, then assign them the lowest weights, slowly increase the weights until they take most of the traffic, and finally disable the less preferred backends and remove them from the pool.
-* **Cloud-bursting for additional capacity**: Quickly expand an on-premises deployment into the cloud by putting it behind Front Door. When you need extra capacity in the cloud, you can add or enable more backends and specify what portion of traffic goes to each backend.
-
-## <a name = "affinity"></a>Session Affinity
-By default, without session affinity, Front Door forwards requests originating from the same client to different backends. In some stateful applications, or in certain scenarios, subsequent requests from the same user should go to the same backend that processed the initial request. The cookie-based session affinity feature is useful when you want to keep a user session on the same backend. Using managed cookies, Azure Front Door can direct subsequent traffic from a user session to the same backend for processing.
-
-Session affinity can be enabled at the frontend host level, that is, for each of your configured domains (or subdomains). Once enabled, Front Door adds a cookie to the user's session. Cookie-based session affinity allows Front Door to identify different users even if they're behind the same IP address, which in turn allows a more even distribution of traffic between your different backends.
-
-The lifetime of the cookie is the same as the user's session, as Front Door currently supports session cookies only.
-
-> [!NOTE]
-> Public proxies may interfere with session affinity. This is because establishing a session requires Front Door to add a session affinity cookie to the response, which cannot be done if the response is cacheable as it would disrupt the cookies of other clients requesting the same resource. To protect against this, session affinity will **not** be established if the backend sends a cacheable response when this is attempted. If the session has already been established, it does not matter if the response from the backend is cacheable.
-> Session affinity will be established in the following circumstances, **unless** the response has an HTTP 304 status code:
-> - The response has specific values set for the ```Cache-Control``` header that prevent caching, such as "private" or "no-store".
-> - The response contains an ```Authorization``` header that has not expired.
-> - The response has an HTTP 302 status code.
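The conditions in the note above can be summarized as a small predicate. This is a simplification for illustration; the function and the header handling are assumptions, not the exact service logic (for instance, `Authorization` expiry isn't modeled here):

```python
def can_establish_affinity(status_code, headers):
    """Sketch of when the session affinity cookie can be added to a
    response, per the note above. Illustrative only."""
    if status_code == 304:  # Not Modified responses never establish affinity
        return False
    cache_control = headers.get("Cache-Control", "")
    non_cacheable = "private" in cache_control or "no-store" in cache_control
    has_auth = "Authorization" in headers
    is_redirect = status_code == 302
    # The affinity cookie is only added when the response isn't cacheable.
    return non_cacheable or has_auth or is_redirect
```

For example, a plain `200 OK` with a cacheable body doesn't establish affinity, while a `200 OK` marked `Cache-Control: no-store` does.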
-
-## Next steps
--- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md
Title: Azure Front Door Rules actions
-description: This article provides a list of various actions you can do with Azure Front Door Rules engine/Rules set.
+description: This article provides a list of various actions you can do with Azure Front Door Rules set/Rules engine.
Previously updated : 03/07/2022 Last updated : 03/22/2022 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-An Azure Front Door Standard/Premium [Rule Set](front-door-rules-engine.md) consist of rules with a combination of match conditions and actions. This article provides a detailed description of the actions you can use in Azure Front Door Standard/Premium Rule Set. The action defines the behavior that gets applied to a request type that a match condition(s) identifies. In an Azure Front Door (Standard/Premium) Rule Set, a rule can contain up to five actions.
+An Azure Front Door [Rule set](front-door-rules-engine.md) consists of rules with a combination of match conditions and actions. This article provides a detailed description of the actions you can use in an Azure Front Door Rule set. An action defines the behavior that gets applied to a request type that match conditions identify. In an Azure Front Door Rule set, a rule can contain up to five actions.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure Front Door supports [server variables](rule-set-server-variables.md) in Rule set actions.
-The following actions are available to use in Azure Front Door rule set.
+The following actions are available to use in an Azure Front Door rule set:
-## <a name="CacheExpiration"></a> Cache expiration
+## <a name="RouteConfigurationOverride"></a> Route configuration override
-Use the **cache expiration** action to overwrite the time to live (TTL) value of the endpoint for requests that the rules match conditions specify.
+Use the **route configuration override** action to override the origin group or the caching configuration to use for the request. You can choose to override or honor the origin group configurations specified in the route. However, whenever you override the route configuration, you must configure caching. Otherwise, caching will be disabled for the request.
-> [!NOTE]
-> Origins may specify not to cache specific responses using the `Cache-Control` header with a value of `no-cache`, `private`, or `no-store`. In these circumstances, Front Door will never cache the content and this action will have no effect.
+You can also override how files get cached for specific requests, including:
+
+- The caching behavior specified by the origin.
+- How query string parameters are used to generate the request's cache key.
+- The time to live (TTL) value that controls how long content stays in the cache.
### Properties
+| Property | Supported values |
+|-||
+| Override origin group | <ul><li>**Yes:** Override the origin group used for the request.</li> <li>**No:** Use the origin group specified in the route.</li></ul> |
+| Caching | <ul><li>**Enabled:** Force caching to be enabled for the request.</li><li>**Disabled:** Force caching to be disabled for the request.</li></ul> |
+
+When **Override origin group** is set to **Yes**, set the following properties:
+
+| Property | Supported values |
+|-||
+| Origin group | The origin group that the request should be routed to. This overrides the configuration specified in the Front Door endpoint route. |
+| Forwarding protocol | The protocol for Front Door to use when forwarding the request to the origin. Supported values are HTTP only, HTTPS only, Match incoming request. This overrides the configuration specified in the Front Door endpoint route. |
+
+When **Caching** is set to **Enabled**, set the following properties:
+ | Property | Supported values | |-||
-| Cache behavior | <ul><li>**Bypass cache:** The content should not be cached. In ARM templates, set the `cacheBehavior` property to `BypassCache`.</li><li>**Override:** The TTL value returned from your origin is overwritten with the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `Override`.</li><li>**Set if missing:** If no TTL value gets returned from your origin, the rule sets the TTL to the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `SetIfMissing`.</li></ul> |
-| Cache duration | When _Cache behavior_ is set to `Override` or `Set if missing`, these fields must specify the cache duration to use. The maximum duration is 366 days.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: specify the duration in the format `d.hh:mm:ss`. |
+| Query string caching behavior | <ul><li>**Ignore query string:** Query strings aren't considered when the cache key gets generated. In ARM templates, set the `queryStringCachingBehavior` property to `IgnoreQueryString`.</li><li>**Use query string:** Each unique URL has its own cache key. In ARM templates, set the `queryStringCachingBehavior` property to `UseQueryString`.</li><li>**Ignore specified query string:** Query strings specified in the parameters get excluded when the cache key gets generated. In ARM templates, set the `queryStringCachingBehavior` property to `IgnoreSpecifiedQueryStrings`.</li><li>**Include specified query string:** Query strings specified in the parameters get included when the cache key gets generated. In ARM templates, set the `queryStringCachingBehavior` property to `IncludeSpecifiedQueryStrings`.</li></ul> |
+| Query parameters | The list of query string parameter names, separated by commas. This property is only set when *Query string caching behavior* is set to *Ignore Specified Query Strings* or *Include Specified Query Strings*. |
+| Compression | <ul><li>**Enabled:** Front Door dynamically compresses content at the edge, resulting in a smaller and faster response. For more information, see [File compression](front-door-caching.md#file-compression). In ARM templates, set the `isCompressionEnabled` property to `Enabled`.</li><li>**Disabled:** Front Door doesn't perform compression. In ARM templates, set the `isCompressionEnabled` property to `Disabled`.</li></ul> |
+| Cache behavior | <ul><li>**Honor origin:** Front Door always honors the origin response header directive. If the origin directive is missing, Front Door caches contents anywhere from 1 to 3 days. In ARM templates, set the `cacheBehavior` property to `HonorOrigin`.</li><li>**Override always:** The TTL value returned from your origin is overwritten with the value specified in the action. This behavior is only applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `OverrideAlways`.</li><li>**Override if origin missing:** If no TTL value gets returned from your origin, the rule sets the TTL to the value specified in the action. This behavior is only applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `OverrideIfOriginMissing`.</li></ul> |
+| Cache duration | When _Cache behavior_ is set to `Override always` or `Override if origin missing`, this property specifies the cache duration to use. The maximum duration is 366 days. For a value of 0 seconds, Front Door caches the content but must revalidate each request with the origin server.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: use `cacheDuration` to specify the duration in the format `d.hh:mm:ss`.</li></ul> |
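To illustrate the query string caching behaviors in the table above, here's a sketch of how a cache key might be derived from a request URL. The key format and the helper function are assumptions for illustration, not Front Door's actual internal key format:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

def cache_key(url, behavior, params=()):
    """Sketch of the queryStringCachingBehavior options. Illustrative
    only; the real service's cache key format isn't documented here."""
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    if behavior == "IgnoreQueryString":
        kept = []                                      # drop all parameters
    elif behavior == "UseQueryString":
        kept = pairs                                   # every unique URL is distinct
    elif behavior == "IgnoreSpecifiedQueryStrings":
        kept = [(k, v) for k, v in pairs if k not in params]
    elif behavior == "IncludeSpecifiedQueryStrings":
        kept = [(k, v) for k, v in pairs if k in params]
    else:
        raise ValueError(behavior)
    # Sort so parameter order doesn't produce distinct cache keys.
    return parts.path + "?" + urlencode(sorted(kept))
```

For example, with *Include specified query string* and `customerId` in the parameters, requests that differ only in other query parameters share one cached entry.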
-### Example
+### Examples
-In this example, we override the cache expiration to 6 hours, for matched requests that don't specify a cache duration already.
+In this example, we route all matched requests to an origin group named `MyOriginGroup`, regardless of the configuration in the Front Door endpoint route.
# [Portal](#tab/portal) # [JSON](#tab/json) ```json {
- "name": "CacheExpiration",
+ "name": "RouteConfigurationOverride",
"parameters": {
- "cacheBehavior": "SetIfMissing",
- "cacheType": "All",
- "cacheDuration": "0.06:00:00",
- "typeName": "DeliveryRuleCacheExpirationActionParameters"
+ "originGroupOverride": {
+ "originGroup": {
+ "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/MyOriginGroup"
+ },
+ "forwardingProtocol": "MatchRequest"
+ },
+ "cacheConfiguration": null,
+ "typeName": "DeliveryRuleRouteConfigurationOverrideActionParameters"
} } ```
In this example, we override the cache expiration to 6 hours, for matched reques
```bicep {
- name: 'CacheExpiration'
+ name: 'RouteConfigurationOverride'
parameters: {
- cacheBehavior: 'SetIfMissing'
- cacheType: All
- cacheDuration: '0.06:00:00'
- typeName: 'DeliveryRuleCacheExpirationActionParameters'
+ originGroupOverride: {
+ originGroup: {
+ id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/MyOriginGroup'
+ }
+ forwardingProtocol: 'MatchRequest'
+ }
+ cacheConfiguration: null
+ typeName: 'DeliveryRuleRouteConfigurationOverrideActionParameters'
} } ```
-## <a name="CacheKeyQueryString"></a> Cache key query string
+In this example, we set the cache key to include a query string parameter named `customerId`. Compression is enabled, and the origin's caching policies are honored.
-Use the **cache key query string** action to modify the cache key based on query strings. The cache key is the way that Front Door identifies unique requests to cache.
+# [Portal](#tab/portal)
-### Properties
-| Property | Supported values |
-|-||
-| Behavior | <ul><li>**Include:** Query strings specified in the parameters get included when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Include`.</li><li>**Cache every unique URL:** Each unique URL has its own cache key. In ARM templates, use the `queryStringBehavior` of `IncludeAll`.</li><li>**Exclude:** Query strings specified in the parameters get excluded when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Exclude`.</li><li>**Ignore query strings:** Query strings aren't considered when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `ExcludeAll`.</li></ul> |
-| Parameters | The list of query string parameter names, separated by commas. |
+# [JSON](#tab/json)
-### Example
+```json
+{
+ "name": "RouteConfigurationOverride",
+ "parameters": {
+ "cacheConfiguration": {
+ "queryStringCachingBehavior": "IncludeSpecifiedQueryStrings",
+ "queryParameters": "customerId",
+ "isCompressionEnabled": "Enabled",
+ "cacheBehavior": "HonorOrigin",
+ "cacheDuration": null
+ },
+ "originGroupOverride": null,
+ "typeName": "DeliveryRuleRouteConfigurationOverrideActionParameters"
+ }
+}
+```
-In this example, we modify the cache key to include a query string parameter named `customerId`.
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'RouteConfigurationOverride'
+ parameters: {
+ cacheConfiguration: {
+ queryStringCachingBehavior: 'IncludeSpecifiedQueryStrings'
+ queryParameters: 'customerId'
+ isCompressionEnabled: 'Enabled'
+ cacheBehavior: 'HonorOrigin'
+ cacheDuration: null
+ }
+ originGroupOverride: null
+ typeName: 'DeliveryRuleRouteConfigurationOverrideActionParameters'
+ }
+}
+```
+++
+In this example, we override the cache expiration to 6 hours for matched requests that don't specify a cache duration already. Front Door ignores the query string when it determines the cache key, and compression is enabled.
# [Portal](#tab/portal) # [JSON](#tab/json) ```json {
- "name": "CacheKeyQueryString",
+ "name": "RouteConfigurationOverride",
"parameters": {
- "queryStringBehavior": "Include",
- "queryParameters": "customerId",
- "typeName": "DeliveryRuleCacheKeyQueryStringBehaviorActionParameters"
+ "cacheConfiguration": {
+ "queryStringCachingBehavior": "IgnoreQueryString",
+ "cacheBehavior": "OverrideIfOriginMissing",
+      "cacheDuration": "0.06:00:00"
+ },
+ "originGroupOverride": null,
+ "typeName": "DeliveryRuleRouteConfigurationOverrideActionParameters"
} } ```
In this example, we modify the cache key to include a query string parameter nam
```bicep {
- name: 'CacheKeyQueryString'
+ name: 'RouteConfigurationOverride'
parameters: {
- queryStringBehavior: 'Include'
- queryParameters: 'customerId'
- typeName: 'DeliveryRuleCacheKeyQueryStringBehaviorActionParameters'
+ cacheConfiguration: {
+ queryStringCachingBehavior: 'IgnoreQueryString'
+ cacheBehavior: 'OverrideIfOriginMissing'
+ cacheDuration: '0.06:00:00'
+ }
+ originGroupOverride: null
+ typeName: 'DeliveryRuleRouteConfigurationOverrideActionParameters'
} } ```
In this example, we append the value `AdditionalValue` to the `MyRequestHeader`
# [Portal](#tab/portal) # [JSON](#tab/json)
In this example, we delete the header with the name `X-Powered-By` from the resp
# [Portal](#tab/portal) # [JSON](#tab/json)
Use the **URL redirect** action to redirect clients to a new URL. Clients are se
### Example
-In this example, we redirect the request to `https://contoso.com/exampleredirection?clientIp={client_ip}`, while preserving the fragment. An HTTP Temporary Redirect (307) is used. The IP address of the client is used in place of the `{client_ip}` token within the URL by using the `client_ip` [server variable](#server-variables).
+In this example, we redirect the request to `https://contoso.com/exampleredirection?clientIp={client_ip}`, while preserving the fragment. An HTTP Temporary Redirect (307) is used. The IP address of the client is used in place of the `{client_ip}` token within the URL by using the `client_ip` [server variable](rule-set-server-variables.md).
# [Portal](#tab/portal) # [JSON](#tab/json)
In this example, we rewrite all requests to the path `/redirection`, and don't p
# [Portal](#tab/portal) # [JSON](#tab/json)
In this example, we rewrite all requests to the path `/redirection`, and don't p
-## Origin group override
-
-Use the **Origin group override** action to change the origin group that the request should be routed to.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Origin group | The origin group that the request should be routed to. This overrides the configuration specified in the Front Door endpoint route. |
-
-### Example
-
-In this example, we route all matched requests to an origin group named `SecondOriginGroup`, regardless of the configuration in the Front Door endpoint route.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "OriginGroupOverride",
- "parameters": {
- "originGroup": {
- "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup"
- },
- "typeName": "DeliveryRuleOriginGroupOverrideActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'OriginGroupOverride'
- parameters: {
- originGroup: {
- id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup'
- }
- typeName: 'DeliveryRuleOriginGroupOverrideActionParameters'
- }
-}
-```
---
-## Server variables
-
-Rule Set server variables provide access to structured information about the request. You can use server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted.
-
-### Supported variables
-
-| Variable name | Description |
-||-|
-| `socket_ip` | The IP address of the direct connection to Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of `socket_ip` is the IP address of the proxy or load balancer. |
-| `client_ip` | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is picked from the header. |
-| `client_port` | The IP port of the client that made the request. |
-| `hostname` | The host name in the request from the client. |
-| `geo_country` | Indicates the requester's country/region of origin through its country/region code. |
-| `http_method` | The method used to make the URL request, such as `GET` or `POST`. |
-| `http_version` | The request protocol. Usually `HTTP/1.0`, `HTTP/1.1`, or `HTTP/2.0`. |
-| `query_string` | The list of variable/value pairs that follows the "?" in the requested URL.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `query_string` value will be `id=123&title=fabrikam`. |
-| `request_scheme` | The request scheme: `http` or `https`. |
-| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`. |
-| `ssl_protocol` | The protocol of an established TLS connection. |
-| `server_port` | The port of the server that accepted a request. |
-| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `url_path` value will be `/article.aspx`. |
-
-### Server variable format
-
-Server variables can be specified using the following formats:
-
-* `{variable}`: Include the entire server variable. For example, if the client IP address is `111.222.333.444` then the `{client_ip}` token would evaluate to `111.222.333.444`.
-* `{variable:offset}`: Include the server variable after a specific offset, until the end of the variable. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:3}` token would evaluate to `.222.333.444`.
-* `{variable:offset:length}`: Include the server variable after a specific offset, up to the specified length. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:4:3}` token would evaluate to `222`.
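The three token formats above can be sketched as a small substitution routine. This is an illustrative sketch only, not Front Door's implementation; the variable names and the sample IP value come from the examples above:

```python
import re

def expand(template: str, variables: dict) -> str:
    """Expand {variable}, {variable:offset}, and {variable:offset:length} tokens."""
    pattern = re.compile(r"\{(?P<name>\w+)(?::(?P<off>\d+)(?::(?P<len>\d+))?)?\}")

    def repl(match: re.Match) -> str:
        value = variables[match.group("name")]
        start = int(match.group("off")) if match.group("off") else 0
        end = start + int(match.group("len")) if match.group("len") else len(value)
        return value[start:end]

    return pattern.sub(repl, template)

vars = {"client_ip": "111.222.333.444"}
print(expand("{client_ip}", vars))      # 111.222.333.444
print(expand("{client_ip:3}", vars))    # .222.333.444
print(expand("{client_ip:4:3}", vars))  # 222
```

Offsets are zero-based, so `{client_ip:3}` drops the first three characters and `{client_ip:4:3}` takes three characters starting at index 4.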
-
-### Supported actions
-
-Server variables are supported on the following actions:
-
-* Cache key query string
-* Modify request header
-* Modify response header
-* URL redirect
-* URL rewrite
- ::: zone-end ::: zone pivot="front-door-classic"
-In Azure Front Door, a [Rules Engine](front-door-rules-engine.md) can consist up to 25 rules containing matching conditions and associated actions. This article provides a detailed description of each action you can define in a rule.
+In Azure Front Door (classic), a [Rules engine](front-door-rules-engine.md) can consist of up to 25 rules containing match conditions and associated actions. This article provides a detailed description of each action you can define in a rule.
An action defines the behavior that gets applied to the request type that matches the condition or set of match conditions. In the Rules engine configuration, a rule can have up to 10 matching conditions and 5 actions. You can only have one *Override Routing Configuration* action in a single rule.
frontdoor Front Door Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine.md
Previously updated : 03/07/2022 Last updated : 03/22/2022 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-A Rule Set is a customized rule engine that groups a combination of rules into a single set. You can associate a Rule Set with multiple routes. The Rule Set allows you to customize how requests get processed at the edge, and how Azure Front Door handles those requests.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+A Rule set is a customized rules engine that groups a combination of rules into a single set. You can associate a Rule set with multiple routes. The Rule set allows you to customize how requests get processed at the edge and how Azure Front Door handles those requests.
## Common supported scenarios
A Rule Set is a customized rule engine that groups a combination of rules into a
* Add, modify, or remove request/response header to hide sensitive information or capture important information through headers.
-* Support server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page load or when a form is posted. Server variable is currently supported on **[Rule Set actions](front-door-rules-engine-actions.md)** only.
+* Support server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted. Server variables are currently supported on **[Rule set actions](front-door-rules-engine-actions.md)** only.
## Architecture
In the following diagram, WAF policies get executed first. A Rule Set gets confi
## Terminology
-With Azure Front Door Rule Set, you can create a combination of Rules Set configuration, each composed of a set of rules. The following out lines some helpful terminologies you'll come across when configuring your Rule Set.
+With Azure Front Door Rule set, you can create a combination of Rule set configurations, each composed of a set of rules. The following outlines some helpful terminology you'll come across when configuring your Rule set.
For more information about quota limits, see [Azure subscription and service limits, quotas and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-* *Rule Set*: A set of rules that gets associated to one or multiple [routes](front-door-route-matching.md).
+* *Rule set*: A set of rules that gets associated to one or multiple [routes](front-door-route-matching.md).
-* *Rule Set rule*: A rule composed of up to 10 match conditions and 5 actions. Rules are local to a Rule Set and cannot be exported to use across Rule Sets. Users can create the same rule in multiple Rule Sets.
+* *Rule set rule*: A rule composed of up to 10 match conditions and 5 actions. Rules are local to a Rule set and can't be exported for use across Rule sets. You can create the same rule in multiple Rule sets.
-* *Match condition*: There are many match conditions that can be utilized to parse your incoming requests. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. *Regular expression is supported in conditions*. A full list of match conditions can be found in [Rule Set match conditions](rules-match-conditions.md).
+* *Match condition*: There are many match conditions that you can configure to parse your incoming requests. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. *Regular expression is supported in conditions*. A full list of match conditions can be found in [Rule set match conditions](rules-match-conditions.md).
-* *Action*: Actions dictate how AFD handles the incoming requests based on the matching conditions. You can modify caching behaviors, modify request headers/response headers, do URL rewrite and URL redirection. *Server variables are supported on Action*. A rule can contain up to 10 match conditions. A full list of actions can be found [Rule Set actions](front-door-rules-engine-actions.md).
+* *Action*: An action dictates how Azure Front Door handles the incoming requests based on the match conditions. You can modify caching behaviors, modify request and response headers, and set up URL redirects and URL rewrites. *Server variables are supported in actions*. A rule can contain up to 5 actions. A full list of actions can be found in [Rule set actions](front-door-rules-engine-actions.md).
## ARM template support
-Rule Sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](rules-match-conditions.md) and [actions](front-door-rules-engine-actions.md).
+Rule sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](rules-match-conditions.md) and [actions](front-door-rules-engine-actions.md).
## Next steps
-* Learn how to [create a Front Door Standard/Premium](standard-premium/create-front-door-portal.md).
-* Learn how to configure your first [Rule Set](standard-premium/how-to-configure-rule-set.md).
+* Learn how to [create an Azure Front Door profile](standard-premium/create-front-door-portal.md).
+* Learn how to configure your first [Rule set](standard-premium/how-to-configure-rule-set.md).
::: zone-end ::: zone pivot="front-door-classic"
-Rules Engine allows you to customize how HTTP requests gets handled at the edge and provides a more controlled behavior to your web application. Rules Engine for Azure Front Door has several key features, including:
+Rules Engine allows you to customize how HTTP requests get handled at the edge and provides more controlled behavior for your web application. Rules Engine for Azure Front Door (classic) has several key features, including:
* Enforces HTTPS to ensure all your end users interact with your content over a secure connection.
* Implements security headers to prevent browser-based vulnerabilities like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, X-Frame-Options, and Access-Control-Allow-Origin headers for Cross-Origin Resource Sharing (CORS) scenarios. Security-based attributes can also be defined with cookies.
Rules Engine allows you to customize how HTTP requests gets handled at the edge
## Architecture
-Rules engine handles requests at the edge. When a request hits your Front Door endpoint, WAF is executed first, followed by the Rules Engine configuration associated with your Frontend/Domain. If a Rules Engine configuration is executed, the means the parent routing rule is already a match. In order for all the actions in each rule to get executed, all the match conditions within a rule has to be satisfied. If a request doesn't match any of the conditions in your Rule Engine configuration, then the default Routing Rule is executed.
+Rules engine handles requests at the edge. When a request hits your Azure Front Door (classic) endpoint, WAF is executed first, followed by the Rules Engine configuration associated with your Frontend/Domain. If a Rules Engine configuration is executed, that means the parent routing rule is already a match. For all the actions in a rule to get executed, all the match conditions within the rule have to be satisfied. If a request doesn't match any of the conditions in your Rules Engine configuration, then the default Routing Rule is executed.
For example, in the following diagram, a Rules Engine gets configured to append a response header. The header changes the max-age of the cache control if the match condition gets met.
In both of these examples, when none of the match conditions are met, the specif
## Terminology
-With AFD Rules Engine, you can create a combination of Rules Engine configurations, each composed of a set of rules. The following outlines some helpful terminology you will come across when configuring your Rules Engine.
+With Azure Front Door (classic) Rules Engine, you can create a combination of Rules Engine configurations, each composed of a set of rules. The following outlines some helpful terminology you will come across when configuring your Rules Engine.
- *Rules Engine Configuration*: A set of rules that are applied to a single Route Rule. Each configuration is limited to 25 rules. You can create up to 10 configurations.
- *Rules Engine Rule*: A rule composed of up to 10 match conditions and 5 actions.
With AFD Rules Engine, you can create a combination of Rules Engine configuratio
## Next steps - Learn how to configure your first [Rules Engine configuration](front-door-tutorial-rules-engine.md). -- Learn how to [create a Front Door](quickstart-create-front-door.md).
+- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md).
- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
Title: Azure Front Door - URL Rewrite | Microsoft Docs
+ Title: Azure Front Door - URL Rewrite
description: This article helps you understand how URL rewrites works in Azure Front Door. Previously updated : 03/09/2022 Last updated : 03/22/2022 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-Azure Front Door Standard/Premium supports URL rewrite to change the path of a request that is being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers gets rewritten only when certain conditions gets met. These conditions are based on the request and response information.
+Azure Front Door supports URL rewrite to change the path of a request that is being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers get rewritten only when certain conditions are met. These conditions are based on the request and response information.
With this feature, you can redirect users to different origins based on scenarios, device types, or the requested file type.
For example, if I set **Preserve unmatched path to No**.
::: zone pivot="front-door-classic"
-Azure Front Door supports URL rewrite by configuring an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend. By default, if a custom forwarding path isn't provided, the Front Door will copy the incoming URL path to the URL used in the forwarded request. The Host header used in the forwarded request is as configured for the selected backend. Read [Backend Host Header](front-door-backend-pool.md#hostheader) to learn what it does and how you can configure it.
+Azure Front Door (classic) supports URL rewrite by configuring an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend. By default, if a custom forwarding path isn't provided, the Front Door will copy the incoming URL path to the URL used in the forwarded request. The Host header used in the forwarded request is as configured for the selected backend. Read [Backend Host Header](front-door-backend-pool.md#hostheader) to learn what it does and how you can configure it.
The robust part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches the wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
For example, if we read across the second row, it's saying that for incoming req
| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |

> [!NOTE]
-> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. For more information, see [Preserve unmatched path](front-door-url-rewrite.md#preserve-unmatched-path).
+> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard and Premium tier. For more information, see [Preserve unmatched path](front-door-url-rewrite.md#preserve-unmatched-path).
>

## Optional settings

There are extra optional settings you can also specify for any given routing rule:
-* **Cache Configuration** - If disabled or not specified, requests that match to this routing rule won't attempt to use cached content and instead will always fetch from the backend. Read more about [Caching with Front Door](front-door-caching.md).
+* **Cache Configuration** - If disabled or not specified, requests that match to this routing rule won't attempt to use cached content and instead will always fetch from the backend. Read more about [Caching with Azure Front Door](front-door-caching.md).
::: zone-end ## Next steps -- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn more about [Azure Front Door Rules engine](front-door-rules-engine.md)
+- Learn how to [create an Azure Front Door profile](create-front-door-portal.md).
+- Learn more about [Azure Front Door Rule set](front-door-rules-engine.md)
- Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md).
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
Title: Azure Front Door - Support for wildcard domains
+ Title: Support for wildcard domains - Azure Front Door
description: This article helps you understand how Azure Front Door supports mapping and managing wildcard domains in the list of custom domains. -+ na Previously updated : 09/29/2020 Last updated : 03/17/2022
+zone_pivot_groups: front-door-tiers
# Wildcard domains
Key scenarios that are improved with support for wildcard domains include:
> [!NOTE] > Currently, adding wildcard domains through Azure DNS is only supported via API, PowerShell, and the Azure CLI. Support for adding and managing wildcard domains in the Azure portal isn't available. +
+## Add a wildcard domain and certificate binding
+
+You can add a wildcard domain following guidance in [add a custom domain](standard-premium/how-to-add-custom-domain.md) for subdomains.
+
+> [!NOTE]
+> * Azure DNS supports wildcard records.
+> * Cache purge for a wildcard domain isn't supported. You have to specify a subdomain for cache purge.
+
+You can add as many single-level subdomains of the wildcard domain as you need. For example, for the wildcard domain *.contoso.com, you can add subdomains such as image.contoso.com and cart.contoso.com. A subdomain like www.image.contoso.com isn't a single-level subdomain of *.contoso.com. This functionality might be required for:
+
+* Defining a different route for a subdomain than the rest of the domains (from the wildcard domain).
+
+* Setting up a different WAF policy for a specific subdomain.
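The single-level rule described above can be sketched as a simple hostname check. This is an illustrative sketch only, not Front Door's validation logic:

```python
def is_single_level_subdomain(host: str, wildcard: str) -> bool:
    """Return True if host is a single-level subdomain of a wildcard like '*.contoso.com'."""
    assert wildcard.startswith("*.")
    root = wildcard[2:]  # e.g. 'contoso.com'
    if not host.endswith("." + root):
        return False
    # The label before '.contoso.com' must be exactly one DNS label (no dots).
    label = host[: -len(root) - 1]
    return label != "" and "." not in label

print(is_single_level_subdomain("image.contoso.com", "*.contoso.com"))      # True
print(is_single_level_subdomain("www.image.contoso.com", "*.contoso.com"))  # False
```

The second case fails because `www.image` spans two labels, which is why www.image.contoso.com can't be added under *.contoso.com.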
+
+To accept HTTPS traffic on your wildcard domain, you must enable HTTPS on it. The certificate binding for a wildcard domain requires a wildcard certificate. That is, the subject name of the certificate should also contain the wildcard domain.
+
+> [!NOTE]
+> * Currently, only using your own custom SSL certificate option is available for enabling HTTPS for wildcard domains. Azure Front Door managed certificates can't be used for wildcard domains.
+> * You can choose to use the same wildcard certificate from Azure Key Vault or from Azure Front Door managed certificates for subdomains.
+> * If you want to add a subdomain of a wildcard domain that's already validated in the Azure Front Door Standard or Premium profile, the domain validation is automatically approved if it uses the same custom SSL certificate (the use-your-own-certificate option).
+> * If a wildcard domain is validated and already added to one profile, a single-level subdomain can still be added to another profile as long as it is also validated.
+++ ## Adding wildcard domains
-You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `contoso.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.contoso.azurefd.net` validates the CNAME record map for the wildcard.
+You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `contoso.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.contoso.azurefd.net` validates the CNAME record map for the wildcard.
> [!NOTE] > Azure DNS supports wildcard records.
You can add as many single-level subdomains of the wildcard domain in front-end
- Defining a different route for a subdomain than the rest of the domains (from the wildcard domain). -- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without additional domain ownership validation, `*.bar.contosonews.com` needs to be added.
+- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without extra domain ownership validation, `*.bar.contosonews.com` needs to be added.
You can add wildcard domains and their subdomains with certain limitations: -- If a wildcard domain is added to an Azure Front Door profile:
- - The wildcard domain can't be added to any other Azure Front Door profile.
- - First-level subdomains of the wildcard domain can't be added to another Azure Front Door profile or an Azure Content Delivery Network profile.
-- If a subdomain of a wildcard domain is already added to an Azure Front Door profile or an Azure Content Delivery Network profile, the wildcard domain can't be used for other Azure Front Door profile.
+- If a wildcard domain is added to an Azure Front Door (classic) profile:
+ - The wildcard domain can't be added to any other Azure Front Door (classic) profile.
+ - First-level subdomains of the wildcard domain can't be added to another Azure Front Door (classic) profile or an Azure Content Delivery Network profile.
+- If a subdomain of a wildcard domain is already added to an Azure Front Door (classic) profile or an Azure Content Delivery Network profile, the wildcard domain can't be used for other Azure Front Door (classic) profile.
- If two profiles (Azure Front Door or Azure Content Delivery Network) have various subdomains of a root domain, then wildcard domains can't be added to either of the profiles. ## Certificate binding
You can choose to use the same wildcard certificate from Azure Key Vault or from
If a subdomain is added for a wildcard domain that already has a certificate associated with it, then you can't disable HTTPS for the subdomain. The subdomain uses the certificate binding for the wildcard domain, unless a different Key Vault or Azure Front Door managed certificate overrides it. + ## WAF policies
-WAF policies can be attached to wildcard domains, similar to other domains. A different WAF policy can be applied to a subdomain of a wildcard domain. For the subdomains, you must specify the WAF policy to be used even if it's the same policy as the wildcard domain. Subdomains do *not* automatically inherit the WAF policy from the wildcard domain.
+
+WAF policies can be attached to wildcard domains, similar to other domains. A different WAF policy can be applied to a subdomain of a wildcard domain. Subdomains automatically inherit the WAF policy from the wildcard domain if there's no explicit WAF policy associated with the subdomain. However, if the subdomain is added to a different profile from the wildcard domain's profile, the subdomain can't inherit the WAF policy associated with the wildcard domain.
+++
+WAF policies can be attached to wildcard domains, similar to other domains. A different WAF policy can be applied to a subdomain of a wildcard domain. For the subdomains, you must specify the WAF policy to be used even if it's the same policy as the wildcard domain. Subdomains *don't* automatically inherit the WAF policy from the wildcard domain.
+ If you don't want a WAF policy to run for a subdomain, you can create an empty WAF policy with no managed or custom rulesets.
When configuring a routing rule, you can select a wildcard domain as a front-end
## Next steps - Learn how to [create an Azure Front Door profile](quickstart-create-front-door.md).-- Learn how to [add a custom domain on Azure Front Door](front-door-custom-domain.md).
+- Learn how to [add a custom domain](front-door-custom-domain.md) to your Azure Front Door.
- Learn how to [enable HTTPS on a custom domain](front-door-custom-domain-https.md).
frontdoor Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/health-probes.md
+
+ Title: Backend health monitoring - Azure Front Door
+description: This article helps you understand how Azure Front Door monitors the health of your origins.
+
+documentationcenter: ''
+++
+ na
+ Last updated : 03/17/2022+++
+# Health probes
+
+> [!NOTE]
+> An *origin* and an *origin group* in this article refer to the backend and backend pool of an Azure Front Door (classic) configuration.
+>
+
+To determine the health and proximity of each backend for a given Azure Front Door environment, each Front Door environment periodically sends a synthetic HTTP/HTTPS request to each of your configured origins. Azure Front Door then uses the responses from these probes to determine the "best" origin to route your client requests to.
+
+> [!WARNING]
+> Since each Azure Front Door edge POP emits health probes to your origins, the health probe volume for your origins can be quite high. The number of probes depends on your customers' traffic location and your health probe frequency. If an Azure Front Door edge POP doesn't receive real traffic from your end users, the frequency of health probes from that edge POP is decreased from the configured frequency. If there's customer traffic to all the Azure Front Door edge POPs, the health probe volume can be high, depending on your health probe frequency.
+>
+> As an example, you can roughly estimate the health probe volume per minute to your origin when using the default probe frequency of 30 seconds: the probe volume on each of your origins equals the number of edge POPs times two requests per minute. The probing requests are fewer if there's no traffic sent to all of the edge POPs. For a list of edge locations, see [edge locations by region](edge-locations-by-region.md) for Azure Front Door. There can be more than one POP in each edge location.
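The estimate in the warning above is simple arithmetic. The POP count below is a made-up assumption for illustration; check the edge locations list for real numbers:

```python
def probe_volume_per_minute(edge_pops: int, probe_interval_seconds: int = 30) -> int:
    """Worst-case probes per minute on one origin: every POP probes at the configured interval."""
    probes_per_pop = 60 // probe_interval_seconds
    return edge_pops * probes_per_pop

# With the default 30-second interval, each POP sends 2 probes per minute,
# so a hypothetical 100 POPs would generate about 200 probes per minute per origin.
print(probe_volume_per_minute(edge_pops=100))  # 200
```

Lengthening the probe interval (or using HEAD probes, as recommended below) reduces this load proportionally.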
+
+> [!NOTE]
+> Azure Front Door HTTP/HTTPS probes are sent with `User-Agent` header set with value: `Edge Health Probe`.
+
+## Supported protocols
+
+Azure Front Door supports sending probes over either HTTP or HTTPS protocols. These probes are sent over the same TCP ports configured for routing client requests, and cannot be overridden.
+
+## Supported HTTP methods for health probes
+
+Azure Front Door supports the following HTTP methods for sending the health probes:
+
+1. **GET:** The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
+2. **HEAD:** The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. For new Front Door profiles, by default, the probe method is set as HEAD.
+
+> [!NOTE]
+> For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
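A probe like the ones described above can be approximated with a HEAD request that treats only 200 OK as healthy. This is a rough local sketch, not Front Door's probing agent; the host is a placeholder assumption:

```python
import http.client

def is_healthy(status: int) -> bool:
    """Only a 200 OK response counts as healthy; everything else is a failure."""
    return status == 200

def probe(host: str, path: str = "/", timeout: float = 5.0) -> bool:
    """Send a HEAD probe; any error (including network failure) counts as a failed probe."""
    try:
        conn = http.client.HTTPSConnection(host, timeout=timeout)
        conn.request("HEAD", path, headers={"User-Agent": "Edge Health Probe"})
        status = conn.getresponse().status
        conn.close()
        return is_healthy(status)
    except OSError:
        return False
```

A HEAD request returns no message body, which is why it's cheaper for your backend than a GET probe.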
+
+## Health probe responses
+
+| Responses | Description |
+| - | - |
+| Determining Health | A 200 OK status code indicates the backend is healthy. Everything else is considered a failure. If for any reason (including network failure) a valid HTTP response isn't received for a probe, the probe is counted as a failure.|
+| Measuring Latency | Latency is the wall-clock time measured from the moment immediately before we send the probe request to the moment when we receive the last byte of the response. We use a new TCP connection for each request, so this measurement isn't biased towards backends with existing warm connections. |
+
+## How Front Door determines backend health
+
+Azure Front Door uses the same three-step process below across all algorithms to determine health.
+
+1. Exclude disabled backends.
+
+1. Exclude backends that have health probe errors:
+
+ * This selection is done by looking at the last _n_ health probe responses. If at least _x_ are healthy, the backend is considered healthy.
+
+ * _n_ is configured by changing the SampleSize property in load-balancing settings.
+
+ * _x_ is configured by changing the SuccessfulSamplesRequired property in load-balancing settings.
+
+1. For the sets of healthy backends in the backend pool, Front Door additionally measures and maintains the latency (round-trip time) for each backend.
+
+> [!NOTE]
+> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests are sent based on the lowest configured sample interval. The health of the endpoint in all pools is determined by the responses from the same health probes.
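The sample-based decision in step 2 above can be sketched as follows. The parameter names mirror the SampleSize and SuccessfulSamplesRequired properties; the probe values are illustrative:

```python
from collections import deque

def backend_is_healthy(recent_probes, sample_size: int, successful_samples_required: int) -> bool:
    """Look at the last sample_size probe results (True = success); the backend is
    healthy if at least successful_samples_required of them succeeded."""
    window = deque(recent_probes, maxlen=sample_size)  # keep only the last n samples
    return sum(window) >= successful_samples_required

# SampleSize=4, SuccessfulSamplesRequired=3:
print(backend_is_healthy([True, False, True, True], 4, 3))   # True  (3 of 4 succeeded)
print(backend_is_healthy([True, False, False, True], 4, 3))  # False (only 2 of 4)
```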
+
+## Complete health probe failure
+
+If health probes fail for every backend in a backend pool, then Front Door considers all backends unhealthy and routes traffic in a round robin distribution across all of them.
+
+Once any backend returns to a healthy state, then Front Door will resume the normal load-balancing algorithm.
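The fallback behavior above can be sketched as a selection step: route to healthy backends when any exist, otherwise round-robin across all of them. This is an illustrative sketch, not Front Door's load balancer:

```python
import itertools

def backend_rotation(backends, probe_results):
    """Return an iterator over the backends traffic should go to.

    probe_results maps backend name -> last known health (True = healthy).
    If every backend is unhealthy, fall back to round robin across all of them.
    """
    healthy = [b for b in backends if probe_results.get(b, False)]
    pool = healthy if healthy else backends  # complete probe failure -> use everything
    return itertools.cycle(pool)

# All probes failed: traffic is spread round robin across both backends.
rr = backend_rotation(["a", "b"], {"a": False, "b": False})
print(next(rr), next(rr), next(rr))  # a b a
```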
+
+## Disabling health probes
+
+If you have a single backend in your backend pool, you can choose to disable health probes to reduce the load on your application backend. You can also disable health probes if you have multiple backends in the backend pool but only one of them is in an enabled state.
+
+## Next steps
+
+- Learn how to [create a Front Door profile](create-front-door-portal.md).
+- Learn about Azure Front Door [routing architecture](front-door-routing-architecture.md).
frontdoor How To Configure Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-endpoints.md
+
+ Title: 'Configure an endpoint with Front Door manager - Azure Front Door'
+description: This article shows you how to configure an endpoint for an existing Azure Front Door profile with Front Door manager.
++++ Last updated : 03/16/2022+++
+# Configure an endpoint with Front Door manager
+
+This article shows you how to create an endpoint for an existing Azure Front Door profile with Front Door manager.
+
+## Prerequisites
+
+Before you can create an Azure Front Door endpoint with Front Door manager, you must have an Azure Front Door profile created. The profile must have at least one endpoint. To organize your Azure Front Door endpoints by internet domain, web application, or other criteria, you can use multiple profiles.
+
+To create an Azure Front Door profile, see [create an Azure Front Door profile](create-front-door-portal.md).
+
+## Create a new Azure Front Door endpoint
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door profile.
+
+1. Select **Front Door manager**. Then select **+ Add an endpoint** to create a new endpoint.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/select-create-endpoint.png" alt-text="Screenshot of add an endpoint through Front Door manager." lightbox="./media/how-to-configure-endpoints/select-create-endpoint-expanded.png":::
+
+1. On the **Add an endpoint** page, enter a unique name for the endpoint.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/create-endpoint-page.png" alt-text="Screenshot of add an endpoint page.":::
+
+ | Setting | Description |
+ |--|--|
+    | Name | Enter a unique name for the new Azure Front Door Standard/Premium endpoint. Azure Front Door generates a unique endpoint hostname based on the endpoint name, in the form `<endpointname>-hash.z01.azurefd.net`. The endpoint hostname is a deterministic DNS name that helps prevent subdomain takeover. This name is used to access your cached resources at the domain `<endpointname>-hash.z01.azurefd.net`. |
+ | Status | Select the checkbox to enable this endpoint. |
+
+### Add a route
+
+1. To add a **Route**, first expand an endpoint from the list of endpoints in the Front Door manager.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/select-endpoint.png" alt-text="Screenshot of list of endpoints in Front Door manager." lightbox="./media/how-to-configure-endpoints/select-endpoint-expanded.png":::
+
+1. In the endpoint configuration pane, select **+ Add a route** to configure the mapping of your domains and matching URL path patterns to an origin group.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/add-route.png" alt-text="Screenshot of add a route button from endpoint configuration pane." lightbox="./media/how-to-configure-endpoints/add-route-expanded.png":::
+
+1. On the **Add a route** page, enter or select the following information:
+
+ :::image type="content" source="./media/how-to-configure-endpoints/create-route.png" alt-text="Screenshot of the add a route page.":::
+
+ | Setting | Description |
+ |--|--|
+ | Name | Enter a unique name for the new route. |
+ | **Domains** | |
+    | Domains | Select one or more domains that have been validated and aren't associated with another route. To add a new domain or custom domain, see [Add a custom domain](standard-premium/how-to-add-custom-domain.md). |
+    | Patterns to match | Configure all URL path patterns that this route accepts. For example, you can set the pattern to `/images/*` to accept all requests on the URL `www.contoso.com/images/*`. Azure Front Door first tries to find an exact match on the path; if there's no exact match, it looks for a wildcard path that matches. If no routing rule matches, the request is rejected with a 400 Bad Request HTTP response. Patterns to match are case insensitive, meaning paths with different casings are treated as duplicates. For example, if the same host uses the same protocol with the paths `/FOO` and `/foo`, the paths are considered duplicates, which isn't allowed in the *Patterns to match* setting. |
+ | Accepted protocols | Specify the protocols you want Azure Front Door to accept when the client is making the request. |
+ | **Redirect** | |
+    | Redirect all traffic to use HTTPS | Specify whether incoming HTTP requests are redirected to HTTPS. |
+ | **Origin group** | |
+    | Origin group | Select the origin group that requests are forwarded to. To add a new origin group, see [Configure an origin group](standard-premium/how-to-create-origin.md). |
+    | Origin path | This path rewrites the URL that Azure Front Door uses when constructing the request forwarded to the origin. By default, this path isn't provided, and Azure Front Door uses the incoming URL path in the request to the origin. You can also specify a wildcard path, which copies any matching part of the incoming path to the request path to the origin. The origin path is case sensitive. <br><br/> Pattern to match: `/foo/*` <br/> Origin path: `/fwd/` <br><br/> Incoming URL path: `/foo/a/b/c` <br/> URL from Azure Front Door to origin: `/fwd/a/b/c`. |
+    | Forwarding protocol | Select the protocol used for forwarding requests to the origin. |
+ | Caching | Select this option to enable caching of static content with Azure Front Door. |
+ | Rules | Select Rule Sets that will be applied to this Route. For more information about how to configure Rules, see [Configure a Rule Set for Azure Front Door](standard-premium/how-to-configure-rule-set.md). |
+
+1. Select **Add** to create the new route. The route appears in the list of routes for the endpoint.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/endpoint-route.png" alt-text="Screenshot of new created route in an endpoint.":::
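
The path-matching and origin-path rules described in the table above can be sketched in Python. This is illustrative only; the real matching logic is internal to Azure Front Door:

```python
def match_route(patterns, request_path):
    """Match a request path against route patterns: exact match first, then
    the longest matching wildcard ('/prefix/*'); None means 400 Bad Request."""
    p = request_path.lower()                      # patterns to match are case insensitive
    exact = [pat for pat in patterns if pat.lower() == p]
    if exact:
        return exact[0]
    wildcards = [pat for pat in patterns if pat.endswith("/*")
                 and p.startswith(pat[:-2].lower() + "/")]
    return max(wildcards, key=len, default=None)

def rewrite_to_origin(pattern, origin_path, request_path):
    """Apply an origin path: copy the part of the request path that matched
    the wildcard onto the configured origin path."""
    matched_prefix = pattern[:-2]                 # strip the trailing '/*'
    suffix = request_path[len(matched_prefix):].lstrip("/")
    return origin_path.rstrip("/") + "/" + suffix

patterns = ["/images/*", "/images/logo.png", "/foo/*"]
print(match_route(patterns, "/images/logo.png"))    # exact match wins
print(match_route(patterns, "/IMAGES/banner.png"))  # case-insensitive wildcard match
print(match_route(patterns, "/api/data"))           # None -> 400 Bad Request
print(rewrite_to_origin("/foo/*", "/fwd/", "/foo/a/b/c"))  # /fwd/a/b/c
```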
+
+### Add security policy
+
+1. In the *Security policy* pane, select **+ Add a policy** to apply an existing web application firewall policy, or create a new one, for your domains.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/add-policy.png" alt-text="Screenshot of add a policy button from endpoint configuration pane." lightbox="./media/how-to-configure-endpoints/add-policy.png":::
+
+1. On the **Add security policy** page, enter or select the following information:
+
+ :::image type="content" source="./media/how-to-configure-endpoints/add-security-policy.png" alt-text="Screenshot of add security policy page." :::
+
+ | Setting | Description |
+ |--|--|
+ | Name | Enter a unique name within this Front Door profile for the security policy. |
+ | **Web application firewall policy** | |
+ | Domains | Select one or more domains you wish to apply this web application firewall (WAF) policy to. |
+ | WAF Policy | Select or create a new WAF policy. When you select an existing WAF policy, it must be the same tier as the Azure Front Door profile. For more information about how to create a WAF policy to use with Azure Front Door, see [Configure WAF policy](../web-application-firewall/afds/waf-front-door-create-portal.md). |
+
+1. Select **Save** to create the security policy and associate it with the endpoint.
+
+ :::image type="content" source="./media/how-to-configure-endpoints/associated-security-policy.png" alt-text="Screenshot of security policy associated with an endpoint." lightbox="./media/how-to-configure-endpoints/associated-security-policy-expanded.png":::
+
+## Clean up resources
+
+To remove an endpoint, you first have to remove any security policies associated with it. Then select **Delete endpoint** to remove the endpoint from the Azure Front Door profile.
+
+## Next steps
+
+* Learn about the use of [origins and origin groups](origin.md) in an Azure Front Door configuration.
+* Learn about [rules match conditions](rules-match-conditions.md) in an Azure Front Door rule set.
+* Learn more about [policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md) for WAF with Azure Front Door.
+* Learn how to create [custom rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to protect your Azure Front Door profile.
frontdoor How To Configure Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-origin.md
+
+ Title: How to configure origins - Azure Front Door
+description: This article shows how to configure origins in an origin group to use with your Azure Front Door profile.
+Last updated: 03/22/2022
+# How to configure an origin for Azure Front Door
+
+This article shows you how to create an Azure Front Door origin in an existing origin group. The origin group can then be associated with a route to determine how traffic reaches your origins.
+
+> [!NOTE]
+> *Origin* and *origin group* in this article refer to the backend and backend pool of an Azure Front Door (classic) configuration.
+>
+
+## Prerequisites
+
+Before you can create an Azure Front Door origin, you must have an Azure Front Door Standard or Premium tier profile. To create an Azure Front Door profile, see [create an Azure Front Door profile](create-front-door-portal.md).
+
+## Create a new origin group
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door profile.
+
+1. Select **Origin groups** and then select **+ Add** to create a new origin group.
+
+ :::image type="content" source="./media/how-to-configure-origin/select-origin-group.png" alt-text="Screenshot of origin groups landing page.":::
+
+1. On the **Add an origin group** page, enter a unique **Name** for the new origin group. Then select **+ Add an Origin** to add a new origin.
+
+ :::image type="content" source="./media/how-to-configure-origin/add-origin-group.png" alt-text="Screenshot of add an origin group page.":::
+
+## Add an origin
+
+1. On the **Add an origin** page, enter or select the values based on your requirements:
+
+ :::image type="content" source="./media/how-to-configure-origin/add-origin.png" alt-text="Screenshot of add an origin page.":::
+
+ | Setting | Value |
+ | | |
+ | Name | Enter a unique name for the new Azure Front Door origin. |
+    | Origin Type | The type of resource you want to add. Azure Front Door Standard and Premium tiers support autodiscovery of application origins such as Azure App Service, Azure Cloud Services, and Azure Storage. If you want a different origin type in Azure or a non-Azure backend, select **Custom host**. |
+    | Host Name | If you didn't select **Custom host** as the origin type, select your backend origin host name from the dropdown. |
+ | Origin host header | Enter the host header value being sent to the backend for each request. For more information, see [origin host header](origin.md#origin-host-header). |
+ | Certificate subject name validation | During the Azure Front Door and origin TLS connection, Azure Front Door will validate if the request host name matches the host name in the certificate provided by the origin. For more information, see [End-to-end TLS](end-to-end-tls.md). |
+ | HTTP Port | Enter the value for the port that the origin supports for HTTP protocol. |
+ | HTTPS Port | Enter the value for the port that the origin supports for HTTPS protocol. |
+    | Priority | Assign a priority value to this origin when you want to use a primary origin for all traffic. This setup provides backups if the primary or another backup origin is unavailable. For more information, see [Priority](routing-methods.md#priority). |
+    | Weight | Assign a weight value to this origin to distribute traffic across a set of origins, either evenly or according to weight coefficients. For more information, see [Weights](routing-methods.md#weighted). |
+    | Private link | You can enable the Private Link service to secure connectivity to your origin. Supported origin types are Azure Blob Storage, App Service, and internal load balancers. |
+ | Status | Select this option to enable the origin. |
+
+ > [!IMPORTANT]
+ > During configuration, the Azure portal doesn't validate if the origin is accessible from Azure Front Door environments. You need to verify that Azure Front Door can reach your origin.
+ >
+
+1. Select **Add** once you have completed the origin settings. The origin should now appear in the origin group.
+
+1. Configure the rest of the origin group settings. You can update *Health probes* and *Load balancing* settings to meet your requirements.
+
+ > [!NOTE]
+    > * You can configure session affinity to ensure requests from the same end user get directed to the same origin. For more information, see [session affinity](routing-methods.md#affinity).
+ > * The health probe path is **case sensitive**.
+ >
+
+ :::image type="content" source="./media/how-to-configure-origin/save-origin-group.png" alt-text="Screenshot of a configured origin group.":::
+
+1. Select **Add** to save the origin group configuration. The origin group should now appear on the origin group page.
+
+ :::image type="content" source="./media/how-to-configure-origin/origin-group-list.png" alt-text="Screenshot of origin group in origin groups list.":::
+
+## Origin response timeout
+
+1. Origin response timeout can be found on the **Overview** page of your Azure Front Door profile.
+
+ :::image type="content" source="./media/how-to-configure-origin/origin-response-timeout.png" alt-text="Screenshot of origin response timeout button from the overview page.":::
+
+ > [!IMPORTANT]
+ > This timeout value is applied to all endpoints in the Azure Front Door profile.
+ >
+
+1. The value of the response timeout must be between 16 and 240 seconds.
+
+ :::image type="content" source="./media/how-to-configure-origin/origin-response-timeout-box.png" alt-text="Screenshot of origin response timeout field.":::
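
The constraint above can be captured in a trivial sketch, useful if you validate settings before applying them through automation. The function name is invented for illustration:

```python
def validate_origin_response_timeout(seconds):
    """The origin response timeout applies to every endpoint in the
    Azure Front Door profile and must be between 16 and 240 seconds."""
    if not 16 <= seconds <= 240:
        raise ValueError("origin response timeout must be between 16 and 240 seconds")
    return seconds

print(validate_origin_response_timeout(60))   # 60 (accepted)
```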
+
+## Clean up resources
+
+To delete an origin group when you no longer need it, select the **...** and then select **Delete** from the drop-down menu.
+
+To remove an origin when you no longer need it, select the **...** and then select **Delete** from the drop-down menu.
+
+## Next steps
+
+To learn about custom domains, see [adding a custom domain](standard-premium/how-to-add-custom-domain.md) to your Azure Front Door endpoint.
frontdoor Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/manager.md
+
+ Title: 'Front Door manager - Azure Front Door'
+description: This article is about concepts of the Front Door manager. You'll learn about routes and security policies in an endpoint.
+Last updated: 03/16/2022
+# What is Azure Front Door manager?
+
+The Front Door manager in Azure Front Door Standard and Premium provides an overview of the endpoints you've configured for your Azure Front Door profile. With Front Door manager, you can manage your collection of endpoints. You can also configure routing rules along with their domains and origin groups, and the security policies you want to apply to protect your web application.
+
+## Routes within an endpoint
+
+An endpoint is a logical grouping of one or more routes that are associated with domains. A route contains the origin group configuration and the routing rules between domains and origins. An endpoint can have one or more routes, and a route can have multiple domains but only one origin group. You need at least one configured route for traffic to flow between your domains and the origin group.
+
+> [!NOTE]
+> * You can *enable* or *disable* an endpoint or a route.
+> * Traffic will only flow to origins once both the endpoint and the route are **enabled**.
+>
+
+A domain configured within a route can be either a custom domain or an endpoint domain. For more information about custom domains, see [create a custom domain](standard-premium/how-to-add-custom-domain.md) with Azure Front Door. Endpoint domains refer to the autogenerated domain name when you create a new endpoint. The name is a unique endpoint hostname with a hash value, in the format `endpointname-hash.z01.azurefd.net`. The endpoint domain is accessible when you associate it with a route.
+
+### Reuse of an endpoint domain name
+
+An endpoint domain can be reused within the same tenant, subscription, or resource group scope. You can also choose to not allow reuse of an endpoint domain at all. By default, the Azure portal allows tenant-level reuse of the endpoint domain. You can use the command line to configure the scope level of endpoint domain reuse; once changed, the Azure portal uses the scope level you defined through the command line.
+
+| Value | Behavior |
+|--|--|
+| TenantReuse | This is the default value. Objects with the same name in the same tenant receive the same domain label. |
+| SubscriptionReuse | Objects with the same name in the same subscription receive the same domain label. |
+| ResourceGroupReuse | Objects with the same name in the same resource group receive the same domain label. |
+| NoReuse | Objects with the same name receive a new domain label for each new instance. |
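
Conceptually, deterministic reuse means the domain label is a pure function of the endpoint name and the reuse scope: the same name in the same scope always maps to the same hostname. The hashing scheme below is invented for illustration; Azure Front Door's actual algorithm isn't documented here:

```python
import hashlib

def endpoint_hostname(endpoint_name, reuse_scope_id):
    """Illustrative only: derive a deterministic domain label so the same
    endpoint name within the same reuse scope maps to the same hostname."""
    digest = hashlib.sha256(f"{reuse_scope_id}/{endpoint_name}".encode()).hexdigest()[:8]
    return f"{endpoint_name}-{digest}.z01.azurefd.net"

# TenantReuse: same name in the same tenant -> same hostname.
a = endpoint_hostname("contoso", "tenant-1")
b = endpoint_hostname("contoso", "tenant-1")
c = endpoint_hostname("contoso", "tenant-2")
print(a == b, a == c)  # True False
```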
+
+## Security policy in an endpoint
+
+A security policy is an association of one or more domains with a Web Application Firewall (WAF) policy. The WAF policy will provide centralized protection for your web applications. If you manage security policies using the Azure portal, you can only associate a security policy with domains that are in the Routes configuration of that endpoint.
+
+> [!TIP]
+> * If you see that one of your domains is unhealthy, you can select the domain to go to the domains page. From there, you can take appropriate actions to troubleshoot the unhealthy domain.
+> * If you're running a large Azure Front Door profile, review [**Azure Front Door service limits**](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-tier-service-limits) and [**Azure Front Door routing limits**](front-door-routing-limits.md) to better manage your Azure Front Door.
+>
+
+## Front Door manager (classic)
+
+In Azure Front Door (classic), the Front Door manager is called Front Door designer. In Azure Front Door (classic), only one endpoint is supported for each Front Door profile.
+
+## Next steps
+
+* Learn how to [configure endpoints with Front Door manager](how-to-configure-endpoints.md).
+* Learn about the Azure Front Door [routing architecture](front-door-routing-architecture.md).
+* Learn [how traffic is matched to a route](front-door-routing-architecture.md) in Azure Front Door.
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
+
+ Title: Origins and origin groups - Azure Front Door
+description: This article explains the concept of what an origin and origin group is in a Front Door configuration.
+Last updated: 03/10/2022
+zone_pivot_groups: front-door-tiers
+
+# Origins and origin groups in Azure Front Door
+
+> [!NOTE]
+> *Origin* and *origin group* in this article refer to the backend and backend pool of an Azure Front Door (classic) configuration.
+>
+
+This article describes concepts about how to map your web application deployment with Azure Front Door. You'll also learn what an *origin* and an *origin group* are in the Azure Front Door configuration.
+
+## Origin
+
+An origin refers to the application deployment that Azure Front Door retrieves content from when caching isn't enabled or when a request misses the cache. Azure Front Door supports origins hosted in Azure as well as applications hosted in your on-premises datacenter or with another cloud provider. An origin shouldn't be confused with your database tier or storage tier. The origin should be viewed as the endpoint for your application backend. When you add an origin to an origin group in the Front Door configuration, you must also configure the following settings:
+
+* **Origin type:** The type of resource you want to add. Front Door supports autodiscovery of your application backends from App Service, Cloud Service, or Storage. If you want a different resource in Azure or even a non-Azure backend, select **Custom host**.
+
+ >[!IMPORTANT]
+    >During configuration, the APIs don't validate whether the origin is accessible from the Front Door environments. Make sure that Front Door can reach your origin.
+
+* **Subscription and origin host name:** If you didn't select **Custom host** for your backend host type, select your backend by choosing the appropriate subscription and the corresponding backend host name.
+
+* **Private Link:** Azure Front Door Premium tier supports sending traffic to an origin by using Private Link. For more information, see [Secure your Origin with Private Link](private-link.md).
+
+* **Certificate subject name validation:** During the Azure Front Door to origin TLS connection, Azure Front Door validates whether the request host name matches the host name in the certificate provided by the origin. From a security standpoint, Microsoft doesn't recommend disabling the certificate subject name check. If you want to disable this feature, see [End-to-end TLS encryption](end-to-end-tls.md).
+
+* **Origin host header:** The host header value sent to the backend for each request. For more information, see [Origin host header](#origin-host-header).
+
+* **Priority**. Assign priorities to your different backends when you want to use a primary service backend for all traffic. Also, provide backups if the primary or the backup backends are unavailable. For more information, see [Priority](routing-methods.md#priority).
+
+* **Weight**. Assign weights to your different backends to distribute traffic across a set of backends, either evenly or according to weight coefficients. For more information, see [Weights](routing-methods.md#weighted).
+
+### Origin host header
+
+Requests that are forwarded by Azure Front Door to an origin include a host header field that the origin uses to retrieve the targeted resource. The value for this field typically comes from the origin URI's host name and port.
+
+For example, a request made for `www.contoso.com` will have the host header `www.contoso.com`. If you use the Azure portal to configure your origin, the default value for this field is the host name of the origin. If your origin is `contoso-westus.azurewebsites.net`, in the Azure portal, the autopopulated value for the origin host header will be `contoso-westus.azurewebsites.net`. However, if you use Azure Resource Manager templates or another method without explicitly setting this field, Front Door will send the incoming host name as the value for the host header. If the request was made for `www.contoso.com`, and your origin `contoso-westus.azurewebsites.net` has an empty header field, Front Door will set the host header as `www.contoso.com`.
+
+Most app backends (Azure Web Apps, Blob storage, and Cloud Services) require the host header to match the domain of the backend. However, the frontend host that routes to your origin might use a different hostname, such as `www.contoso.net`.
+
+If your origin requires the host header to match the origin hostname, make sure that the origin host header includes the hostname of the origin.
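
The selection rule described above can be summarized in a one-line sketch (the helper name is hypothetical, shown for clarity):

```python
def origin_host_header(configured_value, incoming_host):
    """If the origin host header is left blank, Front Door forwards the
    incoming request's host name; otherwise the configured value is sent."""
    return configured_value if configured_value else incoming_host

# Portal default: autopopulated with the origin's own host name.
print(origin_host_header("contoso-westus.azurewebsites.net", "www.contoso.com"))
# contoso-westus.azurewebsites.net

# Empty header field: the incoming host name is forwarded instead.
print(origin_host_header("", "www.contoso.com"))
# www.contoso.com
```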
+
+#### Configure the origin host header for the origin
+
+To configure the **origin host header** field for an origin in the origin group section:
+
+1. Open your Front Door resource and select the origin group with the origin to configure.
+
+1. Add an origin if you haven't done so, or edit an existing one.
+
+1. Set the origin host header field to a custom value, or leave it blank to use the hostname of the incoming request as the host header value.
+
+## Origin group
+
+An origin group in Azure Front Door refers to a set of origins that receives similar traffic for their application. You can define the origin group as a logical grouping of your application instances across the world that receives the same traffic and responds with an expected behavior. These origins can be deployed across different regions or within the same region. All origins can be deployed in an Active/Active or Active/Passive configuration.
+
+An origin group defines how origins should be evaluated by health probes. It also defines the load balancing method between them.
+
+### Health probes
+
+Azure Front Door sends periodic HTTP/HTTPS probe requests to each of your configured origins. Probe requests determine the proximity and health of each origin so Azure Front Door can load balance your end-user requests. Health probe settings for an origin group define how the health status of app backends is polled. The following settings are available for health probe configuration:
+
+* **Path**: The URL used for probe requests for all the origins in the origin group. For example, if one of your origins is `contoso-westus.azurewebsites.net` and the path gets set to /probe/test.aspx, then Front Door environments, assuming the protocol is HTTP, will send health probe requests to `http://contoso-westus.azurewebsites.net/probe/test.aspx`.
+
+* **Protocol**: Defines whether to send the health probe requests from Front Door to your origins with HTTP or HTTPS protocol.
+
+* **Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
+
+ > [!NOTE]
+    > To lower the load and cost on your backends, we recommend using HEAD requests for health probes.
+
+* **Interval (seconds)**: Defines the frequency of health probes to your origins, or the intervals in which each of the Front Door environments sends a probe.
+
+ >[!NOTE]
+    >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your backends receive. For example, if the interval is set to 30 seconds with, say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
+
+For more information, see [Health probes](health-probes.md).
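
The arithmetic in the note above can be captured in a small sketch. Treating each POP as probing independently at the configured interval is a simplifying assumption:

```python
def probes_per_minute(pop_count, interval_seconds):
    """Approximate health probe volume an origin receives when each POP
    probes independently every `interval_seconds`."""
    return pop_count * 60 / interval_seconds

# The example from the note above: 100 POPs at a 30-second interval.
print(probes_per_minute(100, 30))   # 200.0 probe requests per minute
print(probes_per_minute(100, 10))   # a lower interval raises it to 600.0
```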
+
+### Load-balancing settings
+
+Load-balancing settings for the origin group define how we evaluate health probes. These settings determine if the origin is healthy or unhealthy. They also check how to load-balance traffic between different origins in the origin group. The following settings are available for load-balancing configuration:
+
+* **Sample size:** Identifies how many samples of health probes we need to consider for origin health evaluation.
+
+* **Successful sample size:** Of the sample size defined above, the number of successful samples needed to call the origin healthy. For example, assume the Front Door health probe interval is 30 seconds, the sample size is 5, and the successful sample size is 3. Each time the health probes for your origin are evaluated, Front Door looks at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the origin healthy.
+
+* **Latency sensitivity (extra latency):** Defines whether you want Azure Front Door to send the request to the origin within the latency measurement sensitivity range or forward the request to the closest backend.
+
+For more information, see [Least latency based routing method](routing-methods.md#latency).
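
The sample-size evaluation described above amounts to a sliding window over recent probe results. A minimal sketch, illustrative only:

```python
from collections import deque

def origin_is_healthy(samples, sample_size, successful_sample_size):
    """Evaluate the last `sample_size` probe results (1 = success, 0 = failure);
    the origin is healthy when at least `successful_sample_size` succeeded."""
    window = deque(samples, maxlen=sample_size)   # keep only the most recent samples
    return sum(window) >= successful_sample_size

# Probe interval 30s, sample size 5, successful sample size 3:
# the last five samples cover 150 seconds of probing.
print(origin_is_healthy([1, 0, 1, 1, 0, 1], sample_size=5, successful_sample_size=3))  # True
print(origin_is_healthy([1, 1, 0, 0, 1, 0], sample_size=5, successful_sample_size=3))  # False
```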
+
+## Next steps
+
+- Learn how to [create an Azure Front Door profile](create-front-door-portal.md).
+- Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md?pivots=front-door-standard-premium).
+
+- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md).
+- Learn about [Azure Front Door (classic) routing architecture](front-door-routing-architecture.md?pivots=front-door-classic).
+
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
+
+ Title: 'Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)'
+description: This page provides information about how to secure connectivity to your origin using Private Link.
+
+Last updated: 02/12/2022
+# Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](front-door-overview.md).
+
+## Overview
+
+[Azure Private Link](../private-link/private-link-overview.md) enables you to access Azure PaaS Services and Azure hosted services over a Private Endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Front Door Premium can connect to your origin via Private Link. Your origin can be hosted in your private virtual network or by using a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be publicly accessible.
+
+When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from a Front Door managed regional private network. This endpoint is managed by Azure Front Door. You'll receive an Azure Front Door private endpoint request for approval message at your origin.
+
+You must approve the private endpoint connection before traffic will flow to the origin. You can approve private endpoint connections by using the Azure portal, the Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
+
+After you enable a Private Link origin and approve the private endpoint connection, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
+
+After you approve the request, a private IP address gets assigned from Front Door's virtual network. Traffic between Azure Front Door and your origin traverses the established private link by using Azure's network backbone. Incoming traffic to your origin is now secured when coming from your Azure Front Door.
+
+## Limitations
+
+Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, South Central US, UK South, and Japan East.
+
+The backends that support direct private endpoint connectivity are currently limited to Azure Storage (blobs) and App Service. All other backends must be put behind an internal load balancer, as explained in the next steps below.
+
+For the best latency, always pick the Azure region closest to your origin when you enable the Front Door Private Link endpoint.
+
+## Next steps
+
+* To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect Azure Front Door Premium to a Web App origin with Private Link](standard-premium/how-to-enable-private-link-web-app.md).
+* To connect Azure Front Door Premium to your Storage Account via private link service, see [Connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md).
+* To connect Azure Front Door Premium to an internal load balancer origin with Private Link service, see [Connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md).
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
+
+ Title: Traffic routing methods to origin - Azure Front Door | Microsoft Docs
+description: This article explains the four different traffic routing methods used by Azure Front Door to origin.
+
+Last updated: 03/11/2022
+# Traffic routing methods to origin
+
+Azure Front Door supports four different traffic routing methods to determine how your HTTP/HTTPS traffic is distributed between different origins. When user requests reach the Front Door edge locations, the configured routing method gets applied to ensure requests are forwarded to the best backend resource.
+
+> [!NOTE]
+> *Origin* and *origin group* in this article refer to the backend and backend pool of an Azure Front Door (classic) configuration.
+>
+
+The four traffic routing methods are:
+
+* **[Latency](#latency):** Latency-based routing ensures that requests are sent to the lowest-latency origins acceptable within a sensitivity range. In other words, requests get sent to the nearest set of origins with respect to network latency.
+
+* **[Priority](#priority):** You can set a priority on your origins when you want a primary origin to service all traffic. A secondary origin can serve as a backup in case the primary origin becomes unavailable.
+
+* **[Weighted](#weighted):** A weighted value can be assigned to your origins when you want to distribute traffic across a set of origins evenly or according to the weight coefficients. Traffic gets distributed by the weight value if the latencies of the origins are within the acceptable latency sensitivity range in the origin group.
+
+* **[Session Affinity](#affinity):** You can configure session affinity for your frontend hosts or domains to ensure requests from the same end user get sent to the same origin.
+
+> [!NOTE]
+> **Endpoint name** in Azure Front Door Standard and Premium tier is called **Frontend host** in Azure Front Door (classic).
+>
+
+All Front Door configurations have backend health monitoring and automated instant global failover. For more information, see [Front Door backend monitoring](front-door-health-probes.md). Azure Front Door can be used with a single routing method, but depending on your application needs, you can combine multiple routing methods to build an optimal routing topology.
+
+> [!NOTE]
+> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override route configurations](front-door-rules-engine-actions.md#route-configuration-overrides) in Azure Front Door Standard and Premium tier or [override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) in Azure Front Door (classic) for a request. The origin group or backend pool set by the rules engine overrides the routing process described in this article.
+
+## <a name = "latency"></a>Lowest latencies based traffic-routing
+
+Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that's 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. Combined with the anycast architecture of Azure Front Door, this routing mechanism ensures that each of your end users gets the best performance based on their location.
+
+The 'closest' origin isn't necessarily closest as measured by geographic distance. Instead, Azure Front Door determines the closest origin by measuring network latency. Read more about [Azure Front Door routing architecture](front-door-routing-architecture.md).
+
+The following table shows the overall decision flow:
+
+| Available origins | Priority | Latency signal (based on health probe) | Weights |
+|-| -- | -- | -- |
+| First, select all origins that are enabled and returned healthy (200 OK) for the health probe. For example, if there are six origins A, B, C, D, E, and F, and among them C is unhealthy and E is disabled, the list of available origins is A, B, D, and F. | Next, the top-priority origins among the available ones are selected. If origins A, B, and D have priority 1 and origin F has priority 2, then the selected origins are A, B, and D. | Next, select the origins within the latency range (lowest latency plus the latency sensitivity in milliseconds). If origin A is 15 ms, B is 30 ms, and D is 60 ms away from the Azure Front Door environment where the request landed, and the latency sensitivity is 30 ms, then the lowest-latency pool consists of origins A and B, because D is more than 30 ms away from the closest origin, which is A. | Lastly, Azure Front Door round-robins the traffic among the final selected group of origins in the ratio of the weights specified. For example, if origin A has a weight of 5 and origin B has a weight of 8, the traffic is distributed in the ratio 5:8 between origins A and B. |
+
+>[!NOTE]
+> By default, the latency sensitivity property is set to 0 ms. With this setting, requests are always forwarded to the fastest available origins, and weights on the origins don't take effect unless two origins have the same network latency.
+>
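
The decision flow in the table above can be sketched in code. The following Python sketch is illustrative only (the `select_origin_pool` function and its field names are hypothetical, not a Front Door API); it reproduces the worked example from the table:

```python
def select_origin_pool(origins, latency_sensitivity_ms=0):
    # 1. Keep only origins that are enabled and returned healthy for the probe.
    available = [o for o in origins if o["enabled"] and o["healthy"]]
    # 2. Keep only the top-priority origins (lowest priority value).
    top_priority = min(o["priority"] for o in available)
    candidates = [o for o in available if o["priority"] == top_priority]
    # 3. Keep origins within the latency sensitivity window of the fastest one.
    fastest = min(o["latency_ms"] for o in candidates)
    pool = [o for o in candidates
            if o["latency_ms"] <= fastest + latency_sensitivity_ms]
    # 4. Distribute traffic across the pool in the ratio of the weights.
    total_weight = sum(o["weight"] for o in pool)
    return {o["name"]: o["weight"] / total_weight for o in pool}

origins = [
    {"name": "A", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 15, "weight": 5},
    {"name": "B", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 30, "weight": 8},
    {"name": "C", "enabled": True,  "healthy": False, "priority": 1, "latency_ms": 10, "weight": 1},
    {"name": "D", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 60, "weight": 1},
    {"name": "E", "enabled": False, "healthy": True,  "priority": 1, "latency_ms": 5,  "weight": 1},
    {"name": "F", "enabled": True,  "healthy": True,  "priority": 2, "latency_ms": 20, "weight": 1},
]
shares = select_origin_pool(origins, latency_sensitivity_ms=30)
print(shares)  # only A and B remain, split 5:8
```

With a latency sensitivity of 30 ms, the pool reduces to origins A and B, and traffic is split in the ratio 5:8, matching the table's example.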
+
+## <a name = "priority"></a>Priority-based traffic-routing
+
+Often an organization wants to provide high availability for their services by deploying one or more backup services in case the primary one goes down. Across the industry, this type of topology is also referred to as an Active/Standby or Active/Passive deployment. The *Priority* traffic-routing method allows you to easily implement this failover pattern.
+
+By default, an Azure Front Door origin group contains an equal-priority list of origins. Azure Front Door sends traffic only to the top-priority origins (lowest value for priority) as the primary set of origins. If the primary origins aren't available, Azure Front Door routes the traffic to the secondary set of origins (second-lowest value for priority). If both the primary and secondary origins aren't available, the traffic goes to the third set, and so on. Availability of an origin is based on its configured state (enabled or disabled) and its ongoing health status as determined by the health probes.
+
+### Configuring priority for origins
+
+Each origin in your origin group of the Azure Front Door configuration has a property called *Priority*, which can be a number between 1 and 5. The lower the value, the higher the priority. Origins can share the same priority values.
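
The failover behavior described above can be illustrated with a short sketch. The function and origin records below are hypothetical (this isn't Front Door code); they only demonstrate priority-based selection:

```python
def pick_active_priority(origins):
    # Route only to the lowest priority value among available origins;
    # higher-value (lower-priority) sets are standby and receive traffic
    # on failover.
    available = [o for o in origins if o["enabled"] and o["healthy"]]
    if not available:
        return []
    best = min(o["priority"] for o in available)
    return sorted(o["name"] for o in available if o["priority"] == best)

primary_up = [
    {"name": "primary", "priority": 1, "enabled": True, "healthy": True},
    {"name": "standby", "priority": 2, "enabled": True, "healthy": True},
]
primary_down = [
    {"name": "primary", "priority": 1, "enabled": True, "healthy": False},
    {"name": "standby", "priority": 2, "enabled": True, "healthy": True},
]
print(pick_active_priority(primary_up))    # ['primary']
print(pick_active_priority(primary_down))  # ['standby']
```

While the priority-1 origin is healthy it receives all traffic; once the health probe marks it unhealthy, traffic fails over to the priority-2 origin.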
+
+## <a name = "weighted"></a>Weighted traffic-routing method
+
+The *Weighted* traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting.
+
+In the weighted traffic-routing method, you assign a weight to each origin in the Azure Front Door configuration of your origin group. The weight is an integer ranging from 1 to 1000. This parameter uses a default weight of **50**.
+
+Among the available origins within the acceptable latency sensitivity range, traffic gets distributed with a round-robin mechanism using the ratio of the weights specified. If the latency sensitivity is set to 0 milliseconds, this property doesn't take effect unless there are two origins with the same network latency.
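
As an illustration of the weight ratio, the following hypothetical sketch round-robins a batch of requests across origins in proportion to their weights (Front Door's internal scheduler isn't public; this only demonstrates the ratio described above):

```python
import itertools
from collections import Counter

def distribute(weights, request_count):
    # Build a slot list where each origin appears once per unit of weight,
    # then cycle through it so traffic lands in the ratio of the weights.
    slots = list(itertools.chain.from_iterable(
        [name] * weight for name, weight in sorted(weights.items())))
    assigned = itertools.islice(itertools.cycle(slots), request_count)
    return Counter(assigned)

# Weights 5 and 8 split 26 requests in a 5:8 ratio.
print(distribute({"A": 5, "B": 8}, 26))  # Counter({'B': 16, 'A': 10})
```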
+
+The weighted method enables some useful scenarios:
+
+* **Gradual application upgrade**: Route a percentage of traffic to a new origin, and gradually increase the traffic over time to bring it on par with other origins.
+* **Application migration to Azure**: Create an origin group with both Azure and external origins. Adjust the weight of the origins to prefer the new origins. You can set this up gradually: start with the new origins disabled, then assign them the lowest weights, slowly increase the weights until they carry most of the traffic, and finally disable the less-preferred origins and remove them from the group.
+* **Cloud-bursting for additional capacity**: Quickly expand an on-premises deployment into the cloud by putting it behind Front Door. When you need extra capacity in the cloud, you can add or enable more origins and specify what portion of traffic goes to each origin.
+
+## <a name = "affinity"></a>Session Affinity
+
+By default, without session affinity, Azure Front Door forwards requests originating from the same client to different origins. Certain stateful applications, and certain scenarios, require that subsequent requests from the same user go to the origin that processed the initial request. The cookie-based session affinity feature is useful when you want to keep a user session on the same origin. By using managed cookies with a SHA256 hash of the origin URL as the identifier in the cookie, Azure Front Door can direct subsequent traffic from a user session to the same origin for processing.
+
+Session affinity can be enabled at the origin group level in the Azure Front Door Standard and Premium tiers, and at the frontend host level in Azure Front Door (classic), for each of your configured domains (or subdomains). Once enabled, Azure Front Door adds a cookie to the user's session. Cookie-based session affinity allows Front Door to identify different users even if they're behind the same IP address, which in turn allows a more even distribution of traffic between your different origins.
+
+The lifetime of the cookie is the same as the user's session, as Front Door currently only supports session cookies.
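
As a conceptual illustration of the managed-cookie approach described above, the sketch below derives a stable identifier as the SHA256 hash of an origin URL. The helper name and URLs are hypothetical, and the actual cookie format Front Door emits is internal to the service:

```python
import hashlib

def affinity_identifier(origin_url: str) -> str:
    # A stable identifier: the SHA256 hash of the origin URL. Every request
    # carrying a cookie with this value can be routed back to the same origin.
    return hashlib.sha256(origin_url.encode("utf-8")).hexdigest()

cookie = affinity_identifier("https://origin-a.contoso.com/")
print(cookie[:16])  # same origin URL always yields the same identifier
```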
+
+> [!NOTE]
+> Regardless of where it's configured, session affinity is remembered through the browser session cookie at the domain level. Subdomains under the same wildcard domain can share session affinity as long as the same user browser sends requests for the same origin resource.
+>
+> Public proxies may interfere with session affinity. This is because establishing a session requires Front Door to add a session affinity cookie to the response, which cannot be done if the response is cacheable as it would disrupt the cookies of other clients requesting the same resource. To protect against this, session affinity will **not** be established if the origin sends a cacheable response when this is attempted. If the session has already been established, it does not matter if the response from the origin is cacheable.
+>
+> Session affinity will be established in the following circumstances, **unless** the response has an HTTP 304 status code:
+> - The response has specific values set for the `Cache-Control` header that prevent caching, such as *private* or *no-store*.
+> - The response contains an `Authorization` header that has not expired.
+> - The response has an HTTP 302 status code.
+
+## Next steps
+
+- Learn how to [create an Azure Front Door](quickstart-create-front-door.md).
+- Learn [how Azure Front Door works](front-door-routing-architecture.md).
frontdoor Rule Set Server Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rule-set-server-variables.md
+
+ Title: Server variables - Azure Front Door
+description: This article provides a list of the server variables available in Azure Front Door rule sets.
+Last updated: 03/22/2022
+# Azure Front Door Rule set server variables
+
+Rule set server variables provide access to structured information about the request when you work with [Rule sets](front-door-rules-engine.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json).
+
+When you use [Rule set match conditions](rules-match-conditions.md), server variables are available as match conditions so that you can identify requests with specific properties.
+
+When you use [Rule set actions](front-door-rules-engine-actions.md), you can use server variables to dynamically change request and response headers, and to rewrite URLs, paths, and query strings, for example, when a new page loads or when a form is posted.
+
+> [!NOTE]
+> Server variables are only available in the Azure Front Door Standard and Premium tiers.
+
+## Supported variables
+
+| Variable name | Description |
+|-|-|
+| `socket_ip` | The IP address of the direct connection to Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of `socket_ip` is the IP address of the proxy or load balancer.<br/> To access this server variable in a match condition, use [Socket address](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#socket-address). |
+| `client_ip` | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is picked from the header.<br />To access this server variable in a match condition, use [Remote address](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#remote-address) and configure the *Operator* to *IP Match* or *IP Not Match*. |
+| `client_port` | The IP port of the client that made the request. <br/> To access this server variable in a match condition, use [Client port](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#client-port).|
+| `hostname` | The host name in the request from the client. <br/> To access this server variable in a match condition, use [Host name](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#host-name).|
+| `geo_country` | Indicates the requester's country/region of origin through its country/region code. <br/> To access this server variable in a match condition, use [Remote address](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#remote-address) and configure the *Operator* to *Geo Match* or *Geo Not Match*.|
+| `http_method` | The method used to make the URL request, such as `GET` or `POST`.<br/> To access this server variable in a match condition, use [Request method](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-method).|
+| `http_version` | The request protocol. Usually `HTTP/1.0`, `HTTP/1.1`, or `HTTP/2.0`.<br/> To access this server variable in a match condition, use [HTTP version](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#http-version).|
+| `query_string` | The list of variable/value pairs that follows the "?" in the requested URL.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `query_string` value will be `id=123&title=fabrikam`.<br/> To access this server variable in a match condition, use [Query string](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#query-string).|
+| `request_scheme` | The request scheme: `http` or `https`.<br/> To access this server variable in a match condition, use [Request protocol](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-protocol).|
+| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`.<br/> To access this server variable in a match condition, use [Request URL](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-url).|
+| `ssl_protocol` | The protocol of an established TLS connection.<br/> To access this server variable in a match condition, use [SSL protocol](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#ssl-protocol).|
+| `server_port` | The port of the server that accepted a request.<br/> To access this server variable in a match condition, use [Server port](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#server-port).|
+| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `url_path` value will be `/article.aspx`.<br/> To access this server variable in a match condition, use [Request path](rules-match-conditions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#request-path).|
+
+## Server variable format
+
+When you work with Rule Set actions, specify server variables by using the following formats:
+
+* `{variable}`: Include the entire server variable. For example, if the client IP address is `111.222.333.444` then the `{client_ip}` token would evaluate to `111.222.333.444`.
+* `{variable:offset}`: Include the server variable after a specific offset, until the end of the variable. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:3}` token would evaluate to `.222.333.444`.
+* `{variable:offset:length}`: Include the server variable after a specific offset, up to the specified length. The offset is zero-based. For example, when the variable `var` is `AppId=01f592979c584d0f9d679db3e66a3e5e`:
+ * Offsets within range, no lengths: `{var:0}` = `AppId=01f592979c584d0f9d679db3e66a3e5e`, `{var:6}` = `01f592979c584d0f9d679db3e66a3e5e`, `{var:-8}` = `e66a3e5e`
+ * Offsets out of range, no lengths: `{var:-128}` = `AppId=01f592979c584d0f9d679db3e66a3e5e`, `{var:128}` = null
+ * Offsets and lengths within range: `{var:0:5}` = `AppId`, `{var:7:7}` = `1f59297`, `{var:7:-7}` = `1f592979c584d0f9d679db3e`
+ * Zero lengths: `{var:0:0}` = null, `{var:4:0}` = null
+ * Offsets within range and lengths out of range: `{var:0:100}` = `AppId=01f592979c584d0f9d679db3e66a3e5e`, `{var:5:100}` = `=01f592979c584d0f9d679db3e66a3e5e`, `{var:0:-48}` = null, `{var:4:-48}` = null
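
The truncation rules above can be checked with a small helper. The following is an illustrative reimplementation of the offset and length semantics shown in the examples, not Front Door code:

```python
def expand(value, offset=0, length=None):
    # Emulate {var:offset} and {var:offset:length} truncation.
    n = len(value)
    start = n + offset if offset < 0 else offset
    start = max(start, 0)  # out-of-range negative offset: whole value
    # No length: to the end; negative length: trim that many chars from the end.
    end = n if length is None else (n + length if length < 0 else start + length)
    end = min(end, n)
    if end <= start:       # zero-width or inverted window evaluates to null
        return None
    return value[start:end]

var = "AppId=01f592979c584d0f9d679db3e66a3e5e"
print(expand(var, 6))      # 01f592979c584d0f9d679db3e66a3e5e
print(expand(var, -8))     # e66a3e5e
print(expand(var, 0, 5))   # AppId
print(expand(var, 7, -7))  # 1f592979c584d0f9d679db3e
print(expand(var, 128))    # None
print(expand(var, 0, -48)) # None
```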
+
+## Supported rule set actions
+
+Server variables are supported on the following Rule set actions:
+
+* Query string caching behavior in [Route configuration override](front-door-rules-engine-actions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#RouteConfigurationOverride)
+* [Modify request header](front-door-rules-engine-actions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#ModifyRequestHeader)
+* [Modify response header](front-door-rules-engine-actions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#ModifyResponseHeader)
+* [URL redirect](front-door-rules-engine-actions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#UrlRedirect)
+* [URL rewrite](front-door-rules-engine-actions.md?toc=%2fazure%2ffrontdoor%2fstandard-premium%2ftoc.json#UrlRewrite)
+
+## Next steps
+
+* Learn more about [Azure Front Door Rule set](front-door-rules-engine-actions.md).
+* Learn more about [Rule set match conditions](rules-match-conditions.md).
+* Learn more about [Rule set actions](front-door-rules-engine-actions.md).
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
-Previously updated : 03/03/2022
+Last updated : 03/22/2022
zone_pivot_groups: front-door-tiers
-# Azure Front Door rules match conditions
+# Rules match conditions
::: zone pivot="front-door-standard-premium"
-In Azure Front Door Standard/Premium [rule sets](front-door-rules-engine.md), a rule consists of none or some match conditions and an action. This article provides detailed descriptions of match conditions you can use in Azure Front Door rule sets.
+In Azure Front Door [Rule sets](front-door-rules-engine.md), a rule consists of zero or more match conditions and an action. This article provides detailed descriptions of the match conditions you can use in Azure Front Door rule sets.
::: zone-end ::: zone pivot="front-door-classic"
-In Azure Front Door [rules engines](front-door-rules-engine.md), a rule consists of none or some match conditions and an action. This article provides detailed descriptions of match conditions you can use in Azure Front Door rules engines.
+In Azure Front Door (classic) [Rules engines](front-door-rules-engine.md), a rule consists of zero or more match conditions and an action. This article provides detailed descriptions of the match conditions you can use in Azure Front Door (classic) Rules engines.
::: zone-end
The first part of a rule is a match condition or set of match conditions. A rule
You can use a match condition to:
+
+* Filter requests based on a specific IP address, port, country, or region.
+* Filter requests by header information.
+* Filter requests from mobile devices or desktop devices.
+* Filter requests from request file name and file extension.
+* Filter requests by hostname, SSL protocol, request URL, protocol, path, query string, post args, and other values.
+++
* Filter requests based on a specific IP address, country, or region.
* Filter requests by header information.
* Filter requests from mobile devices or desktop devices.
* Filter requests from request file name and file extension.
-* Filter requests from request URL, protocol, path, query string, post args, etc.
+* Filter requests by request URL, protocol, path, query string, post arguments, and other values.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
::: zone-end
Use the **device type** match condition to identify requests that have been made
| Property | Supported values | |-||
-| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_ |
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
| Value | `Mobile`, `Desktop` |

### Example
Use the **HTTP version** match condition to identify requests that have been mad
| Property | Supported values | |-||
-| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_ |
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
| Value | `2.0`, `1.1`, `1.0`, `0.9` |

### Example
In this example, we match all requests that have been sent by using the HTTP 2.0
## Request cookies
-Use the **request cookies** match condition to identify requests that have include a specific cookie.
+Use the **request cookies** match condition to identify requests that have included a specific cookie.
> [!NOTE] > The **request cookies** match condition is only available on Azure Front Door Standard/Premium.
Use the **request cookies** match condition to identify requests that have inclu
### Example
-In this example, we match all requests that have include a cookie named `deploymentStampId` with a value of `1`.
+In this example, we match all requests that have included a cookie named `deploymentStampId` with a value of `1`.
# [Portal](#tab/portal)
In this example, we match all requests where the query string contains the strin
The **remote address** match condition identifies requests based on the requester's location or IP address. You can specify multiple values to match, which will be combined using OR logic.
-* Use CIDR notation when specifying IP address blocks. This means that the syntax for an IP address block is the base IP address followed by a forward slash and the prefix size. For example:
+* Use CIDR notation when specifying IP address blocks. The syntax for an IP address block is the base IP address followed by a forward slash and the prefix size. For example:
 * **IPv4 example**: `5.5.5.64/26` matches any requests that arrive from addresses 5.5.5.64 through 5.5.5.127.
 * **IPv6 example**: `1:2:3::/48` matches any requests that arrive from addresses 1:2:3:0:0:0:0:0 through 1:2:3:ffff:ffff:ffff:ffff:ffff.
* When you specify multiple IP addresses and IP address blocks, 'OR' logic is applied.
 * **IPv4 example**: if you add two IP addresses `1.2.3.4` and `10.20.30.40`, the condition is matched for any requests that arrive from either address 1.2.3.4 or 10.20.30.40.
 * **IPv6 example**: if you add two IP addresses `1:2:3:4:5:6:7:8` and `10:20:30:40:50:60:70:80`, the condition is matched for any requests that arrive from either address 1:2:3:4:5:6:7:8 or 10:20:30:40:50:60:70:80.
-* Remote Address represents the original client IP that is either from the network connection or typically the X-Forwarded-For request header if the user is behind a proxy.
+* The remote address represents the original client IP, which comes either from the network connection or, more typically, from the X-Forwarded-For request header if the user is behind a proxy. Use the [socket address](#socket-address) match condition if you need to match based on the TCP request's IP address.
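
As an illustration of CIDR matching with OR logic as described above, the following hypothetical sketch uses Python's standard `ipaddress` module (note the module requires fully formed IPv6 prefixes, such as `1:2:3::/48`):

```python
import ipaddress

def remote_address_matches(client_ip, match_values):
    # Multiple addresses/blocks are combined with OR logic: the condition
    # matches if the client IP falls inside any one of them.
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(block, strict=False)
               for block in match_values)

print(remote_address_matches("5.5.5.100", ["5.5.5.64/26"]))  # True (64-127)
print(remote_address_matches("5.5.5.200", ["5.5.5.64/26"]))  # False
print(remote_address_matches("10.20.30.40",
                             ["1.2.3.4/32", "10.20.30.40/32"]))  # True
```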
### Properties
The **remote address** match condition identifies requests based on the requeste
### Example
-In this example, we match all requests where the request has not originated from the United States.
+In this example, we match all requests where the request hasn't originated from the United States.
# [Portal](#tab/portal)
In this example, we match all requests where the request file extension is `pdf`
## Request header
-The **request header** match condition identifies requests that include a specific header in the request. You can use this match condition to check if a header exists whatever its value, or to check if the header matches a specified value. You can specify multiple values to match, which will be combined using OR logic.
+The **request header** match condition identifies requests that include a specific header in the request. You can use this match condition to check if a header exists or to check if the header matches a specified value. You can specify multiple values to match, which will be combined using OR logic.
### Properties
The **request method** match condition identifies requests that use the specifie
| Property | Supported values | |-|-|
-| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_ |
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
| Request method | One or more HTTP methods from: `GET`, `POST`, `PUT`, `DELETE`, `HEAD`, `OPTIONS`, `TRACE`. If multiple values are specified, they're evaluated using OR logic. |

### Example
The **request protocol** match condition identifies requests that use the specif
| Property | Supported values | |-|-|
-| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_ |
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
| Request protocol | `HTTP`, `HTTPS` |

### Example
In this example, we match all requests where the request URL begins with `https:
+
+## Host name
+
+The **host name** match condition identifies requests based on the specified hostname in the request from the client. The match condition uses the `Host` header value to evaluate the hostname. You can specify multiple values to match, which will be combined using OR logic.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | Any operator from the [standard operator list](#operator-list). |
+| Value | One or more string values representing the request hostname to match. If multiple values are specified, they're evaluated using OR logic. |
+| Case transform | Any case transform from the [standard string transforms list](#string-transform-list). |
+
+### Example
+
+In this example, we match all requests with a `Host` header that ends with `contoso.com`.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "HostName",
+ "parameters": {
+ "operator": "EndsWith",
+ "negateCondition": false,
+ "matchValues": [
+ "contoso.com"
+ ],
+ "transforms": [],
+ "typeName": "DeliveryRuleHostNameConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'HostName'
+ parameters: {
+ operator: 'EndsWith'
+ negateCondition: false
+ matchValues: [
+ 'contoso.com'
+ ]
+ transforms: []
+ typeName: 'DeliveryRuleHostNameConditionParameters'
+ }
+}
+```
+++
+## SSL protocol
+
+The **SSL protocol** match condition identifies requests based on the SSL protocol of an established TLS connection. You can specify multiple values to match, which will be combined using OR logic.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | <ul><li>In the Azure portal: `Equal`, `Not Equal`</li><li>In ARM templates: `Equal`; use the `negateCondition` property to specify _Not Equal_</li></ul> |
+| SSL protocol | <ul><li>In the Azure portal: `1.0`, `1.1`, `1.2`</li><li>In ARM templates: `TLSv1`, `TLSv1.1`, `TLSv1.2`</li></ul> |
+
+### Example
+
+In this example, we match all requests that use the TLS 1.2 protocol.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "SslProtocol",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "TLSv1.2"
+ ],
+ "typeName": "DeliveryRuleSslProtocolConditionParameters"
+ }
+},
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'SslProtocol'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'TLSv1.2'
+ ]
+ typeName: 'DeliveryRuleSslProtocolConditionParameters'
+ }
+}
+```
+++
+## Socket address
+
+The **socket address** match condition identifies requests based on the IP address of the direct connection to Azure Front Door edge. You can specify multiple values to match, which will be combined using OR logic.
+
+> [!NOTE]
+> If the client used an HTTP proxy or a load balancer to send the request, the socket address is the IP address of the proxy or load balancer.
+>
+> Use the [remote address](#remote-address) match condition if you need to match based on the client's original IP address.
+
+* Use CIDR notation when specifying IP address blocks. The syntax for an IP address block is the base IP address followed by a forward slash and the prefix size. For example:
+ * **IPv4 example**: `5.5.5.64/26` matches any requests that arrive from addresses 5.5.5.64 through 5.5.5.127.
+ * **IPv6 example**: `1:2:3::/48` matches any requests that arrive from addresses 1:2:3:0:0:0:0:0 through 1:2:3:ffff:ffff:ffff:ffff:ffff.
+* When you specify multiple IP addresses and IP address blocks, 'OR' logic is applied.
+ * **IPv4 example**: if you add two IP addresses `1.2.3.4` and `10.20.30.40`, the condition is matched for any requests that arrive from either address 1.2.3.4 or 10.20.30.40.
+ * **IPv6 example**: if you add two IP addresses `1:2:3:4:5:6:7:8` and `10:20:30:40:50:60:70:80`, the condition is matched for any requests that arrive from either address 1:2:3:4:5:6:7:8 or 10:20:30:40:50:60:70:80.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | <ul><li>In the Azure portal: `IP Match`, `Not IP Match`</li><li>In ARM templates: `IPMatch`; use the `negateCondition` property to specify _Not IP Match_</li></ul> |
+| Value | Specify one or more IP address ranges. If multiple IP address ranges are specified, they're evaluated using OR logic. |
+
+### Example
+
+In this example, we match all requests from IP addresses in the range 5.5.5.64/26.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "SocketAddr",
+ "parameters": {
+ "operator": "IPMatch",
+ "negateCondition": false,
+ "matchValues": [
+ "5.5.5.64/26"
+ ],
+ "typeName": "DeliveryRuleSocketAddrConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'SocketAddr'
+ parameters: {
+ operator: 'IPMatch'
+ negateCondition: false
+ matchValues: [
+ '5.5.5.64/26'
+ ]
+ typeName: 'DeliveryRuleSocketAddrConditionParameters'
+ }
+}
+```
+++
+## Client port
+
+The **client port** match condition identifies requests based on the TCP port of the client that made the request. You can specify multiple values to match, which will be combined using OR logic.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | Any operator from the [standard operator list](#operator-list). |
+| Value | One or more port numbers, expressed as integers. If multiple values are specified, they're evaluated using OR logic. |
+
+### Example
+
+In this example, we match all requests with a client port of 1111.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "ClientPort",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "1111"
+ ],
+ "typeName": "DeliveryRuleClientPortConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'ClientPort'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ '1111'
+ ]
+ typeName: 'DeliveryRuleClientPortConditionParameters'
+ }
+}
+```
+++
+## Server port
+
+The **server port** match condition identifies requests based on the TCP port of the Azure Front Door server that accepted the request. The port must be 80 or 443. You can specify multiple values to match, which are combined using OR logic.
+
+### Properties
+
+| Property | Supported values |
+|-|-|
+| Operator | Any operator from the [standard operator list](#operator-list). |
+| Value | A port number, which must be either 80 or 443. If multiple values are specified, they're evaluated using OR logic. |
+
+### Example
+
+In this example, we match all requests with a server port of 443.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "ServerPort",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "443"
+ ],
+ "typeName": "DeliveryRuleServerPortConditionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'ServerPort'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ '443'
+ ]
+ typeName: 'DeliveryRuleServerPortConditionParameters'
+ }
+}
+```
++++

## Operator list

For rules that accept values from the standard operator list, the following operators are valid:

| Operator | Description | ARM template support |
|-|--|--|
-| Any | Matches when there is any value, regardless of what it is. | `operator`: `Any` |
+| Any | Matches when there's any value, regardless of what it is. | `operator`: `Any` |
| Equal | Matches when the value exactly matches the specified string. | `operator`: `Equal` |
| Contains | Matches when the value contains the specified string. | `operator`: `Contains` |
| Less Than | Matches when the length of the value is less than the specified integer. | `operator`: `LessThan` |
For rules that accept values from the standard operator list, the following oper
| Begins With | Matches when the value begins with the specified string. | `operator`: `BeginsWith` |
| Ends With | Matches when the value ends with the specified string. | `operator`: `EndsWith` |
| RegEx | Matches when the value matches the specified regular expression. [See below for further details.](#regular-expressions) | `operator`: `RegEx` |
-| Not Any | Matches when there is no value. | `operator`: `Any` and `negateCondition` : `true` |
-| Not Equal | Matches when the value does not match the specified string. | `operator`: `Equal` and `negateCondition` : `true` |
-| Not Contains | Matches when the value does not contain the specified string. | `operator`: `Contains` and `negateCondition` : `true` |
-| Not Less Than | Matches when the length of the value is not less than the specified integer. | `operator`: `LessThan` and `negateCondition` : `true` |
-| Not Greater Than | Matches when the length of the value is not greater than the specified integer. | `operator`: `GreaterThan` and `negateCondition` : `true` |
-| Not Less Than or Equal | Matches when the length of the value is not less than or equal to the specified integer. | `operator`: `LessThanOrEqual` and `negateCondition` : `true` |
-| Not Greater Than or Equals | Matches when the length of the value is not greater than or equal to the specified integer. | `operator`: `GreaterThanOrEqual` and `negateCondition` : `true` |
-| Not Begins With | Matches when the value does not begin with the specified string. | `operator`: `BeginsWith` and `negateCondition` : `true` |
-| Not Ends With | Matches when the value does not end with the specified string. | `operator`: `EndsWith` and `negateCondition` : `true` |
-| Not RegEx | Matches when the value does not match the specified regular expression. [See below for further details.](#regular-expressions) | `operator`: `RegEx` and `negateCondition` : `true` |
+| Not Any | Matches when there's no value. | `operator`: `Any` and `negateCondition` : `true` |
+| Not Equal | Matches when the value doesn't match the specified string. | `operator`: `Equal` and `negateCondition` : `true` |
+| Not Contains | Matches when the value doesn't contain the specified string. | `operator`: `Contains` and `negateCondition` : `true` |
+| Not Less Than | Matches when the length of the value isn't less than the specified integer. | `operator`: `LessThan` and `negateCondition` : `true` |
+| Not Greater Than | Matches when the length of the value isn't greater than the specified integer. | `operator`: `GreaterThan` and `negateCondition` : `true` |
+| Not Less Than or Equal | Matches when the length of the value isn't less than or equal to the specified integer. | `operator`: `LessThanOrEqual` and `negateCondition` : `true` |
+| Not Greater Than or Equals | Matches when the length of the value isn't greater than or equal to the specified integer. | `operator`: `GreaterThanOrEqual` and `negateCondition` : `true` |
+| Not Begins With | Matches when the value doesn't begin with the specified string. | `operator`: `BeginsWith` and `negateCondition` : `true` |
+| Not Ends With | Matches when the value doesn't end with the specified string. | `operator`: `EndsWith` and `negateCondition` : `true` |
+| Not RegEx | Matches when the value doesn't match the specified regular expression. [See below for further details.](#regular-expressions) | `operator`: `RegEx` and `negateCondition` : `true` |
> [!TIP]
> For numeric operators like *Less than* and *Greater than or equals*, the comparison used is based on length. The value in the match condition should be an integer that specifies the length you want to compare.
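The length-based behavior of the numeric operators, and the way `negateCondition` produces the "Not" variants, can be sketched as follows. This is a minimal illustration of the semantics described in the table and tip above, not the service's evaluation engine; `evaluate` is a hypothetical helper name covering only a subset of the operators.

```python
def evaluate(operator: str, value: str, match_value, negate: bool = False) -> bool:
    """Sketch of a few standard operators. Numeric operators compare the
    *length* of the value against an integer, per the tip above."""
    ops = {
        "Any": lambda v, m: True,
        "Equal": lambda v, m: v == m,
        "Contains": lambda v, m: m in v,
        "LessThan": lambda v, m: len(v) < m,
        "GreaterThanOrEqual": lambda v, m: len(v) >= m,
        "BeginsWith": lambda v, m: v.startswith(m),
    }
    result = ops[operator](value, match_value)
    # negateCondition: true turns an operator into its "Not" variant
    return not result if negate else result

print(evaluate("LessThan", "abcd", 10))                 # True: length 4 < 10
print(evaluate("Equal", "abcd", "abcd", negate=True))   # False: "Not Equal"
```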
For rules that can transform strings, the following transforms are valid:
::: zone pivot="front-door-classic"
-* Learn more about Azure Front Door [Rules Engine](front-door-rules-engine.md)
+* Learn more about Azure Front Door (classic) [Rules Engine](front-door-rules-engine.md)
* Learn how to [configure your first Rules Engine](front-door-tutorial-rules-engine.md).
* Learn more about [Rules actions](front-door-rules-engine-actions.md)
For rules that can transform strings, the following transforms are valid:
::: zone pivot="front-door-standard-premium"
-* Learn more about Azure Front Door Standard/Premium [Rule Set](front-door-rules-engine.md).
+* Learn more about Azure Front Door [Rule Set](front-door-rules-engine.md).
* Learn how to [configure your first Rule Set](standard-premium/how-to-configure-rule-set.md).
* Learn more about [Rule actions](front-door-rules-engine-actions.md).
frontdoor Concept Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-origin.md
- Title: Origin and Origin group in Azure Front Door Standard/Premium
-description: This article describes what origin and origin group are in an Azure Front Door configuration.
---- Previously updated : 02/12/2022---
-# Origin and Origin group in Azure Front Door Standard/Premium (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article covers how your web application deployment works with Azure Front Door Standard/Premium. You'll also learn what an *origin* and an *origin group* are in the Azure Front Door Standard/Premium configuration.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Origin
-
-Azure Front Door Standard/Premium origin refers to the host name or public IP of your application that serves your client requests. Azure Front Door Standard/Premium supports both Azure and non-Azure origins, such as when your application is hosted in your on-premises datacenter or with another cloud provider. An origin shouldn't be confused with your database tier or storage tier. Think of the origin as the endpoint for your application backend. When you add an origin to an Azure Front Door Standard/Premium origin group, you must also add the following information:
-
-* **Origin type:** The type of resource you want to add. Front Door supports autodiscovery of your application backends from App Service, Cloud Service, or Storage. If you want a different resource in Azure or even a non-Azure backend, select **Custom host**.
-
- >[!IMPORTANT]
- >During configuration, the API doesn't validate whether the origin is accessible from the Front Door environment. Make sure that Front Door can reach your origin.
-
-* **Subscription and Origin host name:** If you didn't select **Custom host** for your backend host type, select your backend by choosing the appropriate subscription and the corresponding backend host name.
-
-* **Private Link:** Azure Front Door Premium supports sending traffic to an origin by using Private Link. For more information, see [Secure your Origin with Private Link](concept-private-link.md).
-
-* **Origin host header:** The host header value sent to the backend for each request. For more information, see [Origin host header](#hostheader).
-
-* **Priority:** Assign priorities to your origins when you want to use a primary origin for all traffic, with backups that take over if the primary origin is unavailable. For more information, see [Priority](#priority).
-
-* **Weight:** Assign weights to your origins to distribute traffic across a set of origins, either evenly or according to weight coefficients. For more information, see [Weights](#weighted).
-
-### <a name = "hostheader"></a>Origin host header
-
-Requests that are forwarded by Azure Front Door Standard/Premium to an origin will include a host header field that the origin uses to retrieve the targeted resource. The value for this field typically comes from the origin URI that has the host header and port.
-
-For example, a request made for `www.contoso.com` will have the host header `www.contoso.com`. If you use Azure portal to configure your origin, the default value for this field is the host name of the backend. If your origin is `contoso-westus.azurewebsites.net`, in the Azure portal, the autopopulated value for the origin host header will be `contoso-westus.azurewebsites.net`. However, if you use Azure Resource Manager templates or another method without explicitly setting this field, Front Door will send the incoming host name as the value for the host header. If the request was made for `www.contoso.com`, and your origin is `contoso-westus.azurewebsites.net` that has an empty header field, Front Door will set the host header as `www.contoso.com`.
-
-Most app backends (Azure Web Apps, Blob storage, and Cloud Services) require the host header to match the domain of the backend. However, the frontend host that routes to your backend will use a different hostname such as `www.contoso.net`.
-
-If your origin requires the host header to match the backend hostname, make sure that the backend host header includes the hostname of the backend.
-
-#### Configuring the origin host header for the origin
-
-To configure the **origin host header** field for an origin in the origin group section:
-
-1. Open your Front Door resource and select the origin group with the origin to configure.
-
-2. Add an origin if you haven't done so, or edit an existing one.
-
-3. Set the origin host header field to a custom value or leave it blank. The hostname for the incoming request will be used as the host header value.
-
-## Origin group
-
-An origin group in Azure Front Door Standard/Premium refers to a set of origins that receives similar traffic for your application. In other words, it's a logical grouping of your application instances across the world that receive the same traffic and respond with expected behavior. These origins can be deployed across different regions or within the same region. All origins can be in an Active/Active deployment mode, or in an Active/Passive configuration.
-
-An origin group defines how origins should be evaluated via health probes. It also defines how load balancing occurs between them.
-
-### Health probes
-
-Azure Front Door Standard/Premium sends periodic HTTP/HTTPS probe requests to each of your configured origins. Probe requests determine the proximity and health of each origin to load balance your end-user requests. Health probe settings for an origin group define how we poll the health status of app backends. The following settings are available for load-balancing configuration:
-
-* **Path**: The URL used for probe requests for all the origins in the origin group. For example, if one of your origins is `contoso-westus.azurewebsites.net` and the path gets set to /probe/test.aspx, then Front Door environments, assuming the protocol is HTTP, will send health probe requests to `http://contoso-westus.azurewebsites.net/probe/test.aspx`.
-
-* **Protocol**: Defines whether to send the health probe requests from Front Door to your origins with HTTP or HTTPS protocol.
-
-* **Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
- > [!NOTE]
- > For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
-
-* **Interval (seconds)**: Defines the frequency of health probes to your origins, that is, the interval at which each of the Front Door environments sends a probe.
-
- >[!NOTE]
- >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your backends receive. For example, if the interval is set to 30 seconds with say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
-
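The probe-volume arithmetic in the note above is simple to state explicitly: each Front Door environment probes once per interval, so total volume scales with the number of POPs. A hedged sketch (the helper name and the assumption of one probe per POP per interval are illustrative):

```python
def probes_per_minute(pop_count: int, interval_seconds: int) -> float:
    """Rough probe volume one backend receives: each Front Door
    environment (POP) sends one probe per interval."""
    return pop_count * (60 / interval_seconds)

# 100 POPs, 30-second interval -> about 200 probes per minute per backend
print(probes_per_minute(100, 30))  # 200.0, matching the note above
```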
-For more information, see [Health probes](../front-door-health-probes.md).
-
-### Load-balancing settings
-
-Load-balancing settings for the origin group define how we evaluate health probes. These settings determine whether an origin is healthy or unhealthy, and how traffic is load-balanced between the different origins in the origin group. The following settings are available for load-balancing configuration:
-
-* **Sample size:** Identifies how many samples of health probes we need to consider for origin health evaluation.
-
-* **Successful sample size:** Of the sample size defined above, the number of successful samples needed to call the origin healthy. For example, assume a Front Door health probe interval is 30 seconds, the sample size is 5, and the successful sample size is 3. Each time we evaluate the health probes for your origin, we look at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the origin as healthy.
-
-* **Latency sensitivity (extra latency):** Defines whether you want Azure Front Door Standard/Premium to send the request to the origin within the latency measurement sensitivity range or forward the request to the closest backend.
-
-For more information, see [Least latency based routing method](#latency).
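The sample-size evaluation described above (look at the last *sample size* probes; require at least *successful sample size* successes) can be sketched as a window check. This is an illustration of the rule, not the service's implementation; `is_healthy` is a hypothetical helper name.

```python
def is_healthy(recent_probes: list[bool], sample_size: int,
               successful_sample_size: int) -> bool:
    """Look at the last `sample_size` probe results; the origin is healthy
    when at least `successful_sample_size` of them succeeded."""
    window = recent_probes[-sample_size:]
    return sum(window) >= successful_sample_size

# 5 samples over 150 seconds (5 x 30-second interval); 3 successes required
print(is_healthy([True, False, True, False, True], 5, 3))   # True: 3 successes
print(is_healthy([True, False, False, False, True], 5, 3))  # False: only 2
```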
-
-## Routing methods
-
-Azure Front Door Standard/Premium supports different kinds of traffic-routing methods to determine how to route your HTTP/HTTPS traffic to different service endpoints. When client requests reach Front Door, the configured routing method gets applied to ensure the requests are forwarded to the best backend instance.
-
-There are three traffic routing methods available in Azure Front Door Standard/Premium:
-
-* **[Latency](#latency):** Latency-based routing ensures that requests are sent to the lowest-latency backends within an acceptable sensitivity range. In other words, user requests are sent to the "closest" set of backends with respect to network latency.
-* **[Priority](#priority):** You can assign priorities to your backends when you want to configure a primary backend to service all traffic. The secondary backend can be a backup in case the primary backend becomes unavailable.
-* **[Weighted](#weighted):** You can assign weights to your backends when you want to distribute traffic across a set of backends, either evenly or according to weight coefficients.
-
-All Azure Front Door Standard/Premium configurations include monitoring of backend health and automated instant global failover. For more information, see [Backend Monitoring](../front-door-health-probes.md). Your Front Door can work based off of a single routing method. But depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
-
-### <a name = "latency"></a>Lowest latencies based traffic-routing
-
-Deploying backends in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that's 'closest' to your end users. The default traffic-routing method for your Front Door configuration forwards requests from your end users to the closest backend of the Front Door environment that received the request. Combined with the Anycast architecture of Azure Front Door, this approach ensures that each of your end users gets maximum performance personalized based on their location.
-
-The 'closest' backend isn't necessarily closest as measured by geographic distance. Instead, Front Door determines the closest backends by measuring network latency.
-
-Below is the overall decision flow:
-
-| Available backends | Priority | Latency signal (based on health probe) | Weights |
-|-| -- | -- | -- |
-| First, select all backends that are enabled and returned healthy (200 OK) for the health probe. Suppose there are six backends A, B, C, D, E, and F, where C is unhealthy and E is disabled. The list of available backends is A, B, D, and F. | Next, the top priority backends among the available ones are selected. Suppose backends A, B, and D have priority 1 and backend F has priority 2. Then the selected backends are A, B, and D. | Select the backends within the latency range (least latency & latency sensitivity in ms specified). Suppose backend A is 15 ms, B is 30 ms, and D is 60 ms away from the Front Door environment where the request landed, and the latency sensitivity is 30 ms. Then the lowest latency pool consists of backends A and B, because D is more than 30 ms beyond the closest backend, A. | Lastly, Front Door round robins the traffic among the final selected pool of backends in the ratio of weights specified. If backend A has a weight of 5 and backend B has a weight of 8, the traffic is distributed in the ratio 5:8 between backends A and B. |
-
->[!NOTE]
-> By default, the latency sensitivity property is set to 0 ms, that is, always forward the request to the fastest available backend.
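The decision flow in the table above (available → top priority → latency window → weighted round robin) can be sketched end to end. This is a minimal Python illustration of the selection logic, not the service's implementation; the `Backend` type and `pick_backend` helper are hypothetical, and the final step is shown as a weighted random draw rather than a true round robin.

```python
import random
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    enabled: bool
    healthy: bool
    priority: int      # 1 = highest priority
    latency_ms: float  # measured from the receiving Front Door environment
    weight: int

def pick_backend(backends, latency_sensitivity_ms=0):
    # 1. Keep only enabled, healthy backends.
    available = [b for b in backends if b.enabled and b.healthy]
    # 2. Keep only the top-priority (lowest value) backends.
    top = min(b.priority for b in available)
    candidates = [b for b in available if b.priority == top]
    # 3. Keep backends within the latency sensitivity of the fastest one.
    fastest = min(b.latency_ms for b in candidates)
    pool = [b for b in candidates if b.latency_ms <= fastest + latency_sensitivity_ms]
    # 4. Distribute traffic among the final pool in the ratio of weights.
    return random.choices(pool, weights=[b.weight for b in pool])[0]

backends = [
    Backend("A", True, True, 1, 15, 5),
    Backend("B", True, True, 1, 30, 8),
    Backend("C", True, False, 1, 10, 1),  # unhealthy, filtered in step 1
    Backend("D", True, True, 1, 60, 1),   # outside the 30 ms window, step 3
    Backend("F", True, True, 2, 5, 1),    # lower priority, filtered in step 2
]
chosen = pick_backend(backends, latency_sensitivity_ms=30)
print(chosen.name in {"A", "B"})  # True: only A and B survive all three filters
```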
-
-### <a name = "priority"></a>Priority-based traffic-routing
-
-Often an organization wants to provide high availability for their services by deploying more than one backup service in case the primary one goes down. Across the industry, this topology is also referred to as Active/Standby or Active/Passive deployment topology. The 'Priority' traffic-routing method allows Azure customers to easily implement this failover pattern.
-
-Your default Front Door contains an equal priority list of backends. By default, Front Door sends traffic only to the top priority backends (lowest value for priority) that is, the primary set of backends. If the primary backends aren't available, Front Door routes the traffic to the secondary set of backends (second lowest value for priority). If both the primary and secondary backends aren't available, the traffic goes to the third, and so on. Availability of the backend is based on the configured status (enabled or disabled) and the ongoing backend health status as determined by the health probes.
-
-#### Configuring priority for backends
-
-Each backend in the backend pool of your Front Door configuration has a property called 'Priority', which is a number between 1 and 5. With Azure Front Door, you configure the backend priority explicitly using this property for each backend. Lower values represent a higher priority. Backends can share priority values.
-
-### <a name = "weighted"></a>Weighted traffic-routing method
-The 'Weighted' traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting.
-
-In the Weighted traffic-routing method, you assign a weight to each backend in the Front Door configuration of your backend pool. The weight is an integer from 1 to 1000. This parameter uses a default weight of '50'.
-
-With the list of available backends that have an acceptable latency sensitivity, the traffic gets distributed with a round-robin mechanism using the ratio of weights specified. If the latency sensitivity gets set to 0 milliseconds, then this property doesn't take effect unless there are two backends with the same network latency.
-
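The weight-ratio distribution described above can be sketched with a simple weighted schedule. This illustrates only the ratio (a real implementation interleaves backends rather than serving them in bursts); `weighted_round_robin` is a hypothetical helper name.

```python
from itertools import cycle

def weighted_round_robin(backends: dict, requests: int) -> dict:
    """Distribute requests in the ratio of the configured weights
    (weights are integers from 1 to 1000; the default is 50)."""
    schedule = cycle([name for name, w in backends.items() for _ in range(w)])
    counts = {name: 0 for name in backends}
    for _ in range(requests):
        counts[next(schedule)] += 1
    return counts

# Weights 5 and 8 -> traffic distributed in a 5:8 ratio
print(weighted_round_robin({"A": 5, "B": 8}, 13))  # {'A': 5, 'B': 8}
```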
-The weighted method enables some useful scenarios:
-
-* **Gradual application upgrade**: Route a percentage of traffic to a new backend, and gradually increase that traffic over time to bring it on par with the other backends.
-* **Application migration to Azure**: Create a backend pool with both Azure and external backends, then adjust the weights to prefer the new backends. You can set this up gradually: start with the new backends disabled, then assign them the lowest weights, slowly increase those weights until they take most of the traffic, and finally disable the less preferred backends and remove them from the pool.
-* **Cloud-bursting for additional capacity**: Quickly expand an on-premises deployment into the cloud by putting it behind Front Door. When you need extra capacity in the cloud, you can add or enable more backends and specify what portion of traffic goes to each backend.
-
-## Next steps
-
-Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md)
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/create-front-door-portal.md
- Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile - Azure portal'
-description: This quickstart shows how to use Azure Front Door Standard/Premium Service for your highly available and high-performance global web application by using the Azure portal.
------ Previously updated : 04/16/2021--
-#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
--
-# Quickstart: Create an Azure Front Door Standard/Premium profile - Azure portal
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
-
-In this quickstart, you learn how to create an Azure Front Door Standard/Premium profile using the Azure portal. You can create the Azure Front Door Standard/Premium profile through *Quick Create* with basic configurations or through *Custom create* with more advanced configurations. With *Custom create* you deploy two Web Apps. Next, you create the Azure Front Door Standard/Premium profile using the two Web Apps as your origin. You'll then verify connectivity to your Web Apps using the Azure Front Door Standard/Premium frontend hostname.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-## Create Front Door profile - Quick Create
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door Standard/Premium (Preview)*. Then select **Create**.
-
-1. On the **Compare offerings** page, select **Quick create**. Then select **Continue to create a Front Door**.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-quick-create.png" alt-text="Screenshot of compare offerings.":::
-
-1. On the **Create a front door profile** page, enter, or select the following settings.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-quick-create-2.png" alt-text="Screenshot of Front Door quick create page.":::
-
- | Settings | Value |
- | | |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new** and enter *contoso-appservice* in the text box.|
- | **Name** | Give your profile a name. This example uses **contoso-afd-quickcreate**. |
- | **Tier** | Select either Standard or Premium SKU. Standard SKU is content delivery optimized. Premium SKU builds on Standard SKU and is focused on security. See [Tier Comparison](tier-comparison.md). |
- | **Endpoint name** | Enter a globally unique name for your endpoint. |
- | **Origin type** | Select the type of resource for your origin. In this example, we select an App service as the origin that has Private Link enabled. |
- | **Origin host name** | Enter the hostname for your origin. |
- | **Enable Private Link** | If you want to have a private connection between your Azure Front Door and your origin. For more details, please refer to [Private link guidance](concept-private-link.md) and [Enable private link](./how-to-enable-private-link-web-app.md).
- | **Caching** | Select the check box if you want to cache contents closer to users globally using Azure Front Door's edge POPs and Microsoft network. |
- | **WAF policy** | Select **Create new** or select an existing WAF policy from the dropdown if you want to enable this feature. |
-
- > [!NOTE]
- > When creating an Azure Front Door Standard/Premium profile, you must select an origin from the same subscription the Front Door is created in.
-
-1. Select **Review + Create** to validate your configuration.
-
- > [!NOTE]
- > It may take a few mins for the configurations to be propagated to all edge POPs.
-
-1. Then select **Create** to deploy your Front Door profile.
-
-1. If you enabled Private Link, go to your origin (App service in this example). Select **Networking** > **Configure Private Link**. Then select the pending request from Azure Front Door, and click Approve. After a few seconds, your application will be accessible through Azure Front Door in a secure manner.
-
-## Create Front Door profile - Custom Create
-
-### Create a web app with two instances as the origin
-
-If you already have an origin or an origin group configured, skip to Create a Front Door Standard/Premium (Preview) for your application.
-
-In this example, we create a web application with two instances that run in different Azure regions. Both the web application instances run in *Active/Active* mode, so either one can take traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover.
-
-If you don't already have a web app, use the following steps to set up an example web app.
-
-1. Sign in to the Azure portal at https://portal.azure.com.
-
-1. On the top left-hand side of the screen, select **Create a resource** > **WebApp**.
-
-1. On the **Basics** tab of **Create Web App** page, enter, or select the following information.
-
- | Setting | Value |
- | | |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg1* in the text box.|
- | **Name** | Enter a unique **Name** for your web app. This example uses *WebAppContoso-001*. |
- | **Publish** | Select **Code**. |
- | **Runtime stack** | Select **.NET Core 2.1 (LTS)**. |
- | **Operating System** | Select **Windows**. |
- | **Region** | Select **Central US**. |
- | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
- | **Sku and size** | Select **Standard S1 100 total ACU, 1.75-GB memory**. |
-
- :::image type="content" source="../media/create-front-door-portal/create-web-app.png" alt-text="Quick create front door premium SKU in the Azure portal":::
-
-1. Select **Review + create**, review the summary, and then select **Create**. The deployment might take several minutes to complete.
-
-After your deployment is complete, create a second web app. Use the same settings as above, except for the following settings:
-
-| Setting | Value |
-| | |
-| **Resource group** | Select **Create new** and enter *FrontDoorQS_rg2*. |
-| **Name** | Enter a unique name for your Web App, in this example, *WebAppContoso-002*. |
-| **Region** | A different region, in this example, *South Central US* |
-| **App Service plan** > **Windows Plan** | Select **New** and enter *myAppServicePlanSouthCentralUS*, and then select **OK**. |
-
-### Create a Front Door Standard/Premium (Preview) for your application
-
-Configure Azure Front Door Standard/Premium (Preview) to direct user traffic based on lowest latency between the two web apps servers. Also secure your Front Door with Web Application Firewall.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door Standard/Premium (Preview)*. Then select **Create**.
-
-1. On the **Compare offerings** page, select **Custom create**. Then select **Continue to create a Front Door**.
-
-1. On the **Basics** tab, enter or select the following information, and then select **Next: Secret**.
-
- | Setting | Value |
- | | |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg0* in the text box. |
- | **Resource group location** | Select **East US** |
- | **Profile Name** | Enter a unique name in this subscription **Webapp-Contoso-AFD** |
- | **Tier** | Select **Premium**. |
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-2.png" alt-text="Create Front Door profile":::
-
-1. *Optional*: **Secrets**. If you plan to use managed certificates, this step is optional. If you have an existing Key Vault in Azure that you plan to use to bring your own certificate for a custom domain, select **Add a certificate**. You can also add a certificate in the management experience after creation.
-
- > [!NOTE]
- > You need to have the right permission to add the certificate from Azure Key Vault as a user.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-secret.png" alt-text="Screenshot of add a secret in custom create.":::
-
-1. In the **Endpoint** tab, select **Add an Endpoint** and give your endpoint a globally unique name. You can create multiple endpoints in your Azure Front Door Standard/Premium profile after you finish the create experience. This example uses *contoso-frontend*. Leave Origin response timeout (in seconds) and Status as default. Select **Add** to add the endpoint.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-endpoint.png" alt-text="Screenshot of add an endpoint.":::
-
-1. Next, add an origin group that contains your two web apps. Select **+ Add** to open the **Add an origin group** page. For **Name**, enter *myOriginGroup*, then select **+ Add an origin**.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-group.png" alt-text="Screenshot of add an origin group.":::
-
-1. On the **Add an origin** page, enter or select the following information. Then select **Add**.
-
- | Setting | Value |
- | | |
- | **Name** | Enter **webapp1** |
- | **Origin type** | Select **App services** |
- | **Host name** | Select `WebAppContoso-001.azurewebsites.net` |
- | **Origin host header** | Select `WebAppContoso-001.azurewebsites.net` |
- | **Other fields** | Leave all other fields as default. |
-
- > [!NOTE]
- > When creating an Azure Front Door Standard/Premium profile, you must select an origin from the same subscription the Azure Front Door Standard/Premium is created in.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-1.png" alt-text="Screenshot of add more origins.":::
-
-1. Repeat step 8 to add the second origin, webappcontoso-002. Select `webappcontoso-002.azurewebsites.net` as the **Origin host name** and **Origin host header**.
-
-1. On the **Add an origin group** page, you'll see the two origins added. Leave all other fields as default.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-group-2.png" alt-text="Screenshot of add an origin group page.":::
-
-1. Next, add a Route to map your frontend endpoint to the Origin group. This route forwards requests from the endpoint to myOriginGroup. Select **+ Add** on Route to configure a Route.
-
-1. On the **Add a route** page, enter or select the following information. Then select **Add**.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-route-without-caching.png" alt-text="Add route without caching":::
-
- | Setting | Value |
- | | |
- | **Name** | Enter **MyRoute** |
- | **Domain** | Select `contoso-frontend.z01.azurefd.net` |
- | **Host name** | Select `WebAppContoso-001.azurewebsites.net` |
- | **Patterns to match** | Leave as default. |
- | **Accepted protocols** | Leave as default. |
- | **Redirect** | Leave it default for **Redirect all traffic to use HTTPS**. |
- | **Origin group** | Select **MyOriginGroup**. |
- | **Origin path** | Leave as default. |
- | **Forwarding protocol** | Select **Match incoming request**. |
- | **Caching** | Leave unchecked in this quickstart. If you want to have your contents cached on edges, select the check box for **Enable caching**. |
- | **Rules** | Leave as default. After you create your front door profile, you can create custom rules and apply them to routes. |
-
- >[!WARNING]
- > **Ensure** that there is a route for each endpoint. An absence of a route can cause an endpoint to fail.
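The route configured above maps requests that arrive on the endpoint's domain to the origin group. As an illustration only, the matching a route performs can be sketched like this (the data shapes and names are hypothetical simplifications, not the service's actual implementation):

```python
def match_route(routes, host, path):
    """Return the origin group of the first route whose domain and
    pattern match the request, or None when no route matches, which
    is why a missing route causes an endpoint to fail."""
    for route in routes:
        if route["domain"] != host:
            continue
        for pattern in route["patterns"]:
            if pattern.endswith("/*"):
                # A wildcard pattern matches everything under its prefix.
                if path.startswith(pattern[:-1]):
                    return route["origin_group"]
            elif path == pattern:
                return route["origin_group"]
    return None

# Values mirroring this quickstart's route table.
routes = [{
    "domain": "contoso-frontend.z01.azurefd.net",
    "patterns": ["/*"],  # the default "Patterns to match"
    "origin_group": "myOriginGroup",
}]
```

With the default `/*` pattern, every path on the endpoint's domain resolves to `myOriginGroup`; a request for an unrouted host resolves to nothing.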
-
-1. Next, select **+ Add** on Security to add a WAF policy. Select **Add new** and give your policy a unique name. Select the check box for **Add bot protection**. Select the endpoint in **Domains**, then select **Add**.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-waf-policy-2.png" alt-text="add WAF policy":::
-
-1. Select **Review + Create**, and then **Create**. It takes a few minutes for the configurations to be propagated to all edge POPs. Now you have your first Front Door profile and endpoint.
-
- :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-review.png" alt-text="Review custom create":::
-
-## Verify Azure Front Door
-
-When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, go to `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
-
-If you created these apps in this quickstart, you'll see an information page.
-
-To test instant global failover, we'll use the following steps:
-
-1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
-
-1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-001** in this example.
-
-1. Select your web app, and then select **Stop**, and **Yes** to verify.
-
-1. Refresh your browser. You should see the same information page.
-
- >[!TIP]
    >There can be a slight delay before these actions take effect. You might need to refresh again.
-
-1. Find the other web app, and stop it as well.
-
-1. Refresh your browser. This time, you should see an error message.
-
- :::image type="content" source="../media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
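The failover you just exercised can be pictured as choosing the lowest-latency healthy origin from the origin group, and skipping any origin that's stopped. The following is a simplified, illustrative sketch with made-up latency figures, not Front Door's actual health-probe logic:

```python
def pick_origin(origins):
    """Pick the healthy origin with the lowest measured latency.
    A stopped (unhealthy) origin is skipped, which is what makes
    the failover in the steps above transparent to the client."""
    healthy = [o for o in origins if o["healthy"]]
    if not healthy:
        # Both web apps stopped: the client sees an error message.
        raise RuntimeError("no healthy origins in the group")
    return min(healthy, key=lambda o: o["latency_ms"])["host"]

# The two quickstart web apps, with hypothetical probe results.
origins = [
    {"host": "webappcontoso-001.azurewebsites.net", "healthy": True, "latency_ms": 25},
    {"host": "webappcontoso-002.azurewebsites.net", "healthy": True, "latency_ms": 90},
]
```

Stopping the nearest app simply removes it from the healthy set, so the next request is served by the remaining origin; stopping both leaves nothing to serve.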
-
-## Clean up resources
-
-After you're done, you can remove all the items you created. Deleting a resource group also deletes its contents. If you don't intend to use this Front Door, you should remove resources to avoid unnecessary charges.
-
-1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
-
-1. Filter or scroll down to find a resource group, such as **FrontDoorQS_rg0**.
-
-1. Select the resource group, then select **Delete resource group**.
-
- >[!WARNING]
    >This action is irreversible.
-
-1. Type the resource group name to verify, and then select **Delete**.
-
-Repeat the procedure for the other two groups.
-
-## Next steps
-
-Advance to the next article to learn how to add a custom domain to your Front Door.
-> [!div class="nextstepaction"]
-> [Add a custom domain](how-to-add-custom-domain.md)
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/faq.md
HTTP/2 protocol support is available to clients connecting to Azure Front Door o
Origin groups can be composed of two types of origins: - Public origins include storage accounts, App Service apps, Kubernetes instances, or any other custom hostname that has public connectivity. These origins must be defined either via a public IP address or a publicly resolvable DNS hostname. Members of origin groups can be deployed across availability zones, regions, or even outside of Azure as long as they have public connectivity. Public origins are supported for Azure Front Door Standard and Premium tiers.-- [Private Link origins](concept-private-link.md) are available when you use Azure Front Door (Premium).
+- [Private Link origins](../private-link.md) are available when you use Azure Front Door (Premium).
### What regions is the service available in?
Yes. In fact, Azure Front Door supports host, path, query string redirection, an
The best way to lock down your application to accept traffic only from your specific Front Door instance is to publish your application via Private Endpoint. Network traffic between Front Door and the application traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet.
-Learn more about the [securing origin for Front Door with Private Link](concept-private-link.md).
+Learn more about the [securing origin for Front Door with Private Link](../private-link.md).
Alternatively, to lock down your application to accept traffic only from your specific Front Door, set up IP ACLs for your backend, and then restrict your backend's traffic to requests carrying the specific value of the 'X-Azure-FDID' header sent by Front Door. These steps are detailed below:
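As an illustrative sketch of the header check a backend might perform (the helper name is hypothetical; the expected ID is the Front Door ID from your own profile):

```python
def request_is_from_my_front_door(headers, expected_fdid):
    """Accept the request only when the X-Azure-FDID header carries
    the Front Door ID you expect. HTTP header names are compared
    case-insensitively."""
    for name, value in headers.items():
        if name.lower() == "x-azure-fdid":
            return value == expected_fdid
    return False
```

Combined with IP ACLs for the Front Door address ranges, this rejects traffic that bypasses your Front Door instance or comes through someone else's.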
Any updates to routes or backend pools are seamless and will cause zero downtime
Azure Front Door (Standard) requires a public IP or a publicly resolvable DNS name to route traffic. Azure Front Door can't route directly to resources in a virtual network. You can use an Application Gateway or an Azure Load Balancer with a public IP to solve this problem.
-Azure Front Door (Premium) supports routing traffic to [Private Link origins](concept-private-link.md).
+Azure Front Door (Premium) supports routing traffic to [Private Link origins](../private-link.md).
### What are the various timeouts and limits for Azure Front Door?
-Learn about all the documented [timeouts and limits for Azure Front Door](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits).
+Learn about all the documented [timeouts and limits for Azure Front Door](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits).
### How long does it take for a rule to take effect after being added to the Front Door Rules Engine?
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Title: How to add a custom domain to your Azure Front Door Standard/Premium SKU configuration
-description: In this tutorial, you'll learn how to onboard a custom domain to Azure Front Door Standard/Premium SKU.
+ Title: 'How to add a custom domain - Azure Front Door'
+description: In this article, you'll learn how to onboard a custom domain to Azure Front Door profile using the Azure portal.
documentationcenter: '' Previously updated : 02/18/2021- Last updated : 03/18/2022+ #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
-# Create a custom domain on Azure Front Door Standard/Premium SKU (Preview) using the Azure portal
+# Configure a custom domain on Azure Front Door using the Azure portal
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+When you use Azure Front Door for application delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
-When you use Azure Front Door Standard/Premium for application delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
-
-After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of azurefd.net. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door Standard/Premium owned domain name. For example, 'https://www.contoso.com/photo.png'.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door owned domain name. For example, `https://www.contoso.com/photo.png`.
## Prerequisites

* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain

> [!NOTE]
-> While in Public Preview, using Azure DNS to create Apex domains is not supported on Azure Front Door Standard/Premium. There are other DNS providers that support CNAME flattening or DNS chasing that will allow APEX domains to be used for Azure Front Door Standard/Premium.
+> * Using Azure DNS to create apex domains isn't currently supported on Azure Front Door. There are other DNS providers that support CNAME flattening or DNS chasing, which allows apex domains to be used with Azure Front Door Standard/Premium.
+> * If a custom domain is validated in one of the Azure Front Door Standard, Premium, classic or classic Microsoft CDN profiles, then it can't be added to another profile.
+>
A custom domain is managed in the Domains section of the portal. A custom domain can be created and validated before being associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
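The association rule above (a domain or subdomain belongs to at most one endpoint at a time, while sibling subdomains may go to different endpoints) can be sketched as a small registry. This is purely illustrative; the names are hypothetical:

```python
class DomainRegistry:
    """Enforces the rule that each custom domain or subdomain is
    associated with only a single endpoint at a time."""

    def __init__(self):
        self._assoc = {}  # domain -> endpoint

    def associate(self, domain, endpoint):
        current = self._assoc.get(domain)
        if current is not None and current != endpoint:
            raise ValueError(
                f"{domain} is already associated with {current}")
        self._assoc[domain] = endpoint
```

Associating `www.contoso.com` with one endpoint and `shop.contoso.com` with another is allowed; re-associating `www.contoso.com` elsewhere without removing it first is not.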
-1. Under Settings for your Azure Front Door profile, select *Domains* and then the **Add a domain** button.
+1. Select **Domains** under settings for your Azure Front Door profile, and then select the **+ Add** button.
    :::image type="content" source="../media/how-to-add-custom-domain/add-domain-button.png" alt-text="Screenshot of add domain button on domain landing page.":::

1. The **Add a domain** page will appear, where you can enter information about the custom domain. You can choose Azure-managed DNS (recommended), or you can use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone, and then select a custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Select **Add** to add your custom domain.
+ > [!NOTE]
+ > Azure Front Door supports both Azure managed certificate and customer-managed certificates. If you want to use customer-managed certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
+ >
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::

A new custom domain is created with a validation state of **Submitting**.
A custom domain is managed by Domains section in the portal. A custom domain can
:::image type="content" source="../media/how-to-add-custom-domain/validate-custom-domain.png" alt-text="Screenshot of validate custom domain page.":::
-1. Select the refresh status. Once the domain is validated using the DNS TXT record, the validation status will change to **verified**. This operation may take a few minutes to validate.
-
- :::image type="content" source="../media/how-to-add-custom-domain/domain-status-verified.png" alt-text="Screenshot of custom domain verified.":::
1. Close the page to return to the custom domains list landing page. The provisioning state of the custom domain should change to **Provisioned**, and the validation state should change to **Approved**.

   :::image type="content" source="../media/how-to-add-custom-domain/provisioned-approved-status.png" alt-text="Screenshot of provisioned and approved status.":::
+### Domain validation state
+
+| Domain validation state | Description and actions |
+| -- | -- |
+| Submitting | When a new custom domain is added and being created, the validation state becomes Submitting. |
+| Pending | A domain goes to the Pending state once the DNS TXT record challenge is generated. Add the DNS TXT record to your DNS provider and wait for the validation to complete. If the state remains Pending even after the TXT record is updated at the DNS provider, select Regenerate to refresh the TXT record, then add the new TXT record to your DNS provider again. |
+| Rejected | This state applies when the certificate provider or authority rejects issuance of the managed certificate, for example when the domain is invalid. Select the Rejected link and then select Regenerate on the Validate the custom domain page, as shown in the screenshots below this table. Then select Add to add the TXT record in the DNS provider. |
+| TimeOut | The domain validation state changes from Pending to Timeout if you don't add the DNS TXT record to your DNS provider within seven days, or if an invalid DNS TXT record is added. Select Timeout and then select Regenerate on the Validate the custom domain page, as shown in the screenshots below this table. Then select Add. Repeat steps 3 and 4. |
+| Approved | The domain has been successfully validated. |
+| Pending re-validation | This state occurs when the managed certificate is 45 days or less from expiry. If you have a CNAME record pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select Pending re-validation and then select Regenerate on the Validate the custom domain page, as shown in the screenshots below this table. Then select Add, or add the TXT record with your own DNS provider's DNS management. |
+| Refreshing validation token | A domain briefly enters the Refreshing validation token state after the Regenerate button is selected. Once a new TXT record challenge is issued, the state changes to Pending. |
+| Internal error | If you see this error, retry by clicking the **Refresh** or **Regenerate** buttons. If you're still experiencing issues, raise a support request. |
+
+> [!NOTE]
+> 1. If the **Regenerate** button doesn't work, delete and recreate the domain.
+> 2. If the domain state doesn't reflect as expected, select the **Refresh** button.
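The validation lifecycle described in the table above can be summarized as a small state machine. The sketch below is a simplification for illustration (it omits the Internal error state and the delete/recreate escape hatch):

```python
# Allowed transitions, derived from the domain validation state table.
VALID_TRANSITIONS = {
    "Submitting": {"Pending"},
    "Pending": {"Approved", "Rejected", "TimeOut"},
    "Rejected": {"Refreshing validation token"},
    "TimeOut": {"Refreshing validation token"},
    "Refreshing validation token": {"Pending"},
    "Approved": {"Pending re-validation"},
    "Pending re-validation": {"Approved", "Refreshing validation token"},
}

def advance(state, new_state):
    """Move the domain to new_state only if the table allows it."""
    if new_state not in VALID_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state!r} to {new_state!r}")
    return new_state
```

For example, a newly added domain goes Submitting to Pending, then to Approved once the TXT record is verified; selecting Regenerate from Rejected or TimeOut passes through Refreshing validation token back to Pending.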
+ ## Associate the custom domain with your Front Door Endpoint After you've validated your custom domain, you can then add it to your Azure Front Door Standard/Premium endpoint.
-1. Once custom domain is validated, you can associate it to an existing Azure Front Door Standard/Premium endpoint and route. Select the **Endpoint association** link to open the **Associate endpoint and routes** page. Select an endpoint and routes you want to associate with. Then select **Associate**. Close the page once the associate operation completes.
-1. Once the custom domain is validated, you can associate it with an existing Azure Front Door endpoint and route. Select the **Unassociated** link to open the **Associate endpoint and routes** page. Select an endpoint and the routes you want to associate with. Then select **Associate**. Close the page once the associate operation completes.
:::image type="content" source="../media/how-to-add-custom-domain/associate-endpoint-routes.png" alt-text="Screenshot of associate endpoint and routes page.":::
Then lastly, validate that your application content is getting served using a br
## Next steps
-To learn how to enable HTTPS for your custom domain, continue to the next tutorial.
-
-> [!div class="nextstepaction"]
-> [Enable HTTPS for a custom domain](how-to-configure-https-custom-domain.md)
+Learn how to [enable HTTPS for your custom domain](how-to-configure-https-custom-domain.md).
frontdoor How To Cache Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge.md
Title: 'Cache purging in Azure Front Door Standard/Premium (Preview)'
-description: This article helps you understand how to purge cache on an Azure Front Door Standard/Premium.
+ Title: 'Cache purging - Azure Front Door'
+description: This article helps you understand how to purge cache on an Azure Front Door Standard and Premium tier profile.
Previously updated : 02/18/2021 Last updated : 03/18/2022
-# Cache purging in Azure Front Door Standard/Premium (Preview)
+# Cache purging in Azure Front Door
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+Azure Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Azure Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
-Azure Front Door Standard/Premium caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Azure Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
-
-Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached contents from all edge nodes and force them all to retrieve new updated assets. The reason you want to purge cached contents is because you've made new updates to your application or you want to update assets that contain incorrect information.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached contents from all edge nodes and force them all to retrieve new updated assets. The reason you want to purge cached contents is because you've made new updates to your application, or you want to update assets that contain incorrect information.
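The TTL behavior described above can be pictured with a tiny cache model. This is illustrative only and not Front Door's implementation; the clock and fetch function are injected so the sketch stays self-contained:

```python
import time

class EdgeCache:
    """Tiny model of TTL-based edge caching: serve the cached copy
    while its TTL is valid, refetch from the origin once it expires."""

    def __init__(self, fetch_from_origin, ttl_seconds, clock=time.time):
        self.fetch = fetch_from_origin
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}  # url -> (body, fetched_at)

    def get(self, url):
        now = self.clock()
        entry = self.store.get(url)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]          # cache hit: TTL not yet expired
        body = self.fetch(url)       # miss or expired: go back to origin
        self.store[url] = (body, now)
        return body
```

Note that a versioned URL such as `/v2/photo.png` is simply a different cache key, which is why publishing updates under new URLs makes them visible immediately, while purging is what forces expiry of already-cached keys.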
## Prerequisites
-Review [Azure Front Door Caching](../front-door-caching.md) to understand how caching works.
+Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works.
## Configure cache purge
-1. Go to the overview page of the Azure Front Door profile with the assets you want to purge, then select **Purge cache**.
+1. Go to the overview page of the Azure Front Door profile with the assets you want to purge, then select **Purge cache**.
+
+ :::image type="content" source="../media/how-to-cache-purge/cache-purge-button.png" alt-text="Screenshot of the cache purge button on the overview page.":::
- :::image type="content" source="../media/how-to-cache-purge/front-door-cache-purge-1.png" alt-text="Screenshot of cache purge on overview page.":::
+1. Select one or more endpoints and enter the domain and/or subdomains you want to purge from the edge nodes.
-1. Select the endpoint and domain you want to purge from the edge nodes. *(You may select more than one domains)*
+ > [!IMPORTANT]
   > Cache purge for wildcard domains isn't supported; you have to specify a subdomain of the wildcard domain for cache purge. You can add as many single-level subdomains of the wildcard domain as needed. For example, for the wildcard domain `*.afdxgatest.azfdtest.xyz`, you can add subdomains in the form of `contoso.afdxgatest.azfdtest.xyz` or `cart.afdxgatest.azfdtest.xyz`, and so on. For more information, see [Wildcard domains in Azure Front Door](../front-door-wildcard-domain.md).
- :::image type="content" source="../media/how-to-cache-purge/front-door-cache-purge-2.png" alt-text="Screenshot of cache purge page.":::
+ :::image type="content" source="../media/how-to-cache-purge/purge-cache-page.png" alt-text="Screenshot of the purge cache page.":::
1. To clear all assets, select **Purge all assets for the selected domains**. Otherwise, in **Paths**, enter the path of each asset you want to purge.
These formats are supported in the lists of paths to purge:
* **Single path purge**: Purge individual assets by specifying the full path of the asset (without the protocol and domain), with the file extension, for example, /pictures/strasbourg.png.
* **Root domain purge**: Purge the root of the endpoint with "/*" in the path.
-Cache purges on the Azure Front Door Standard/Preium are case-insensitive. Additionally, they're query string agnostic, meaning purging a URL will purge all query-string variations of it.
+Cache purges on the Azure Front Door profile are case-insensitive. Additionally, they're query string agnostic, which means purging a URL purges all of its query-string variations.
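Because purges are case-insensitive and query string agnostic, a purge path effectively matches a normalized form of the cached asset's URL. An illustrative sketch of that matching (the helper names are hypothetical, not part of the service):

```python
from urllib.parse import urlsplit

def normalize(path):
    """Lowercase the path and drop any query string, mirroring the
    case-insensitive, query-string-agnostic purge behavior."""
    return urlsplit(path).path.lower()

def purge_matches(purge_path, asset_path):
    asset = normalize(asset_path)
    purge = purge_path.lower()
    if purge.endswith("/*"):               # root/wildcard purge
        return asset.startswith(purge[:-1])
    return asset == purge                  # single path purge
```

So purging `/pictures/strasbourg.png` also evicts `/Pictures/Strasbourg.PNG?width=200`, and purging `/*` evicts every path on the selected domain.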
## Next steps
-Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
+Learn how to [create an Azure Front Door profile](../create-front-door-portal.md).
frontdoor How To Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-compression.md
Title: Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)
+ Title: Improve performance by compressing files in Azure Front Door
description: Learn how to improve file transfer speed and increase page-load performance by compressing your files in Azure Front Door. Previously updated : 02/18/2021 Last updated : 03/20/2022
-# Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+# Improve performance by compressing files in Azure Front Door
File compression is an effective method to improve file transfer speed and increase page-load performance. The compression reduces the size of the file before it's sent by the server. File compression can reduce bandwidth costs and provide a better experience for your users.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
There are two ways to enable file compression:

- Enabling compression on your origin server. Azure Front Door passes along the compressed files and delivers them to clients that request them.
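To see why compression helps, here's a small illustration using Python's standard `gzip` module; the page content is made-up sample data, and real savings depend on how compressible your assets are:

```python
import gzip

def compressed_size(body: bytes) -> int:
    """Bytes on the wire when the response body is gzip-compressed,
    as it would be for a client sending Accept-Encoding: gzip."""
    return len(gzip.compress(body))

# Hypothetical, repetitive HTML page: highly compressible.
page = b"<html>" + b"<p>hello from azure front door</p>" * 200 + b"</html>"
```

Text-like assets (HTML, CSS, JavaScript, JSON) typically shrink dramatically; already-compressed formats such as JPEG or ZIP see little or no benefit.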
frontdoor How To Configure Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-endpoint-manager.md
To create an Azure Front Door profile, see [Create a new Azure Front Door Standa
1. Select **Endpoint Manager**. Then select **Add an Endpoint** to create a new Endpoint.
- :::image type="content" source="../media/how-to-configure-endpoint-manager/select-create-endpoint.png" alt-text="Screenshot of add an endpoint through Endpoint Manager.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/select-create-endpoint.png" alt-text="Screenshot of add an endpoint through Endpoint Manager.":::
1. On the **Add an endpoint** page, enter, and select the following settings.
- :::image type="content" source="../media/how-to-configure-endpoint-manager/create-endpoint-page.png" alt-text="Screenshot of add an endpoint page.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/create-endpoint-page.png" alt-text="Screenshot of add an endpoint page.":::
| Settings | Value | | -- | -- |
To create an Azure Front Door profile, see [Create a new Azure Front Door Standa
1. On the **Edit Endpoint** page, select **+ Add** under Domains.
- :::image type="content" source="../media/how-to-configure-endpoint-manager/select-add-domain.png" alt-text="Screenshot of select domain on Edit Endpoint page.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/select-add-domain.png" alt-text="Screenshot of select domain on Edit Endpoint page.":::
### Add Domain 1. On the **Add Domain** page, choose to associate a domain *from your Azure Front Door profile* or *add a new domain*. For information about how to create a brand new domain, see [Create a new Azure Front Door Standard/Premium custom domain](how-to-add-custom-domain.md).
- :::image type="content" source="../media/how-to-configure-endpoint-manager/add-domain-page.png" alt-text="Screenshot of Add a domain page.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/add-domain-page.png" alt-text="Screenshot of Add a domain page.":::
1. Select **Add** to add the domain to current endpoint. The selected domain should appear within the Domain panel.
- :::image type="content" source="../media/how-to-configure-endpoint-manager/domain-in-domainview.png" alt-text="Screenshot of domains in domain view.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/domain-in-domainview.png" alt-text="Screenshot of domains in domain view.":::
### Add Origin Group 1. Select **Add** at the Origin groups view. The **Add an origin group** page appears
- :::image type="content" source="../media/how-to-configure-endpoint-manager/add-origin-group-view.png" alt-text="Screenshot of add an origin group page":::
+ :::image type="content" source="../media/how-to-configure-endpoints/add-origin-group-view.png" alt-text="Screenshot of add an origin group page":::
1. For **Name**, enter a unique name for the new origin group
Load-balancing settings for the origin group define how we evaluate health probe
Select **Add** to add the origin group to current endpoint. The origin group should appear within the Origin group panel ### Add Route
Select **Add** at the Routes view, the **Add a route** page appears. For inform
1. Select **Add** at the Security view, The **Add a WAF policy** page appears
- :::image type="content" source="../media/how-to-configure-endpoint-manager/add-waf-policy-page.png" alt-text="Screenshot of add a WAF policy page.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/add-waf-policy-page.png" alt-text="Screenshot of add a WAF policy page.":::
1. **WAF Policy**: select a WAF policy you like apply for the selected domain within this endpoint. Select **Create New** to create a brand new WAF policy.
- :::image type="content" source="../media/how-to-configure-endpoint-manager/create-new-waf-policy.png" alt-text="Screenshot of create a new WAF policy.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/create-new-waf-policy.png" alt-text="Screenshot of create a new WAF policy.":::
**Name**: enter a unique name for the new WAF policy. You could edit this policy with more configuration from the Web Application Firewall page.
Select **Add** at the Routes view, the **Add a route** page appears. For inform
1. Select **Add** button. The WAF policy should appear within the Security panel
- :::image type="content" source="../media/how-to-configure-endpoint-manager/waf-in-security-view.png" alt-text="Screenshot of WAF policy in security view.":::
+ :::image type="content" source="../media/how-to-configure-endpoints/waf-in-security-view.png" alt-text="Screenshot of WAF policy in security view.":::
## Clean up resources

To delete an endpoint when it's no longer needed, select **Delete Endpoint** at the end of the endpoint row.

## Next steps
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Title: Configure HTTPS for your custom domain in an Azure Front Door Standard/Premium SKU configuration
-description: In this article, you'll learn how to onboard a custom domain to Azure Front Door Standard/Premium SKU.
+ Title: 'Configure HTTPS for your custom domain - Azure Front Door'
+description: In this article, you'll learn how to configure HTTPS on an Azure Front Door custom domain.
Previously updated : 12/06/2021 Last updated : 03/18/2022 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
-# Configure HTTPS on a Front Door Standard/Premium SKU (Preview) custom domain using the Azure portal
+# Configure HTTPS on an Azure Front Door custom domain using the Azure portal
-> [!NOTE]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-Azure Front Door Standard/Premium enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data get delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-Azure Front Door Standard/Premium supports both Azure managed certificate and customer-managed certificates. Azure Front Door by default automatically enables HTTPS to all your custom domains using Azure managed certificates. No extra steps are required for getting an Azure managed certificate. A certificate is created during the domain validation process. You can also use your own certificate by integrating Azure Front Door Standard/Premium with your Key Vault.
+Azure Front Door enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data gets delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a website via HTTPS, it validates the website's security certificate and verifies that it was issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure Front Door supports both Azure managed certificates and customer-managed certificates. By default, Azure Front Door automatically enables HTTPS on all your custom domains using Azure managed certificates. No extra steps are required to get an Azure managed certificate. A certificate is created during the domain validation process. You can also use your own certificate by integrating Azure Front Door Standard/Premium with your Key Vault.
## Prerequisites
-* Before you can configure HTTPS for your custom domain, you must first create an Azure Front Door Standard/Premium profile. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium profile](create-front-door-portal.md).
+* Before you can configure HTTPS for your custom domain, you must first create an Azure Front Door profile. For more information, see [Create an Azure Front Door profile](../create-front-door-portal.md).
* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).
Azure Front Door Standard/Premium supports both Azure managed certificate and cu
## Azure managed certificates
-1. Under Settings for your Azure Front Door Standard/Premium profile, select **Domains** and then select **+ Add** to add a new domain.
+1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
:::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
Azure Front Door Standard/Premium supports both Azure managed certificate and cu
## Using your own certificate
-You can also choose to use your own TLS certificate. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate are not guaranteed to work as expected. This certificate must be imported into an Azure Key Vault before you can use it with Azure Front Door Standard/Premium. See [import a certificate](../../key-vault/certificates/tutorial-import-certificate.md) to Azure Key Vault.
+You can also choose to use your own TLS certificate. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected. This certificate must be imported into an Azure Key Vault before you can use it with Azure Front Door Standard/Premium. See how to [import a certificate](../../key-vault/certificates/tutorial-import-certificate.md) to Azure Key Vault.
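Before importing your certificate into Key Vault, you can sanity-check the chain locally. This is a hedged sketch, not part of the official guidance; the angle-bracket file names are placeholders you'd replace with your own:

```shell
# Verify that the leaf certificate chains up to the root through the
# intermediates. All three file names (<root>.pem, <intermediates>.pem,
# <leaf>.pem) are placeholders.
openssl verify -CAfile <root>.pem -untrusted <intermediates>.pem <leaf>.pem

# Inspect the full chain bundled inside a PFX before importing it into
# Azure Key Vault. <certificate>.pfx is a placeholder.
openssl pkcs12 -in <certificate>.pfx -nokeys -info
```

If `openssl verify` doesn't print `OK`, the chain is incomplete and requests involving that certificate may not work as expected.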
#### Prepare your Azure Key vault account and certificate
Grant Azure Front Door permission to access the certificates in your Azure Key
1. In your key vault account, under SETTINGS, select **Access policies**. Then select **Add new** to create a new policy.
-1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and choose **Microsoft.AzureFrontDoor-Cdn**. Click **Select**.
+1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, choose **Microsoft.AzureFrontDoor-Cdn**, and then select **Select**.
1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
Grant Azure Front Door permission to access the certificates in your Azure Key
1. Select **OK**.
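The portal steps above can also be scripted. As a sketch with Azure CLI (the vault name is a placeholder), the same access policy can be granted with `az keyvault set-policy`:

```shell
# Grant the Azure Front Door service principal (Microsoft.AzureFrontDoor-Cdn)
# Get permission on secrets, so Front Door can retrieve the certificate.
# <vault-name> is a placeholder for your Key Vault name.
az keyvault set-policy \
  --name <vault-name> \
  --spn 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 \
  --secret-permissions get
```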
+> [!NOTE]
+> If your Azure Key Vault is protected with a firewall, make sure to allow Azure Front Door access to your Azure Key Vault account.
+ #### Select the certificate for Azure Front Door to deploy
+
+1. Return to your Azure Front Door Standard/Premium in the portal.
-1. Navigate to **Secrets** under *Settings* and select **Add certificate**.
+1. Navigate to **Secrets** under *Settings* and select **+ Add certificate**.
:::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate.png" alt-text="Screenshot of Azure Front Door secret landing page.":::
Grant Azure Front Door permission to access the certificates in your Azure Key
1. Follow the on-screen steps to validate the certificate. Then associate the newly created custom domain to an endpoint as outlined in [creating a custom domain](how-to-add-custom-domain.md) guide.
-#### Change from Azure managed to Bring Your Own Certificate (BYOC)
+## Certificate renewal and changing certificate types
+
+### Azure managed certificate
+
+An Azure managed certificate is rotated automatically when your custom domain has a CNAME record pointing to an Azure Front Door Standard or Premium endpoint. Auto rotation won't happen in the two scenarios below:
+
+* If the custom domain CNAME record is pointing to other DNS resources
+
+* If your custom domain points to Azure Front Door through a long CNAME chain. For example, when you put Azure Traffic Manager or another CDN provider in front of Azure Front Door, the chain is contoso.com CNAME to `contoso.trafficmanager.net` CNAME to `contoso.z01.azurefd.net`.
+
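The long-chain scenario above can be sketched as DNS zone records (hostnames taken from the example in the text):

```
; contoso.com reaches Front Door only through an intermediate CNAME,
; so the managed certificate won't be auto-rotated in this setup.
contoso.com.                 IN CNAME  contoso.trafficmanager.net.
contoso.trafficmanager.net.  IN CNAME  contoso.z01.azurefd.net.
```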
+The domain validation state will become 'Pending Revalidation' 45 days before the managed certificate expires, or 'Rejected' if the managed certificate issuance is rejected by the certificate authority. Refer to [Add a custom domain](how-to-add-custom-domain.md#domain-validation-state) for the actions to take for each domain state.
+
+### Use your own certificate
+
+For the certificate to be rotated automatically to the latest version when a newer version is available in your Key Vault, set the secret version to 'Latest'. If you select a specific version, you have to reselect the new version manually to rotate the certificate. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
+
+If you want to change the secret version from 'Latest' to a specified version, or vice versa, add a new certificate.
+
+##### How to switch between certificate types
1. You can change an existing Azure managed certificate to a user-managed certificate by selecting the certificate state to open the **Certificate details** page.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot of certificate state on domains landing page." lightbox="../media/how-to-configure-https-custom-domain/domain-certificate-expanded.png":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot of certificate state on domains landing page.":::
+
+1. On the **Certificate details** page, you can change between *Azure managed* and
+*Bring Your Own Certificate (BYOC)*. Then follow the same steps as earlier to choose a certificate. Select **Update** to change the certificate associated with the domain.
-1. On the **Certificate details** page, you can change from "Azure managed" to "Bring Your Own Certificate (BYOC)" option. Then follow the same steps as earlier to choose a certificate. Select **Update** to change the associated certificate with a domain.
+ > [!NOTE]
+ > It may take up to an hour for the new certificate to be deployed when you switch between certificate types.
+ >
:::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot of certificate details page.":::
frontdoor How To Configure Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-rule-set.md
Title: 'Azure Front Door: Configure Front Door Rule Set'
-description: This article provides guidance on how to configure a Rule Set.
+ Title: 'Configure a Rule set - Azure Front Door'
+description: This article provides guidance on how to configure a Rule set you can use in an Azure Front Door profile.
Previously updated : 02/18/2021 Last updated : 03/17/2022
-# Configure a Rule Set with Azure Front Door Standard/Premium (Preview)
+# Configure a Rule set with Azure Front Door
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article shows how to create a Rule Set and your first set of rules in the Azure portal. You'll then learn how to associate the Rule Set to a route from the Rule Set page or from Endpoint Manager.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article shows how to create a Rule set and your first set of rules using the Azure portal. You'll then learn how to associate the Rule set to a route from the Rule set page or from the Front Door manager.
## Prerequisites
-* Before you can configure a Rule Set, you must first create an Azure Front Door Standard/Premium. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium profile](create-front-door-portal.md).
+* Before you can configure a Rule set, you must first create an Azure Front Door Standard/Premium profile. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium profile](../create-front-door-portal.md).
-## Configure Rule Set in Azure portal
+## Configure Rule set in Azure portal
-1. Within your Front Door profile, select **Rule Set** located under **Settings**. Select **Add** and give it a rule set name.
+1. Within your Azure Front Door profile, select **Rule set** located under **Settings**. Select **+ Add**, then give the rule set a name.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-create-rule-set-1.png" alt-text="Screenshot of rule set landing page.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/create-rule-set.png" alt-text="Screenshot of rule set landing page.":::
-1. Select **Add Rule** to create your first rule. Give it a rule name. Then, select **Add condition** or **Add action** to define your rule. You can add up to 10 conditions and 5 actions for one rule. In this example, we use server variable to add a response header 8Geo-country* for requests that include *contoso* in the URL.
+1. To create your first rule, give it a rule name. Then select **+ Add condition** and **+ Add action** to define your rule. You can add up to 10 conditions and 5 actions for one rule. In this example, we use a server variable to append "Device type" to the response header for requests that come from a "Mobile" device type.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-create-rule-set.png" alt-text="Screenshot of rule set configuration page.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/rule-set-configurations.png" alt-text="Screenshot of rule set configuration page.":::
> [!NOTE]
> * To delete a condition or action from a rule, use the trash can on the right-hand side of the specific condition or action.
This article shows how to create a Rule Set and your first set of rules in the A
1. You can determine the priority of the rules within your Rule Set by using the arrow buttons to move the rules higher or lower in priority. The list is in ascending order, so the most important rule is listed first.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-rule-set-change-orders.png" alt-text="Screenshot of rule set priority." lightbox="../media/how-to-configure-rule-set/front-door-rule-set-change-orders-expanded.png":::
+ :::image type="content" source="../media/how-to-configure-rule-set/rule-set-change-orders.png" alt-text="Screenshot of rule set priority." lightbox="../media/how-to-configure-rule-set/rule-set-change-orders-expanded.png":::
> [!TIP]
> If you'd like to verify when the changes are propagated to Azure Front Door, you can create a custom response header in the rule using the example below. You can add a response header `_X-<RuleName>-Version_` and change the value each time the rule is updated.
This article shows how to create a Rule Set and your first set of rules in the A
> After the changes are updated, you can go to the URL to confirm the rule version being invoked:
> :::image type="content" source="./../media/front-door-rules-engine/version-output.png" alt-text="Screenshot of custom header version output.":::
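One quick way to check the version header from the tip above is with curl. This is a sketch; the endpoint hostname and rule name are placeholders:

```shell
# Fetch only the response headers and look for the version header set by
# the rule. <endpoint-name> and <RuleName> are placeholders.
curl -sI https://<endpoint-name>.z01.azurefd.net/ | grep -i "X-<RuleName>-Version"
```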
-1. Once you've created one or more rules select **Save** to complete the creation of your Rule Set.
+1. Once you've created all the rules you need, select **Save** to complete the creation of your Rule set.
-1. Now associate the Rule Set to a Route so it can take effect. You can associate the Rules Set through Rule Set page or you can go to Endpoint Manager to create the association.
+1. Now you can associate the Rule set to a route so it can take effect. You can associate the Rule set on the Rule set page or you can do so from the Front Door manager.
- **Rule Set page**:
+ **Rule set page**:
- 1. Select the Rule Set to be associated.
+ 1. From the *Rule set page*, select the **Unassociated** link to associate the Rule set to a route.
- 1. Select the *Unassociated* link.
+ :::image type="content" source="../media/how-to-configure-rule-set/associate-rule-set.png" alt-text="Screenshot of unassociated rule set on Rule set page.":::
- 1. Then in the **Associate a route** page, select the endpoint and route you want to associate with the Rule Set.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set.png" alt-text="Screenshot of create a route page.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/associate-rule-set-route.png" alt-text="Screenshot of create a route page.":::
- 1. Select *Next* to change rule set orders if there are multiple rule sets under selected route. Rule set will be executed from top to down. You can change orders by selecting the rule set and move it up or down. Then select *Associate*.
+ 1. Select **Next** to change the rule set order if you have multiple rule sets for a selected route. The rule sets are processed in the order listed. You can change the order by selecting a rule set and using the buttons at the top of the page. Select **Associate** to complete the route association.
> [!Note]
- > You can only associate one rule set with a single route on this page. To associate a Rule Set with multiple routes, please use Endpoint Manager.
+ > You can only associate one rule set with a single route on this page. To associate a Rule set with multiple routes, use the Front Door manager.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-2.png" alt-text="Screenshot of rule set orders.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/rule-set-orders.png" alt-text="Screenshot of rule set orders.":::
- 1. The rule set is now associated with a route. You can look at the response header and see the Geo-country is added.
+ 1. The rule set is now associated with a route. You can look at the response header and confirm that the Device Type is added.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-3.png" alt-text="Screenshot of rule associated with a route.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/rule-set-associated.png" alt-text="Screenshot of rule associated with a route.":::
- **Endpoint Manager**:
+ **Front Door manager**:
- 1. Go to Endpoint manager, select the endpoint you want to associate with the Rule Set.
+ 1. Go to Front Door manager, select the **...** next to the route you want to configure. Then select **Edit route**.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-1.png" alt-text="Screenshot of selecting endpoint in Endpoint Manager." lightbox="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-1-expanded.png":::
-
- 1. Select *Edit endpoint*.
-
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-2.png" alt-text="Screenshot of selecting edit endpoint in Endpoint Manager." lightbox="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-2-expanded.png":::
-
- 1. Select the Route.
+ :::image type="content" source="../media/how-to-configure-rule-set/manager-edit-route.png" alt-text="Screenshot of edit route from Front Door manager." lightbox="../media/how-to-configure-rule-set/manager-edit-route-expanded.png":::
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-3.png" alt-text="Screenshot of selecting a route.":::
+ 1. On the **Update route** page, under *Rules*, select the Rule sets you want to associate with the route from the dropdown. You can then change the order of the rule sets.
- 1. On the *Update route* page, in *Rules*, select the Rule Sets you want to associate with the route from the dropdown. Then you can change orders by moving rule set up and down.
+ :::image type="content" source="../media/how-to-configure-rule-set/route-rule-set-update.png" alt-text="Screenshot of rule set on update a route page.":::
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-4.png" alt-text="Screenshot of update a route page.":::
-
- 1. Then select *Update* or *Add* to finish the association.
-
-## Delete a Rule Set from your Azure Front Door profile
-
-In the preceding steps, you configured and associated a Rule Set to your Route. If you no longer want the Rule Set associated to your Front Door, you can remove the Rule Set by completing the following steps:
-
-1. Go to the **Rule Set page** under **Settings** to disassociate the Rule Set from all associated routes.
-
-1. Expand the Route, select the three dots. Then select *Edit the route*.
+ 1. Select **Update** to save the route configuration.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-1.png" alt-text="Screenshot of route expanded in rule set.":::
+## Delete a Rule set
-1. Go to Rules section on the Route page, select the rule set, and select on the *Delete* button.
+If you no longer want the Rule set in your Azure Front Door profile, you can remove the Rule set by completing the following steps:
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-2.png" alt-text="Screenshot of update route page to delete a rule set." lightbox="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-2-expanded.png":::
+1. Go to the **Rule set** page under **Settings**.
-1. Select *Update* and the Rule Set will disassociate from the route.
+1. Select the **...** next to the rule set you want to remove and then select **Disassociate from all routes**.
-1. Repeat steps 2-5 to disassociate other routes that are associated with this rule set until you see the Routes status shows *Unassociated*.
+ :::image type="content" source="../media/how-to-configure-rule-set/disassociate-rule-set.png" alt-text="Screenshot of disassociate all routes button.":::
-1. For Rule Set that is *Unassociated*, you can delete the Rule Set by clicking on the three dots on the right and select *Delete*.
+1. Once the rule set has been disassociated, you can select the **...** again. Select **Delete** and then select **Yes** to confirm you want to delete the rule set.
- :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-3.png" alt-text="Screenshot of how to delete a rule set.":::
+ :::image type="content" source="../media/how-to-configure-rule-set/remove-rule-set.png" alt-text="Screenshot of delete a rule set from Rule set page.":::
-1. The rule set is now deleted.
+1. Repeat steps 2 and 3 to remove any other rule set you have in the Azure Front Door profile.
## Next steps
frontdoor How To Create Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-create-origin.md
To delete an Origin group when you no longer need it, click the **...** and th
To delete an origin when you no longer need it, click the **...** and then select **Delete** from the drop-down.

## Next steps
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
# Connect Azure Front Door Premium to an internal load balancer origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium SKU to connect to your internal load balancer origin using the Azure Private Link service.
+This article will guide you through how to configure Azure Front Door Premium tier to connect to your internal load balancer origin using the Azure Private Link service.
## Prerequisites
-Create a [private link service](../../private-link/create-private-link-service-portal.md).
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers.
## Sign in to Azure
In this section, you'll map the Private Link service to a private endpoint creat
1. Select the origin group you want to enable Private Link for the internal load balancer.
-1. Select **+ Add an origin** to add an internal load balancer origin.
+1. Select **+ Add an origin** to add an internal load balancer origin. The host name must be a valid domain name, IPv4 address, or IPv6 address. There are two ways to select an Azure resource: choose **In my directory** to select your own resources, or **By ID or alias** to connect to someone else's resource by using a resource ID or alias shared with you.
- :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-internal-load-balancer.png" alt-text="Screenshot of enabling private link to an internal load balancer.":::
+ 1. Adding an origin using an IP address:
-1. For **Select an Azure resource**, select **In my directory**. Select or enter the following settings to configure the site you want Azure Front Door Premium to connect with privately.
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-internal-load-balancer-ip.png" alt-text="Screenshot of enabling private link to an internal load balancer using an IP address.":::
+
+ 1. Adding an origin using a domain name:
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-internal-load-balancer-domain-name.png" alt-text="Screenshot of enabling private link to an internal load balancer using a domain name.":::
+
+ 1. Select a private link **By ID or alias**:
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-link-by-alias.png" alt-text="Screenshot of enabling private link to an internal load balancer using an ID or alias":::
+
+1. The table below describes the values to select in each field when you enable Private Link with Azure Front Door. Select or enter the following settings to configure the internal load balancer you want Azure Front Door Premium to connect with privately.
| Setting | Value |
| - | -- |
+ | Name | Enter a name to identify this origin. |
+ | Origin Type | Storage (Azure Blobs) |
+ | Host name | Select the host from the dropdown that you want as an origin. |
+ | Origin host header | You can customize the host header of the origin or leave it as default. |
+ | HTTP port | 80 (default) |
+ | HTTPS port | 443 (default) |
+ | Priority | Different origins can have different priorities to provide primary, secondary, and backup origins. |
+ | Weight | 1000 (default). Assign weights to your different origins when you want to distribute traffic.|
+ | Resource | If you select **In my directory**, specify the ILB resource in your subscription. |
+ | ID/alias | If you select **By ID or alias**, specify the resource ID of the ILB resource you want to enable private link to. |
| Region | Select the region that is the same or closest to your origin. |
- | Resource type | Select **Microsoft.Network/privateLinkServices**. |
- | Resource | Select your Private link tied to the internal load balancer. |
- | Target sub resource | Leave blank. |
| Request message | Customize message or choose the default. |
-1. Then select **Add** and then **Update** to save your configuration.
+1. Then select **Add** and then **Update** to save the origin group settings.
## Approve Azure Front Door Premium private endpoint connection from Private link service
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/overview.png" alt-text="Screenshot of private link overview page.":::
-1. Select the *pending* private endpoint request from Azure Front Door Premium then select **Approve**.
+1. Select the *pending* private endpoint request from Azure Front Door Premium, then select **Approve**. Select **Yes** to confirm you want to create this connection.
:::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-pending-approval.png" alt-text="Screenshot of pending approval for private link.":::
frontdoor How To Enable Private Link Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account.md
Previously updated : 03/04/2021 Last updated : 03/18/2022

# Connect Azure Front Door Premium to a storage account origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium SKU to connect to your storage account origin privately using the Azure Private Link service.
+This article will guide you through how to configure Azure Front Door Premium tier to connect to your storage account origin privately using the Azure Private Link service.
+
+## Prerequisites
+
+* An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web server.
## Sign in to Azure
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-storage-account.png" alt-text="Screenshot of enabling private link to a storage account.":::
-1. For **Select an Azure resource**, select **In my directory**. Select or enter the following settings to configure the site you want Azure Front Door Premium to connect with privately.
+1. The table below describes the values to select in each field when you enable Private Link with Azure Front Door. Select or enter the following settings to configure the storage blob you want Azure Front Door Premium to connect with privately.
| Setting | Value |
| - | -- |
+ | Name | Enter a name to identify this storage blob origin. |
+ | Origin Type | Storage (Azure Blobs) |
+ | Host name | Select the host from the dropdown that you want as an origin. |
+ | Origin host header | You can customize the host header of the origin or leave it as default. |
+ | HTTP port | 80 (default) |
+ | HTTPS port | 443 (default) |
+ | Priority | Different origins can have different priorities to provide primary, secondary, and backup origins. |
+ | Weight | 1000 (default). Assign weights to your different origins when you want to distribute traffic.|
| Region | Select the region that is the same or closest to your origin. |
- | Resource type | Select **Microsoft.Storage/storageAccounts**. |
- | Resource | Select your storage account. |
- | Target sub resource | You can select *blob* or *web*. |
+ | Target sub resource | The type of sub-resource for the resource selected above that your private endpoint will be able to access. You can select *blob* or *web*. |
| Request message | Customize message or choose the default. |
-1. Then select **Add** to save your configuration.
+1. Select **Add** to save your configuration. Then select **Update** to save the origin group settings.
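The origin settings in the table above can also be sketched with the Azure CLI instead of the portal. This is a minimal, hedged sketch: all resource names are placeholders, and the private-link parameters come from the `az afd` command group, so verify them against `az afd origin create --help` before relying on them.

```azurecli
# Sketch: add a storage blob origin with Private Link enabled to an
# existing Azure Front Door Premium origin group (placeholder names).
az afd origin create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoorProfile \
  --origin-group-name myOriginGroup \
  --origin-name myStorageOrigin \
  --host-name mystorageaccount.blob.core.windows.net \
  --origin-host-header mystorageaccount.blob.core.windows.net \
  --http-port 80 --https-port 443 \
  --priority 1 --weight 1000 \
  --enable-private-link true \
  --private-link-resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount \
  --private-link-sub-resource-type blob \
  --private-link-location centralus \
  --private-link-request-message "Please approve this connection."
```

After the command completes, a pending private endpoint request appears on the storage account, which you approve in the next section.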
## Approve private endpoint connection from the storage account
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
Title: 'Connect Azure Front Door Premium to an app service origin with Private Link'
+ Title: 'Connect Azure Front Door Premium to an App Service origin with Private Link'
description: Learn how to connect your Azure Front Door Premium to a webapp privately. Previously updated : 02/18/2021 Last updated : 03/18/2022
-# Connect Azure Front Door Premium to a App Service origin with Private Link
+# Connect Azure Front Door Premium to an App Service origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium SKU to connect to your App Service privately using the Azure Private Link service.
+This article will guide you through how to configure the Azure Front Door Premium tier to connect to your App Service privately using the Azure Private Link service.
## Prerequisites
In this section, you'll map the Private Link service to a private endpoint creat
1. Select the origin group that contains the App Service origin you want to enable Private Link for.
-1. Select **+ Add an origin** to add a new app service origin or select a previously created app service origin from the list.
+1. Select **+ Add an origin** to add a new App Service origin or select a previously created App Service origin from the list.
- :::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-web-app.png" alt-text="Screenshot of enabling private link to a Web App.":::
+ :::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-app-service.png" alt-text="Screenshot of enabling private link to a Web App.":::
-1. For **Select an Azure resource**, select **In my directory**. Select or enter the following settings to configure the site you want Azure Front Door Premium to connect with privately.
+1. The following table lists the values to select in each field when you enable Private Link with Azure Front Door. Select or enter these settings to configure the App Service you want Azure Front Door Premium to connect with privately.
| Setting | Value |
| - | -- |
+ | Name | Enter a name to identify this App Service origin. |
+ | Origin Type | App services |
+ | Host name | Select the host from the dropdown that you want as an origin. |
+ | Origin host header | You can customize the host header of the origin or leave it as default. |
+ | HTTP port | 80 (default) |
+ | HTTPS port | 443 (default) |
+ | Priority | Different origins can have different priorities to provide primary, secondary, and backup origins. |
+ | Weight | 1000 (default). Assign weights to your different origins when you want to distribute traffic.|
| Region | Select the region that is the same or closest to your origin. |
- | Resource type | Select **Microsoft.Web/sites**. |
- | Resource | Select **myPrivateLinkService**. |
- | Target sub resource | sites |
+ | Target sub resource | The type of sub-resource for the resource selected above that your private endpoint will be able to access. You can select *sites*. |
| Request message | Customize message or choose the default. |
-1. Then select **Add** to save your configuration.
+1. Select **Add** to save your configuration. Then select **Update** to save the origin group settings.
## Approve Azure Front Door Premium private endpoint connection from App Service
In this section, you'll map the Private Link service to a private endpoint creat
1. In **Networking**, select **Configure your private endpoint connections**.
- :::image type="content" source="../media/how-to-enable-private-link-web-app/web-app-configure-endpoint.png" alt-text="Screenshot of networking settings in a Web App.":::
+ :::image type="content" source="../media/how-to-enable-private-link-app-service/app-service-configure-endpoint.png" alt-text="Screenshot of networking settings in a Web App.":::
1. Select the *pending* private endpoint request from Azure Front Door Premium then select **Approve**.
- :::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-pending-approval.png" alt-text="Screenshot of pending private endpoint request.":::
+ :::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-pending-approval.png" alt-text="Screenshot of pending private endpoint request.":::
1. Once approved, it should look like the screenshot below. It takes a few minutes for the connection to fully establish. You can now access your App Service from Azure Front Door Premium. Direct access to the App Service from the public internet is disabled after the private endpoint is enabled.
- :::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-approved.png" alt-text="Screenshot of approved endpoint request.":::
+ :::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-approved.png" alt-text="Screenshot of approved endpoint request.":::
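If you'd rather approve the pending connection from the command line than the portal, the `az network private-endpoint-connection` commands work across resource types. This is a sketch with placeholder names; the connection name comes from the `list` output.

```azurecli
# Sketch: find and approve the pending private endpoint connection
# on the App Service (placeholder names).
az network private-endpoint-connection list \
  --name myWebApp \
  --resource-group myResourceGroup \
  --type Microsoft.Web/sites

az network private-endpoint-connection approve \
  --name <connection-name> \
  --resource-name myWebApp \
  --resource-group myResourceGroup \
  --type Microsoft.Web/sites \
  --description "Approved for Azure Front Door Premium"
```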
+
+## Next steps
+
+Learn about [Private Link service with App Service](../../app-service/networking/private-endpoint.md).
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Title: 'Azure Front Door Standard/Premium (Preview) Logging'
-description: This article explains how logging works in Azure Front Door Standard/Premium.
+ Title: 'Logs - Azure Front Door'
+description: This article explains how Azure Front Door tracks and monitors your environment with logs.
- Previously updated : 08/26/2021+ Last updated : 03/20/2022
-# Azure Front Door Standard/Premium (Preview) Logging
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+# Azure Front Door logs
Azure Front Door provides different logging to help you track, monitor, and debug your Front Door.
Azure Front Door provides different logging to help you track, monitor, and debu
* Health probe logs provide the logs for every failed probe to your origin.
* Web Application Firewall (WAF) logs provide detailed information about requests that get logged through either detection or prevention mode of an Azure Front Door endpoint. A custom domain that gets configured with WAF can also be viewed through these logs.
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Access logs, health probe logs, and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can be delayed by up to a few minutes.
You have three options for storing your logs:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for Azure Front Door Standard/Premium and select the Azure Front Door profile.
+1. Search for Azure Front Door and select the Azure Front Door profile.
1. In the profile, go to **Monitoring**, select **Diagnostic Setting**. Select **Add diagnostic setting**.
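The diagnostic setting created in the portal above can also be sketched with the Azure CLI. All names are placeholders, and the log category names are assumed from the Front Door Standard/Premium log schema; confirm them with `az monitor diagnostic-settings categories list` against your profile.

```azurecli
# Sketch: send access, health probe, and WAF logs to a Log Analytics
# workspace (placeholder names; category names assumed).
az monitor diagnostic-settings create \
  --name frontdoor-logs \
  --resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cdn/profiles/myFrontDoorProfile \
  --workspace myLogAnalyticsWorkspace \
  --logs '[
    {"category": "FrontDoorAccessLog", "enabled": true},
    {"category": "FrontDoorHealthProbeLog", "enabled": true},
    {"category": "FrontDoorWebApplicationFirewallLog", "enabled": true}
  ]'
```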
To view activity logs:
## Next steps
-- Learn about [Azure Front Door Standard/Premium (Preview) Reports](how-to-reports.md).
-- Learn about [Azure Front Door Standard/Premium (Preview) real time monitoring metrics](how-to-monitor-metrics.md).
+- Learn about [Azure Front Door reports](how-to-reports.md).
+- Learn about [Azure Front Door real time monitoring metrics](how-to-monitor-metrics.md).
frontdoor How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-monitor-metrics.md
Title: Monitoring metrics for Azure Front Door Standard/Premium
-description: This article describes the Azure Front Door Standard/Premium monitoring metrics.
+ Title: Monitoring metrics for Azure Front Door
+description: This article describes the Azure Front Door monitoring metrics.
Previously updated : 02/18/2021 Last updated : 03/20/2022
-# Real-time Monitoring in Azure Front Door Standard/Premium
+# Real-time Monitoring in Azure Front Door
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+Azure Front Door is integrated with Azure Monitor and has 11 metrics to help monitor Azure Front Door in real-time to track, troubleshoot, and debug issues.
-Azure Front Door Standard/Premium is integrated with Azure Monitor and has 11 metrics to help monitor Azure Front Door Standard/Premium in real-time to track, troubleshoot, and debug issues.
-
-Azure Front Door Standard/Premium measures and sends its metrics in 60-second intervals. The metrics can take up to 3 mins to appear in the portal. Metrics can be displayed in charts or grid of your choice and are accessible via portal, PowerShell, CLI, and API. For more information, seeΓÇ»[Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md).
+Azure Front Door measures and sends its metrics in 60-second intervals. The metrics can take up to 3 minutes to appear in the portal. Metrics can be displayed in charts or a grid of your choice and are accessible via the portal, PowerShell, CLI, and API. For more information, see [Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md).
The default metrics are free of charge. You can enable additional metrics for an extra cost. You can configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
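A threshold alert like the one described above can be sketched with the Azure CLI. This is a hedged example: the resource names are placeholders, and the metric name `Percentage4XX` is assumed; confirm the exact name with `az monitor metrics list-definitions` against your Front Door profile.

```azurecli
# Sketch: alert when the 4XX error rate exceeds 10% over a 5-minute
# window (placeholder names; metric name assumed).
az monitor metrics alert create \
  --name afd-4xx-alert \
  --resource-group myResourceGroup \
  --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cdn/profiles/myFrontDoorProfile \
  --condition "avg Percentage4XX > 10" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "4XX error rate above 10 percent"
```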
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Metrics supported in Azure Front Door Standard/Premium
+## Metrics supported in Azure Front Door
| Metrics | Description | Dimensions | | - | - | - |
You can configure alerts for each metric such as a threshold for 4XXErrorRate or
## Access Metrics in Azure portal
-1. From the Azure portal menu, select **All Resources** >> **\<your-AFD Standard/Premium (Preview) -profile>**.
+1. From the Azure portal menu, select **All Resources** >> **\<your-AFD-profile>**.
2. Under **Monitoring**, select **Metrics**:
Alert will be charged based on Azure Monitor. For more information about alerts,
## Next steps
-- Learn about [Azure Front Door Standard/Premium Reports](how-to-reports.md).
-- Learn about [Azure Front Door Standard/Premium Logs](how-to-logs.md).
+- Learn about [Azure Front Door reports](how-to-reports.md).
+- Learn about [Azure Front Door logs](how-to-logs.md).
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md
Title: 'Azure Front Door Standard/Premium (Preview) Reports'
+ Title: 'Reports - Azure Front Door'
description: This article explains how reporting works in Azure Front Door. -+ Previously updated : 07/07/2021- Last updated : 03/20/2022+
-# Azure Front Door Standard/Premium (Preview) Reports
+# Azure Front Door reports
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Azure Front Door Standard/Premium Analytics Reports provide a built-in and all-around view of how you Azure Front Door behaves along with associated Web Application Firewall metrics. You can also take advantage of Access Logs to do further troubleshooting and debugging. Azure Front Door Analytics reports include traffic reports and security reports.
+Azure Front Door analytics reports provide a built-in and all-around view of how your Azure Front Door behaves along with associated Web Application Firewall metrics. You can also take advantage of Access Logs to do further troubleshooting and debugging. Azure Front Door Analytics reports include traffic reports and security reports.
| Reports | Details | |||
Azure Front Door Standard/Premium Analytics Reports provide a built-in and all-a
| Metrics by dimensions | - Breakdown of matched WAF rules trend by action<br/>- Doughnut chart of events by Rule Set Type and event by rule group<br/>- Break down list of top events by rule ID, countries/regions, IP address, URL, and user agent | > [!NOTE]
-> Security reports is only available with Azure Front Door Premium SKU.
+> Security reports are only available with the Azure Front Door Premium tier.
Most of the reports are based on access logs and are offered free of charge to customers on Azure Front Door. Customers don't have to enable access logs or do any configuration to view these reports. Reports are accessible through the portal and API. CSV download is also supported.
Top URLs allow you to view the amount of traffic incurred over a particular endp
## Top Referrers
-Top Referrers allow customers to view the top 50 referrer that originated the most requests to the contents on a particular endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. Referrer may come from a search engine or other websites. If a user types a URL (for example, http(s)://contoso.com/https://docsupdatetracker.net/index.html) directly into the address line of a browser, the referrer for the requested is "Empty". Top referrers report includes the follow values. You can sort by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected.
+Top Referrers allows customers to view the top 50 referrers that originated the most requests to the contents on a particular endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. A referrer may come from a search engine or other websites. If a user types a URL (for example, http(s)://contoso.com/index.html) directly into the address line of a browser, the referrer for the request is "Empty". The top referrers report includes the following values. You can sort by request count, request %, data transferred, and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected.
* Referrer, the value of Referrer in raw logs
* Request counts
The seven tables are for time, rule ID, countries/regions, IP address, URL, host
## Next steps
-Learn about [Azure Front Door Standard/Premium real time monitoring metrics](how-to-monitor-metrics.md).
+Learn about [Azure Front Door real time monitoring metrics](how-to-monitor-metrics.md).
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/overview.md
- Title: Azure Front Door Standard/Premium| Microsoft Docs
-description: This article provides an overview of Azure Front Door Standard/Premium.
----- Previously updated : 01/27/2022---
-# What is Azure Front Door Standard/Premium (Preview)?
-
-> [!IMPORTANT]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
-
-Azure Front Door Standard/Premium is a fast, reliable, and secure modern cloud Content Delivery Network (CDN) that uses the Microsoft global edge network and integrates with intelligent threat protection. It combines the capabilities of Azure Front Door, Azure CDN standard, and Azure Web Application Firewall (WAF) into a single secure cloud CDN platform.
-
-With Azure Front Door Standard/Premium, you can transform your global consumer and enterprise applications into secure and high-performing personalized modern applications with contents that reach a global audience at the network edge close to the user. It also enables your application to scale out without warm-up while benefitting from the global HTTP load balancing with instant failover.
-
- :::image type="content" source="../media/overview/front-door-overview.png" alt-text="Azure Front Door Standard/Premium architecture" lightbox="../media/overview/front-door-overview-expanded.png":::
-
-Azure Front Door Standard/Premium works at Layer 7 (HTTP/HTTPS layer), by using anycast with split TCP and Microsoft's global network to improve global connectivity. Based on your customized routing method using rules set, you can ensure that Azure Front Door will route your client requests to the fastest and most available origin. An application origin is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
-
-Azure Front Door also protects your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you MicrosoftΓÇÖs best-in-practice security at global scale.
-
->[!NOTE]
-> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
->
-> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../../traffic-manager/traffic-manager-overview.md).
-> * If you want to load balance between your servers in a region at the application layer, review [Application Gateway](../../application-gateway/overview.md)
-> * To do network layer load balancing, review [Load Balancer](../../load-balancer/load-balancer-overview.md).
->
-> Your end-to-end scenarios may benefit from combining these solutions as needed.
-> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Why use Azure Front Door Standard/Premium (Preview)?
-
-Azure Front Door Standard/Premium provides a single unified platform, which caters to both dynamic and static acceleration with built in turnkey security integration, and a simple and predictable pricing model. Front Door also enables you to define, manage, and monitor the global routing for your app.
-
-Key features included with Azure Front Door Standard/Premium (Preview):
--- Accelerate application performance by using [anycast](../front-door-traffic-acceleration.md?pivots=front-door-standard-premium#anycast) and **[split TCP connections](../front-door-traffic-acceleration.md?pivots=front-door-standard-premium#splittcp)**.--- Load balance across **[origins](concept-origin.md)** and use intelligent **[health probe](../front-door-health-probes.md)** monitoring.--- Define your own **[custom domain](how-to-add-custom-domain.md)** with flexible domain validation.--- Secure applications with integrated **[Web Application Firewall (WAF)](../../web-application-firewall/afds/afds-overview.md)**.--- Perform SSL offload and use integrated **[certificate management](how-to-configure-https-custom-domain.md)**.--- Secure your origins with **[Private Link](concept-private-link.md)**. --- Customize traffic routing and optimizations via **[Rule Sets](../front-door-rules-engine.md)**.--- Analyze **[built-in reports](how-to-reports.md)** with an all-in-one dashboard for both Front Door and security patterns.--- **[Monitoring your Front Door traffic in real time](how-to-monitor-metrics.md)**, and configure alerts that integrate with Azure Monitor.--- **[Log each Front Door request](how-to-logs.md)** and failed health probes.--- Natively support end-to-end IPv6 connectivity and the HTTP/2 protocol.-
-## Pricing
-
-Azure Front Door Standard/Premium has two SKUs, Standard and Premium. See [Tier Comparison](tier-comparison.md). For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/).
-
-## What's new?
-
-Subscribe to the RSS feed and view the latest Azure Front Door feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Front%20Door) page.
-
-## Next steps
-
-* Learn how to [create a Front Door](create-front-door-portal.md).
frontdoor Tier Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/tier-comparison.md
Title: Azure Front Door Standard/Premium SKU comparison
-description: This article provides an overview of Azure Front Door Standard and Premium SKU and feature differences between them.
+ Title: Azure Front Door tier comparison
+description: This article provides an overview of Azure Front Door tiers and feature differences between them.
Previously updated : 02/18/2021- Last updated : 03/20/2022+
-# Overview of Azure Front Door Standard/Premium SKU (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-Azure Front Door is offered for 3 different SKUs, [Azure Front Door](../front-door-overview.md), Azure Front Door Standard (Preview), and Azure Front Door Premium (Preview). Azure Front Door Standard/Premium SKUs combines capabilities of Azure Front Door, Azure CDN Standard from Microsoft, Azure WAF into a single secure cloud CDN platform with intelligent threat protection.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-* **Azure Front Door Standard SKU** is:
-
- * Content delivery optimized
- * Offering both static and dynamic content acceleration
- * Global load balancing
- * SSL offload
- * Domain and certificate management
- * Enhanced traffic analytics
- * Basic security capabilities
-
-* **Azure Front Door Premium SKU** builds on capabilities of Standard SKU, and adds:
-
- * Extensive security capabilities across WAF
- * BOT protection
- * Private Link support
- * Integration with Microsoft Threat Intelligence and security analytics.
-
-![Diagram showing a comparison between Front Door SKUs.](../media/tier-comparison/tier-comparison.png)
-
-## Feature comparison
-
-| Feature | Standard | Premium |
-|-|:-:|:|
-| Custom domains | Yes | Yes |
-| SSL Offload | Yes | Yes |
-| Caching | Yes | Yes |
-| Compression | Yes | Yes |
-| Global load balancing | Yes | Yes |
-| Layer 7 routing | Yes | Yes |
-| URL rewrite | Yes | Yes |
-| Rules Engine | Yes | Yes |
-| Private Origin (Private Link) | No | Yes |
-| WAF | Custom Rules only | Yes |
-| Bot Protection | No | Yes |
-| Enhanced Metrics and diagnostics | Yes | Yes |
-| Traffic report | Yes | Yes |
-| Security Report | No | Yes |
+# Overview of Azure Front Door tier
++
+Azure Front Door is offered in two different tiers, Azure Front Door Standard and Azure Front Door Premium. The Standard and Premium tiers combine the capabilities of Azure Front Door (classic), Azure CDN Standard from Microsoft (classic), and Azure WAF into a single secure cloud CDN platform with intelligent threat protection.
++
+## Feature comparison between tiers
+
+| Features and optimization | Standard | Premium | Classic |
+|--|--|--|--|
+| Static file delivery | Yes | Yes | Yes |
+| Dynamic site delivery | Yes | Yes | Yes |
+| Custom domains | Yes - DNS TXT record based domain validation | Yes - DNS TXT record based domain validation | Yes - CNAME based validation |
+| Cache management (purge, rules, and compression) | Yes | Yes | Yes |
+| Origin load balancing | Yes | Yes | Yes |
+| Path based routing | Yes | Yes | Yes |
+| Rules engine | Yes | Yes | Yes |
+| Server variable | Yes | Yes | No |
+| Regular expression in rules engine | Yes | Yes | No |
+| Expanded metrics | Yes | Yes | No |
+| Advanced analytics/built-in reports | Yes | Yes - includes WAF report | No |
+| Raw logs - access logs and WAF logs | Yes | Yes | Yes |
+| Health probe log | Yes | Yes | No |
+| Custom Web Application Firewall (WAF) rules | Yes | Yes | Yes |
+| Microsoft managed rule set | No | Yes | Yes - Only default rule set 1.1 or below |
+| Bot protection | No | Yes | No |
+| Private link support | No | Yes | No |
+| Simplified price (base + usage) | Yes | Yes | No |
+| Azure Policy integration | Yes | Yes | No |
+| Azure Advisor integration | Yes | Yes | No |
## Next steps
frontdoor Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/troubleshoot-compression.md
Title: Troubleshooting file compression in Azure Front Door Standard/Premium
+ Title: Troubleshooting file compression in Azure Front Door
description: Learn how to troubleshoot issues with file compression in Azure Front Door. This article covers several possible causes. Previously updated : 02/18/2020- Last updated : 03/20/2022+
-# Troubleshooting Azure Front Door Standard/Premium file compression
+# Troubleshooting Azure Front Door file compression
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article helps you troubleshoot Azure Front Door Standard/Premium file compression issues.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article helps you troubleshoot Azure Front Door file compression issues.
## Symptom
There are several possible causes, including:
## Troubleshooting steps > [!TIP]
-> As with deploying new endpoints, AFD configuration changes take some time to propagate through the network. Usually, changes are applied within 90 minutes. If this is the first time you've set up compression for your CDN endpoint, you should consider waiting 1-2 hours to be sure the compression settings have propagated to the POPs.
+> As with deploying new endpoints, Azure Front Door configuration changes take some time to propagate through the network. Usually, changes are applied within 90 minutes. If this is the first time you've set up compression for your CDN endpoint, you should consider waiting 1-2 hours to be sure the compression settings have propagated to the POPs.
> ### Verify the request
The **Via** HTTP header indicates to the web server that the request is being pa
* **IIS 6**: Set HcNoCompressionForProxies="FALSE" in the IIS Metabase properties. For more information, see [IIS 6 Compression](/previous-versions/iis/6.0-sdk/ms525390(v=vs.90)).
* **IIS 7 and up**: Set both **noCompressionForHttp10** and **noCompressionForProxies** to *False* in the server configuration. For more information, see [HTTP Compression](https://www.iis.net/configreference/system.webserver/httpcompression).+
+## Next steps
+
+For answers to common Azure Front Door questions, see [Azure Front Door FAQ](../front-door-faq.yml).
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
To export a policy definition from Azure portal, follow these steps:
Policies** button at the bottom of the page. - **Repository filter**: Set to _My repositories_ to see only repositories you own or _All
- repositories_ to see all you granted the GitHub Action access to.
+ repositories_ to see all repositories that you've granted GitHub Actions access to.
- **Repository**: Set to the repository that you want to export the Azure Policy resources to. - **Branch**: Set the branch in the repository. Using a branch other than the default is a good way to validate your updates before merging further into your source code.
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
to trigger an on-demand evaluation scan from your
[GitHub workflow](https://docs.github.com/actions/configuring-and-managing-workflows/configuring-a-workflow#about-workflows) on one or multiple resources, resource groups, or subscriptions, and gate the workflow based on the compliance state of resources. You can also configure the workflow to run at a scheduled time so
-that you get the latest compliance status at a convenient time. Optionally, this GitHub action can
+that you get the latest compliance status at a convenient time. Optionally, this GitHub Actions workflow can
generate a report on the compliance state of scanned resources for further analysis or for archiving.
jobs:
``` For more information and workflow samples, see the
-[GitHub Action for Azure Policy Compliance Scan repo](https://github.com/Azure/policy-compliance-scan).
+[GitHub Actions for Azure Policy Compliance Scan repo](https://github.com/Azure/policy-compliance-scan).
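A scheduled workflow of the kind described above might look roughly like this; a sketch assuming the `azure/policy-compliance-scan` action and an `AZURE_CREDENTIALS` repository secret for `azure/login` (the scope value is illustrative):

```yaml
on:
  schedule:
    - cron: '0 8 * * *'   # run daily; adjust to a convenient time
jobs:
  assess-policy-compliance:
    runs-on: ubuntu-latest
    steps:
      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Check policy compliance
        uses: azure/policy-compliance-scan@v0
        with:
          scopes: |
            /subscriptions/<subscription-id>
```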
#### On-demand evaluation scan - Azure CLI
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
doesn't impact its operation with Azure Policy.
> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy > assignment > - If the template accesses properties on resources outside the scope of the policy assignment
+>
+> Also, changing a policy definition doesn't update the assignment or the associated managed identity.
## Configure policy definition
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-as-code-github.md
To export a policy definition from Azure portal, follow these steps:
Policies** button at the bottom of the page. - **Repository filter**: Set to _My repositories_ to see only repositories you own or _All
- repositories_ to see all you granted the GitHub Action access to.
+ repositories_ to see all repositories that you granted GitHub Actions access to.
- **Repository**: Set to the repository that you want to export the Azure Policy resources to. - **Branch**: Set the branch in the repository. Using a branch other than the default is a good way to validate your updates before merging further into your source code.
you can trigger an on-demand compliance evaluation scan from your
on one or multiple resources, resource groups, or subscriptions, and alter the workflow path based on the compliance state of those resources. You can also configure the workflow to run at a scheduled time to get the latest compliance status at a convenient time. Optionally, this
-GitHub action can also generate a report on the compliance state of scanned resources for further
+GitHub Actions workflow can also generate a report on the compliance state of scanned resources for further
analysis or for archiving. The following example runs a compliance scan for a subscription.
hdinsight Cluster Reboot Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-reboot-vm.md
You can use the **Try it** feature in the API doc to send requests to HDInsight.
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/listHosts?api-version=2018-06-01-preview ```
-1. Restart hosts. After you get the names of the nodes that you want to reboot, restart the nodes by using the REST API to reboot the nodes. The node name follows the pattern of *NodeType(wn/hn/zk/gw/id)* + *x* + *first six characters of cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/2021-06-01/virtual-machines/restart-hosts).
+1. Restart hosts. After you get the names of the nodes that you want to reboot, restart the nodes by using the REST API to reboot the nodes. The node name follows the pattern of *NodeType(wn/hn/zk/gw/ib)* + *x* + *first six characters of cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/2021-06-01/virtual-machines/restart-hosts).
``` POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/restartHosts?api-version=2018-06-01-preview
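The `restartHosts` call above takes the node names as a JSON array in the request body; a sketch (the node name is illustrative, following the naming pattern described earlier):

```json
[
  "wn0-abcdef"
]
```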
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Previously updated : 03/22/2022 Last updated : 03/28/2022
One or more workspaces can be created in a resource group from the Azure portal,
A workspace can't be deleted unless all child service instances within the workspace have been deleted. This feature helps prevent any accidental deletion of service instances. However, when a workspace resource group is deleted, all the workspaces and child service instances within the workspace resource group get deleted.
-Workspace names can be reused in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, workspaces and its child resources can be moved from one subscription to another subscription if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration isn't enabled. Names for FHIR services, DICOM services and MedTech services can be reused in the same or different subscription after deletion if there's no collision with the URLs of any existing services.
+Workspace names can be reused in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, workspaces and their child resources can be moved from one subscription to another if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration isn't enabled. Names for FHIR services, DICOM services, and MedTech services can be reused in the same or a different subscription after deletion if there's no collision with the URLs of any existing services.
## Workspace and Azure region selection
When you create a workspace, it must be configured for an Azure region, which ca
Once the Azure Health Data Services workspace is created, you're now ready to create one or more service instances from the Azure portal. You can create multiple service instances of the same type or different types in one workspace. Within the workspace, you can apply shared configuration settings to child service instances, which are covered in the workspace and configuration settings section.
-[ ![Azure Resource Group](media/azure-resource-group.png) ](media/azure-resource-group.png#lightbox)
+[ ![Screenshot of Health Data Services Azure Resource Group diagram.](media/azure-resource-group.png) ](media/azure-resource-group.png#lightbox)
Additionally, workspaces can be created using Azure Resource Manager deployment templates, a process commonly known as infrastructure as code (IaC). This option offers the ability to customize the ARM templates and complete the workspace creation and service instance creation in a combined step.
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
+
+ Title: 'Quickstart: Create an internal basic load balancer - Azure PowerShell'
+
+description: This quickstart shows how to create an internal basic load balancer using Azure PowerShell
+++ Last updated : 03/24/2022++
+#Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
++
+# Quickstart: Create an internal basic load balancer to load balance VMs using Azure PowerShell
+
+Get started with Azure Load Balancer by using Azure PowerShell to create an internal load balancer and two virtual machines.
+
+>[!NOTE]
+>Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](../skus.md)**.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+
+- Azure PowerShell installed locally or Azure Cloud Shell
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name 'CreateIntLBQS-rg' -Location 'eastus'
+```
+
+## Configure virtual network
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer. Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
+
+* Create a virtual network for the backend virtual machines
+
+* Create a network security group to define inbound connections to your virtual network
+
+* Create an Azure Bastion host to securely manage the virtual machines in the backend pool
+
+### Create virtual network, network security group, and bastion host
+
+* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+
+* Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig)
+
+* Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
+
+```azurepowershell-interactive
+## Create backend subnet config ##
+$subnet = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.1.0.0/24'
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
+
+## Create Azure Bastion subnet. ##
+$bastsubnet = @{
+ Name = 'AzureBastionSubnet'
+ AddressPrefix = '10.1.1.0/24'
+}
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
+
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ AddressPrefix = '10.1.0.0/16'
+ Subnet = $subnetConfig,$bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create public IP address for bastion host. ##
+$ip = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create bastion host ##
+$bastion = @{
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+
+## Create rule for network security group and place in variable. ##
+$nsgrule = @{
+ Name = 'myNSGRuleHTTP'
+ Description = 'Allow HTTP'
+ Protocol = '*'
+ SourcePortRange = '*'
+ DestinationPortRange = '80'
+ SourceAddressPrefix = 'Internet'
+ DestinationAddressPrefix = '*'
+ Access = 'Allow'
+ Priority = '2000'
+ Direction = 'Inbound'
+}
+$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
+
+## Create network security group ##
+$nsg = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ SecurityRules = $rule1
+}
+New-AzNetworkSecurityGroup @nsg
+
+```
+## Create load balancer
+
+This section details how you can create and configure the following components of the load balancer:
+
+* Create a front-end IP with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig) for the frontend IP pool. This IP receives the incoming traffic on the load balancer
+
+* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) for traffic sent from the frontend of the load balancer
+
+* Create a health probe with [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) that determines the health of the backend VM instances
+
+* Create a load balancer rule with [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) that defines how traffic is distributed to the VMs
+
+* Create an internal load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer)
+
+```azurepowershell-interactive
+## Place virtual network created in previous step into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Create load balancer frontend configuration and place in variable. ##
+$lbip = @{
+ Name = 'myFrontEnd'
+ PrivateIpAddress = '10.1.0.4'
+ SubnetId = $vnet.subnets[0].Id
+}
+$feip = New-AzLoadBalancerFrontendIpConfig @lbip
+
+## Create backend address pool configuration and place in variable. ##
+$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
+
+## Create the health probe and place in variable. ##
+$probe = @{
+ Name = 'myHealthProbe'
+ Protocol = 'tcp'
+ Port = '80'
+ IntervalInSeconds = '360'
+ ProbeCount = '5'
+}
+$healthprobe = New-AzLoadBalancerProbeConfig @probe
+
+## Create the load balancer rule and place in variable. ##
+$lbrule = @{
+ Name = 'myHTTPRule'
+ Protocol = 'tcp'
+ FrontendPort = '80'
+ BackendPort = '80'
+ IdleTimeoutInMinutes = '15'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bePool
+}
+$rule = New-AzLoadBalancerRuleConfig @lbrule
+
+## Create the load balancer resource. ##
+$loadbalancer = @{
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Name = 'myLoadBalancer'
+ Location = 'eastus'
+ Sku = 'Basic'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bePool
+ LoadBalancingRule = $rule
+ Probe = $healthprobe
+}
+New-AzLoadBalancer @loadbalancer
+
+```
+
+## Create virtual machines
+
+In this section, you'll create the two virtual machines for the backend pool of the load balancer.
+
+* Create two network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+
+* Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
+
+* Use [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) to create an availability set for the virtual machines
+
+* Create the virtual machines with:
+
+ * [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+ * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
+ * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
+ * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
+ * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+# Set the administrator and password for the VMs. ##
+$cred = Get-Credential
+
+## Place virtual network created in previous step into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the load balancer into a variable. ##
+$lb = @{
+ Name = 'myLoadBalancer'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig
+
+## Place the network security group into a variable. ##
+$sg = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$nsg = Get-AzNetworkSecurityGroup @sg
+
+## Create availability set for the virtual machines. ##
+$set = @{
+ Name = 'myAvailabilitySet'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Sku = 'Aligned'
+ PlatformFaultDomainCount = '2'
+ PlatformUpdateDomainCount = '2'
+}
+$avs = New-AzAvailabilitySet @set
+
+## For loop with variable to create virtual machines for load balancer backend pool. ##
+for ($i=1; $i -le 2; $i++)
+{
+## Command to create network interface for VMs ##
+$nic = @{
+ Name = "myNicVM$i"
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ NetworkSecurityGroup = $nsg
+ LoadBalancerBackendAddressPool = $bepool
+}
+$nicVM = New-AzNetworkInterface @nic
+
+## Create a virtual machine configuration for VMs ##
+$vmsz = @{
+ VMName = "myVM$i"
+ VMSize = 'Standard_DS1_v2'
+ AvailabilitySetId = $avs.Id
+}
+$vmos = @{
+ ComputerName = "myVM$i"
+ Credential = $cred
+}
+$vmimage = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Windows `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine for VMs ##
+$vm = @{
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ VM = $vmConfig
+}
+New-AzVM @vm -AsJob
+}
+```
+
+The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status of the jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
+
+```azurepowershell-interactive
+Get-Job
+
+Id Name PSJobTypeName State HasMoreData Location Command
+-- - - -- -- -- -
+1 Long Running O… AzureLongRunni… Completed True localhost New-AzBastion
+2 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
+3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
+```
++
+## Create the test virtual machine
+
+Create the virtual machine with:
+
+* [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+
+* [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+* [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
+* [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
+* [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
+* [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+# Set the administrator and password for the VM. ##
+$cred = Get-Credential
+
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the network security group into a variable. ##
+$sg = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+}
+$nsg = Get-AzNetworkSecurityGroup @sg
+
+## Command to create network interface for VM ##
+$nic = @{
+ Name = "myNicTestVM"
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ NetworkSecurityGroup = $nsg
+}
+$nicVM = New-AzNetworkInterface @nic
+
+## Create a virtual machine configuration for VMs ##
+$vmsz = @{
+ VMName = "myTestVM"
+ VMSize = 'Standard_DS1_v2'
+}
+$vmos = @{
+ ComputerName = "myTestVM"
+ Credential = $cred
+}
+$vmimage = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Windows `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine for VMs ##
+$vm = @{
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ VM = $vmConfig
+}
+New-AzVM @vm
+```
+
+## Install IIS
+
+Use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) to install the Custom Script Extension.
+
+The extension runs `PowerShell Add-WindowsFeature Web-Server` to install the IIS webserver and then updates the Default.htm page to show the hostname of the VM:
+
+> [!IMPORTANT]
+> Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use `Get-Job` to check the status of the virtual machine deployment jobs.
+
+```azurepowershell-interactive
+## For loop with variable to install custom script extension on virtual machines. ##
+for ($i=1; $i -le 2; $i++)
+{
+$ext = @{
+ Publisher = 'Microsoft.Compute'
+ ExtensionType = 'CustomScriptExtension'
+ ExtensionName = 'IIS'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ VMName = "myVM$i"
+ Location = 'eastus'
+ TypeHandlerVersion = '1.8'
+ SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
+}
+Set-AzVMExtension @ext -AsJob
+}
+```
+
+The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
+
+```azurepowershell-interactive
+Get-Job
+
+Id Name PSJobTypeName State HasMoreData Location Command
+-- - - -- -- -- -
+8 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
+9 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
+```
+
+> [!IMPORTANT]
+> Ensure the custom script extension deployments have completed from the previous steps before proceeding. Use `Get-Job` to check the status of the deployment jobs.
+
+## Test the load balancer
+
+1. [Sign in](https://portal.azure.com) to the Azure portal.
+
+2. Find the private IP address for the load balancer on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer**.
+
+3. Make note of or copy the address next to **Private IP Address** in the **Overview** of **myLoadBalancer**.
+
+4. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myTestVM** that is located in the **CreateIntLBQS-rg** resource group.
+
+5. On the **Overview** page, select **Connect**, then **Bastion**.
+
+6. Enter the username and password entered during VM creation.
+
+7. Open **Internet Explorer** on **myTestVM**.
+
+8. Enter the IP address from the previous step into the address bar of the browser. The default page of the IIS web server is displayed on the browser.
+
+To see the load balancer distribute traffic across both VMs, you can customize the default page of each VM's IIS web server and then force-refresh your web browser from the client machine.
+
+## Clean up resources
+
+When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'CreateIntLBQS-rg'
+```
+
+## Next steps
+
+In this quickstart:
+
+* Created an internal load balancer
+
+* Attached virtual machines
+
+* Configured the load balancer traffic rule and health probe
+
+* Tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](../load-balancer-overview.md)
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
$nsg = @{
New-AzNetworkSecurityGroup @nsg ```
-## Create standard load balancer
+## Create load balancer
This section details how you can create and configure the following components of the load balancer:
Id Name PSJobTypeName State HasMoreData Location
-- - - -- -- -- - 8 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension 9 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
-10 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
``` ## Create the test virtual machine
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
Title: Parameterize load tests with secrets and environment variables
-description: 'Learn how to conduct configurable load tests by using secrets and environment variables as parameters in Azure Load Testing.'
+description: 'Learn how to create configurable load tests by using secrets and environment variables as parameters in Azure Load Testing.'
Previously updated : 11/30/2021 Last updated : 03/22/2022
-# Conduct configurable load tests with secrets and environment variables
+# Create configurable load tests with secrets and environment variables
Learn how to change the behavior of a load test without having to edit the Apache JMeter script. With Azure Load Testing Preview, you can use parameters to make a configurable test script. For example, turn the application endpoint into a parameter to reuse your test script across multiple environments.
The Azure Load Testing service supports two types of parameters:
## <a name="secrets"></a> Configure load tests with secrets
-In this section, you configure your load test to pass secrets to your load test script.
+In this section, you learn how to pass secrets to your load test script in Azure Load Testing. For example, you might use a secret to pass the API key to a web service endpoint that you're load testing. Instead of storing the API key in configuration or hard-coding it in the script, you can save it in a secret store to tightly control access to the secret.
-1. Update the Apache JMeter script to accept and use a secret input parameter. An example of such a parameter is a web service authentication token that you pass into an HTTP header.
+Azure Load Testing enables you to store secrets in Azure Key Vault. Alternatively, when you run your load test in a CI/CD pipeline, you can also use the secret store that's associated with your CI/CD technology, such as Azure Pipelines or GitHub Actions.
-1. Store the secret value in a secret store, which allows you to tightly control access. Azure Load Testing integrates with your Azure key vault, or with the secret store that's linked to your continuous integration and continuous delivery (CI/CD) workflow.
+To use secrets with Azure Load Testing, you perform the following steps:
-1. Configure the load test and pass a reference for the secret to the test script.
+1. Store the secret value in the secret store (Azure Key Vault or the CI/CD secret store).
+1. Pass a reference to the secret into the Apache JMeter test script.
+1. Use the secret value in the Apache JMeter test script by using the `GetSecret` custom function.
-### Use secrets in Apache JMeter
+### <a name="akv_secrets"></a> Use Azure Key Vault to store load test secrets
-In this section, you update the Apache JMeter script to use a secret as an input parameter.
+You can use Azure Key Vault to pass secret values to your test script in Azure Load Testing. You'll add a reference to the secret in the Azure Load Testing configuration. Azure Load Testing then uses this reference to retrieve the secret value in the Apache JMeter script.
-You first define a user-defined variable that retrieves the secret value, and then you can use this variable in the test execution (for example, to set an HTTP request header).
-
-1. Create a user-defined variable in your JMX file, and assign the secret value to it by using the `GetSecret` custom function.
-
- The `GetSecret(<my-secret-name>)` function takes the secret name as an argument. You use this same name when you configure the load test in a later step.
-
- You can create the user-defined variable by using the Apache JMeter IDE, as shown in the following image:
-
- :::image type="content" source="media/how-to-parameterize-load-tests/user-defined-variables.png" alt-text="Screenshot that shows how to add user-defined variables to your Apache JMeter script.":::
-
- Alternatively, you can directly edit the JMX file, as shown in this example code snippet:
-
- ```xml
- <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
- <collectionProp name="Arguments.arguments">
- <elementProp name="appToken" elementType="Argument">
- <stringProp name="Argument.name">udv_appToken</stringProp>
- <stringProp name="Argument.value">${__GetSecret(appToken)}</stringProp>
- <stringProp name="Argument.desc">Value for x-secret header </stringProp>
- <stringProp name="Argument.metadata">=</stringProp>
- </elementProp>
- </collectionProp>
- </Arguments>
- ```
-
-1. Reference the user-defined variable in the test script.
-
- You can use the `${}` syntax to reference the variable in the script. In the following example, you use the `udv_appToken` variable to set an HTTP header.
-
- ```xml
- <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Header Manager" enabled="true">
- <collectionProp name="HeaderManager.headers">
- <elementProp name="" elementType="Header">
- <stringProp name="Header.name">api-key</stringProp>
- <stringProp name="Header.value">${udv_appToken}</stringProp>
- </elementProp>
- </collectionProp>
- </HeaderManager>
- ```
-
-### <a name="akv_secrets"></a> Use your Azure key vault
-
-When you create a load test in the Azure portal, or you use a [YAML test configuration file](./reference-test-config-yaml.md), you'll use a reference to a secret in your Azure key vault.
+You'll also need to grant Azure Load Testing access to your Azure key vault to retrieve the secret value.
> [!NOTE] > If you run a load test as part of your CI/CD process, you might also use the related secret store. Skip to [Use the CI/CD secret store](#cicd_secrets).
-1. [Add the secret to your key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault), if you haven't already done so.
+1. [Add the secret value to your key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault), if you haven't already done so.
-1. Retrieve the key vault secret identifier for your secret. You'll use this secret identifier to configure your load test.
+1. Retrieve the key vault **secret identifier** for your secret. You'll use this secret identifier to configure your load test.
:::image type="content" source="media/how-to-parameterize-load-tests/key-vault-secret.png" alt-text="Screenshot that shows the details of a secret in an Azure key vault.":::
- The secret identifier is the full URI of the secret in the key vault. Optionally, you can also include a version number. For example, `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/abcdef01-2345-6789-0abc-def012345678`.
+ The **secret identifier** is the full URI of the secret in the Azure key vault. Optionally, you can also include a version number. For example, `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/abcdef01-2345-6789-0abc-def012345678`.
1. Grant your Azure Load Testing resource access to the key vault.-
- Your Azure Load Testing resource doesn't have permission to retrieve secrets from the key vault. You'll first enable a system-assigned managed identity for your Load Testing resource. Then, you'll grant read permissions to this managed identity.
+
+ To retrieve the secret from your Azure key vault, you need to grant read permission to your Azure Load Testing resource. To do so, you first specify an identity for your load testing resource. Azure Load Testing can use a system-assigned or user-assigned managed identity.
To provide Azure Load Testing access to your key vault, see [Use managed identities for Azure Load Testing](how-to-use-a-managed-identity.md). 1. Reference the secret in the load test configuration.
- You define a load test secret parameter for each secret that you reference in the Apache JMeter script. The parameter name should match the name you used in the test script. The secret parameter value is the key vault security identifier.
+ You define a load test secret parameter for each secret that you reference in the Apache JMeter script. The parameter name should match the secret name that you use in the Apache JMeter test script. The parameter value is the key vault security identifier.
- You can specify secret parameters by doing either of the following:
+ You can specify secret parameters by doing either of the following:
* In the Azure portal, select your load test, select **Configure**, select the **Parameters** tab, and then enter the parameter details. :::image type="content" source="media/how-to-parameterize-load-tests/test-creation-secrets.png" alt-text="Screenshot that shows where to add secret details to a load test in the Azure portal.":::
- * Alternatively, you can specify a secret in the YAML configuration file. For more information about the syntax, see the [Test configuration YAML reference](./reference-test-config-yaml.md).
+ * If you're configuring a CI/CD workflow and use Azure Key Vault, you can specify a secret in the YAML configuration file by using the `secrets` property. For more information about the syntax, see the [Test configuration YAML reference](./reference-test-config-yaml.md).
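For example, the `secrets` section of the load test configuration YAML might look like the following sketch (the parameter name and vault URI are illustrative placeholders):

```yaml
# Each entry maps a secret parameter name (as referenced in the
# Apache JMeter script) to the key vault secret identifier.
secrets:
- name: appToken
  value: https://myvault.vault.azure.net/secrets/appToken
```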
+
+1. Specify the identity that Azure Load Testing uses to access your secrets in Azure Key Vault.
+
+ The identity can be the system-assigned identity of the load testing resource, or one of the user-assigned identities. Make sure you use the same identity that you previously granted access to your key vault.
+
+ You can specify the key vault reference identity by doing either of the following:
+
+ * In the Azure portal, select your load test, select **Configure**, select the **Parameters** tab, and then configure the **Key Vault reference identity**.
+
+ :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-reference-identity.png" alt-text="Screenshot that shows how to select key vault reference identity.":::
-### <a name="cicd_secrets"></a> Use the CI/CD secret store
+ * If you're configuring a CI/CD workflow and use Azure Key Vault, you can specify the reference identity in the YAML configuration file by using the `keyVaultReferenceIdentity` property. For more information about the syntax, see the [Test configuration YAML reference](./reference-test-config-yaml.md).
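For example, assuming a user-assigned identity, the configuration YAML might include the following line (the subscription, resource group, and identity names are placeholders):

```yaml
keyVaultReferenceIdentity: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
```

If you use the system-assigned identity of the load testing resource, you can omit this property.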
+
+You've now specified a secret in Azure Key Vault and configured your Azure Load Testing resource to retrieve its value. You can now move to [Use secrets in Apache JMeter](#jmeter_secrets).
+
+### <a name="cicd_secrets"></a> Use the CI/CD secret store to save load test secrets
+
+You can use Azure Key Vault to pass secret values to your test script in Azure Load Testing. You'll add a reference to the secret in the Azure Load Testing configuration. Azure Load Testing then uses this reference to retrieve the secret value in the Apache JMeter script.
+
+You'll also need to grant Azure Load Testing access to your Azure key vault to retrieve the secret value.
If you're using Azure Load Testing in your CI/CD workflow, you can also use the associated secret store. For example, you can use [GitHub repository secrets](https://docs.github.com/actions/security-guides/encrypted-secrets) or [secret variables in Azure Pipelines](/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch&preserve-view=true#secret-variables).
+You'll first add a secret to the CI/CD secret store. In the CI/CD workflow, you'll then pass the secret value to the Azure Load Testing task/action.
+ > [!NOTE]
+ > If you're already using a key vault, you might also use it to store the load test secrets. Skip to [Use Azure Key Vault](#akv_secrets).
> [!NOTE]
> Be sure to use the actual secret value and not the key vault secret identifier as the value.
-1. Pass the secret as an input parameter for the Load Testing task/action in the CI/CD workflow.
+1. Pass the secret as an input parameter to the Load Testing task/action in the CI/CD workflow.
- The following YAML snippet shows a GitHub Actions example:
+ The following YAML snippet shows how to pass the secret to the [Load Testing GitHub action](https://github.com/marketplace/actions/azure-load-testing):
```yaml
- name: 'Azure Load Testing'
  uses: azure/load-testing@v1
  with:
    # Resource names below are placeholders; use your own values.
    loadTestConfigFile: 'SampleApp.yaml'
    loadTestResource: 'contoso-loadtest'
    resourceGroup: 'contoso-rg'
    secrets: |
      [
        {
          "name": "appToken",
          "value": "${{ secrets.MY_SECRET }}"
        }
      ]
```
- The following YAML snippet shows an Azure Pipelines example:
+ The following YAML snippet shows how to pass the secret to the [Azure Pipelines task](/azure/devops/pipelines/tasks/test/azure-load-testing):
```yml
- task: AzureLoadTest@1
  inputs:
    # Resource names below are placeholders; use your own values.
    azureSubscription: $(serviceConnection)
    loadTestConfigFile: 'SampleApp.yaml'
    loadTestResource: 'contoso-loadtest'
    resourceGroup: 'contoso-rg'
    secrets: |
      [
        {
          "name": "appToken",
          "value": "$(mySecret)"
        }
      ]
```

> [!IMPORTANT]
- > The name of the secret parameter needs to match the name that's used in the Apache JMeter script.
+ > The name of the secret input parameter needs to match the name that's used in the Apache JMeter script.
+
+You've now specified a secret in the CI/CD secret store and passed a reference to Azure Load Testing. You can now use the secret in the Apache JMeter script.
+
+### <a name="jmeter_secrets"></a> Use secrets in Apache JMeter
+
+In this section, you'll update the Apache JMeter script to use the secret that you specified earlier.
+
+You first create a user-defined variable that retrieves the secret value. Then, you can use this variable in your test (for example, to pass an API token in an HTTP request header).
+
+1. Create a user-defined variable in your JMX file, and assign the secret value to it by using the `GetSecret` custom function.
+
+ The `GetSecret(<my-secret-name>)` function takes the secret name as an argument. You use this same name when you configure the load test in a later step.
+
+ You can create the user-defined variable by using the Apache JMeter IDE, as shown in the following image:
+
+ :::image type="content" source="media/how-to-parameterize-load-tests/user-defined-variables.png" alt-text="Screenshot that shows how to add user-defined variables to your Apache JMeter script.":::
+
+ Alternatively, you can directly edit the JMX file, as shown in this example code snippet:
+
+ ```xml
+ <Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
+ <collectionProp name="Arguments.arguments">
+ <elementProp name="appToken" elementType="Argument">
+ <stringProp name="Argument.name">udv_appToken</stringProp>
+ <stringProp name="Argument.value">${__GetSecret(appToken)}</stringProp>
+ <stringProp name="Argument.desc">Value for x-secret header </stringProp>
+ <stringProp name="Argument.metadata">=</stringProp>
+ </elementProp>
+ </collectionProp>
+ </Arguments>
+ ```
+1. Reference the user-defined variable in the test script.
+
+ You can use the `${}` syntax to reference the variable in the script. In the following example, you use the `udv_appToken` variable to set an HTTP header.
+
+ ```xml
+ <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Header Manager" enabled="true">
+ <collectionProp name="HeaderManager.headers">
+ <elementProp name="" elementType="Header">
+ <stringProp name="Header.name">api-key</stringProp>
+ <stringProp name="Header.value">${udv_appToken}</stringProp>
+ </elementProp>
+ </collectionProp>
+ </HeaderManager>
+ ```
## <a name="envvars"></a> Configure load tests with environment variables
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
This article shows how you can create a managed identity for an Azure Load Testi
A managed identity in Azure Active Directory (Azure AD) allows your resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
-Azure Load Testing supports only system-assigned identities. A system-assigned identity is associated with your Azure Load Testing resource and is removed when your resource is deleted. A resource can have only one system-assigned identity.
+Azure Load Testing supports two types of identities:
+
+- A **system-assigned identity** is associated with your Azure Load Testing resource and is removed when your resource is deleted. A resource can have only one system-assigned identity.
+
+- A **user-assigned identity** is a standalone Azure resource that you can assign to your Azure Load Testing resource. When you delete the Load Testing resource, the identity is not removed. You can assign multiple user-assigned identities to the Load Testing resource.
> [!IMPORTANT]
> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To add a system-assigned identity for your Azure Load Testing resource, you need to enable a property on the resource. You can set this property by using the Azure portal or by using an Azure Resource Manager (ARM) template.
-### Use the Azure portal
+# [Portal](#tab/azure-portal)
To set up a managed identity in the portal, you first create an Azure Load Testing resource and then enable the feature.
:::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on system-assigned managed identity for Azure Load Testing.":::
-### Use an ARM template
+# [ARM template](#tab/arm)
You can use an ARM template to automate the deployment of your Azure resources. You can create any resource of type `Microsoft.LoadTestService/loadtests` with an identity by including the following property in the resource definition:
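The snippet itself is cut from this digest; a minimal sketch of the property, using the standard ARM `identity` syntax for a system-assigned identity, is:

```json
"identity": {
    "type": "SystemAssigned"
}
```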
When the resource is created, it gets the following additional properties:
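The digest omits the snippet; in the ARM output these properties appear on the `identity` object, roughly as follows (the GUIDs are placeholders):

```json
"identity": {
    "type": "SystemAssigned",
    "tenantId": "<TENANTID>",
    "principalId": "<PRINCIPALID>"
}
```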
The `tenantId` property identifies which Azure AD tenant the identity belongs to. The `principalId` is a unique identifier for the resource's new identity. Within Azure AD, the service principal has the same name as the Azure Load Testing resource.
+## Set a user-assigned identity
+
+Before you can add a user-assigned identity to an Azure Load Testing resource, you must first create this identity. You can then add the identity by using its resource identifier.
+
+# [Portal](#tab/azure-portal)
+
+1. Create a user-assigned managed identity by following the instructions in [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+
+1. In the [Azure portal](https://portal.azure.com/), go to your Azure Load Testing resource.
+
+1. On the left pane, select **Identity**.
+
+1. Select the **User assigned** tab, and then select **Add**.
+
+1. Search for and select the identity that you created previously. Then select **Add** to add it to the Azure Load Testing resource.
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/user-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on user-assigned managed identity for Azure Load Testing.":::
+
+# [ARM template](#tab/arm)
+
+You can create an Azure Load Testing resource by using an ARM template and the resource type `Microsoft.LoadTestService/loadtests`. You can specify a user-assigned identity in the `identity` section of the resource definition. Replace the `<RESOURCEID>` text placeholder with the resource ID of your user-assigned identity:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {}
+ }
+}
+```
+
+The following code snippet shows an example of an Azure Load Testing ARM resource definition with a user-assigned identity:
+
+```json
+{
+ "type": "Microsoft.LoadTestService/loadtests",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {}
+ }
+  }
+}
+```
+
+After the Load Testing resource is created, Azure provides the `principalId` and `clientId` properties:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {
+ "principalId": "<PRINCIPALID>",
+ "clientId": "<CLIENTID>"
+ }
+ }
+}
+```
+
+The `principalId` is a unique identifier for the identity that's used for Azure AD administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls.
+---
+
## Grant access to your Azure key vault

A managed identity allows the Azure Load Testing resource to access other Azure resources. In this section, you grant the Azure Load Testing service access to read secret values from your key vault.
If you don't already have a key vault, follow the instructions in [Azure Key Vau
:::image type="content" source="media/how-to-use-a-managed-identity/key-vault-add-policy.png" alt-text="Screenshot that shows how to add an access policy to your Azure key vault.":::
-1. Select **Select principal**, and then select the system-assigned principal for your Azure Load Testing resource.
+1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure Load Testing resource.
The name of the system-assigned principal is the same name as the Azure Load Testing resource.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `env` | object | List of environment variables that the Apache JMeter script references. |
| `env.name` | string | Name of the environment variable. This name should match the environment variable name that you use in the Apache JMeter script. |
| `env.value` | string | Value of the environment variable. |
+| `keyVaultReferenceIdentity` | string | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-assigned managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. |
The following example contains the configuration for a load test:
env:
secrets:
- name: my-secret
  value: https://akv-contoso.vault.azure.net/secrets/MySecret
+keyVaultReferenceIdentity: /subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/sample-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/sample-identity
```

## Next steps
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
Previously updated : 01/25/2022
Last updated : 03/28/2022

#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every merge request and/or deployment by using Azure Pipelines.

# Tutorial: Identify performance regressions with Azure Load Testing Preview and Azure Pipelines
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll configure an Azure Pipelines CI/CD workflow with the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll set up an Azure Pipelines CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing). Once the load test finishes, you'll use the Azure Load Testing dashboard to identify performance issues.
+
+You'll deploy a sample Node.js web app on Azure App Service. The web app uses Azure Cosmos DB for storing the data. The sample application also contains an Apache JMeter script to load test three APIs.
If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
You'll learn how to:
## Set up the sample application repository
-To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure an Azure Pipelines workflow to run the load test.
-
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:
-
-* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
-* `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
-* `lasttimestamp`: Updates the time stamp since the last user went to the website.
-
-1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
+To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application contains an Azure Pipelines definition to deploy the application on Azure and trigger a load test.
- The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
-
-1. Select **Fork** to fork the sample application's repository to your GitHub account.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
## Set up Azure Pipelines access permissions for Azure
To access Azure resources, create a service connection in Azure DevOps and use r
## Configure the Azure Pipelines workflow to run a load test
-In this section, you'll set up an Azure Pipelines workflow that triggers the load test.
+In this section, you'll set up an Azure Pipelines workflow that triggers the load test. The sample application repository already contains a pipelines definition file *azure-pipeline.yml*.
-The sample application repository already contains a pipelines definition file. This pipeline first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing). The pipeline uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+The Azure Pipelines workflow performs the following steps for every update to the main branch:
+
+- Deploy the sample Node.js application to an Azure App Service web app.
+- Create an Azure Load Testing resource using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template, if the resource doesn't exist yet. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
+- Trigger Azure Load Testing to create and run the load test by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing), based on the sample Apache JMeter script *SampleApp.jmx* and the load test configuration file *SampleApp.yaml* in the repository.
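The steps above can be sketched as a pipeline definition. This is a minimal sketch, not the repository's *azure-pipeline.yml*: the variable names and task inputs are illustrative assumptions, and the authoritative version lives in the sample repo.

```yaml
trigger:
- main

steps:
# Deploy the sample Node.js app to Azure App Service (illustrative inputs).
- task: AzureWebApp@1
  inputs:
    azureSubscription: $(serviceConnection)
    appName: $(webAppName)
    package: '$(System.DefaultWorkingDirectory)/app.zip'

# Run the load test from the JMeter script and config file in the repo.
- task: AzureLoadTest@1
  inputs:
    azureSubscription: $(serviceConnection)
    loadTestConfigFile: 'SampleApp.yaml'
    loadTestResource: $(loadTestResource)
    resourceGroup: $(loadTestResourceGroup)
```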
+
+Follow these steps to configure the Azure Pipelines workflow for your environment:
1. Install the **Azure Load Testing** task extension from the Azure DevOps Marketplace.
The sample application repository already contains a pipelines definition file.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-review.png" alt-text="Screenshot that shows the Azure Pipelines Review tab when you're creating a pipeline.":::
+ These variables are used to configure the Azure Pipelines tasks for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
+
1. Select **Save and run**, enter text for **Commit message**, and then select **Save and run**.

   :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-save.png" alt-text="Screenshot that shows selections for saving and running a new Azure pipeline.":::
The sample application repository already contains a pipelines definition file.
:::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-status.png" alt-text="Screenshot that shows how to view pipeline job details.":::
-## View results of a load test
-
-For every update to the main branch, the Azure pipeline executes the following steps:
-
-- Deploy the sample Node.js application to an Azure App Service web app. The name of the web app is configured in the pipeline definition.
-- Create an Azure Load Testing resource using the Azure Resource Manager (ARM) template present in the GitHub repository. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
-- Trigger Azure Load Testing to create and run the load test, based on the Apache JMeter script and the test configuration YAML file in the repository.
+## View load test results
To view the results of the load test in the pipeline log:
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
Previously updated : 01/27/2022
Last updated : 03/28/2022

#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every pull request and/or deployment by using GitHub Actions.

# Tutorial: Identify performance regressions with Azure Load Testing Preview and GitHub Actions
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and GitHub Actions. You'll configure a GitHub Actions CI/CD workflow and use the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and GitHub Actions. You'll set up a GitHub Actions CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing action](https://github.com/marketplace/actions/azure-load-testing). Once the load test finishes, you'll use the Azure Load Testing dashboard to identify performance issues.
+
+You'll deploy a sample Node.js web app on Azure App Service. The web app uses Azure Cosmos DB for storing the data. The sample application also contains an Apache JMeter script to load test three APIs.
If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
You'll learn how to:
* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * A GitHub account where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-## Set up your repository
-
-To get started, you need a GitHub repository with the sample web application. You'll use this repository to configure a GitHub Actions workflow to run the load test.
-
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:
-
-* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
-* `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
-* `lasttimestamp`: Updates the time stamp since the last user went to the website.
-
-1. Open a browser and go to the sample application's [source GitHub repository](https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git).
-
- The sample application is a Node.js app that consists of an Azure App Service web component and an Azure Cosmos DB database.
+## Set up the sample application repository
-1. Select **Fork** to fork the sample application's repository to your GitHub account.
+To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application contains a GitHub Actions workflow definition to deploy the application on Azure and trigger a load test.
- :::image type="content" source="./media/tutorial-cicd-github-actions/fork-github-repo.png" alt-text="Screenshot that shows the button to fork the sample application's GitHub repo.":::
## Set up GitHub access permissions for Azure
First, you'll create an Azure Active Directory [service principal](../active-dir
1. Copy this JSON object, which you can use to authenticate from GitHub.
-1. Grant permissions to the service principal to create and run tests with Azure Load Testing. The Load Test Contributor role grants permissions to create, manage and run tests in an Azure Load Testing resource.
+1. Grant permissions to the service principal to create and run tests with Azure Load Testing. The **Load Test Contributor** role grants permissions to create, manage and run tests in an Azure Load Testing resource.
First, retrieve the ID of the service principal object by running this Azure CLI command:
First, you'll create an Azure Active Directory [service principal](../active-dir
### Configure the GitHub secret
-You'll add a GitHub secret to your repository for the service principal you created in the previous step. The Azure Login action uses this secret to authenticate with Azure.
+You'll add a GitHub secret **AZURE_CREDENTIALS** to your repository for the service principal you created in the previous step. The Azure Login action in the GitHub Actions workflow uses this secret to authenticate with Azure.
1. In [GitHub](https://github.com), browse to your forked repository, select **Settings** > **Secrets** > **New repository secret**. :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret.png" alt-text="Screenshot that shows selections for adding a new repository secret to your GitHub repo.":::
-1. Paste the JSON role assignment credentials that you copied previously, as the value of secret variable *AZURE_CREDENTIALS*.
+1. Paste the JSON role assignment credentials that you copied previously, as the value of secret variable **AZURE_CREDENTIALS**.
:::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret-details.png" alt-text="Screenshot that shows the details of the new GitHub repository secret.":::
jobs:
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
-You've now authenticated with Azure from the GitHub. You'll now configure the CI/CD workflow to run a load test by using Azure Load Testing.
+You've now authorized your GitHub Actions workflow to access your Azure Load Testing resource. You'll now configure the CI/CD workflow to run a load test by using Azure Load Testing.
## Configure the GitHub Actions workflow to run a load test
-In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing). The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing). The GitHub Actions workflow uses an environment variable to pass the URL of the web application to the Apache JMeter script.
-Update the *SampleApp.yaml* GitHub Actions workflow file to configure the parameters for running the load test.
+The GitHub Actions workflow performs the following steps for every update to the main branch:
+
+- Deploy the sample Node.js application to an Azure App Service web app.
+- Create an Azure Load Testing resource using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template, if the resource doesn't exist yet. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
+- Invoke Azure Load Testing by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing), with the sample Apache JMeter script *SampleApp.jmx* and the load test configuration file *SampleApp.yaml*.
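The steps above can be sketched as a workflow file. This is a minimal sketch, not the repository's actual *SampleApp.yaml*: the job name and environment-variable names are illustrative assumptions, and the authoritative version lives in the sample repo.

```yaml
on:
  push:
    branches: [ main ]

jobs:
  loadTest:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2

    # Authenticate with the AZURE_CREDENTIALS repository secret.
    - uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    # Run the load test from the JMeter script and config file in the repo.
    - uses: azure/load-testing@v1
      with:
        loadTestConfigFile: 'SampleApp.yaml'
        loadTestResource: ${{ env.LOAD_TEST_RESOURCE }}
        resourceGroup: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
```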
+
+Follow these steps to configure the GitHub Actions workflow for your environment:
1. Open the *.github/workflows/workflow.yml* GitHub Actions workflow file in your sample application's repository.
Update the *SampleApp.yaml* GitHub Actions workflow file to configure the parame
LOAD_TEST_RESOURCE_GROUP: "<your-azure-load-testing-resource-group-name>" ```
+ These variables are used to configure the GitHub Actions workflow steps for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
1. Commit your changes directly to the main branch.

   :::image type="content" source="./media/tutorial-cicd-github-actions/commit-workflow.png" alt-text="Screenshot that shows selections for committing changes to the GitHub Actions workflow file.":::

   The commit will trigger the GitHub Actions workflow in your repository. You can verify that the workflow is running by going to the **Actions** tab.
-## View results of a load test
-
-The GitHub Actions workflow executes the following steps for every update to the main branch:
-
-- Deploy the sample Node.js application to an Azure App Service web app. The name of the web app is configured in the workflow file.
-- Create an Azure Load Testing resource using the Azure Resource Manager (ARM) template present in the GitHub repository. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
-- Trigger Azure Load Testing to create and run the load test based on the Apache JMeter script and the test configuration YAML file in the repository.
+## View load test results
To view the results of the load test in the GitHub Actions workflow log:
logic-apps Connect Virtual Network Vnet Isolated Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
For more information, review the [differences between multi-tenant Azure Logic A
## How an ISE works with a virtual network
-When you create an ISE, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/).
+When you create an ISE, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/).
![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
### Outbound IP addresses
-For Azure Logic Apps to send outgoing communication through your firewall, you have to allow traffic through *all* the outbound IP addresses described in this section for your logic app's Azure region. If you're using Azure Government, see [Azure Government - Outbound IP addresses](#azure-government-outbound). If your workflow also uses any [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses any [custom connectors](/connectors/custom-connectors/), your firewall has to allow traffic through *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in your logic app's Azure region. If your workflow uses custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding [*managed connector* outbound IP addresses](/connectors/common/outbound-ip-addresses). For more information about setting up communication settings on the gateway, review these topics:
+For Azure Logic Apps to send outgoing communication through your firewall, you have to allow traffic in your logic app's Azure region for *all the outbound IP addresses* described in this section. If you're using Azure Government, see [Azure Government - Outbound IP addresses](#azure-government-outbound).
+
+Also, if your workflow uses any [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses any [custom connectors](/connectors/custom-connectors/), your firewall has to allow traffic in your logic app's Azure region for [*all the managed connector outbound IP addresses*](/connectors/common/outbound-ip-addresses/#azure-logic-apps). If your workflow uses custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding [*managed connector* outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps). For more information about setting up communication settings on the gateway, review these topics:
* [Adjust communication settings for the on-premises data gateway](/data-integration/gateway/service-gateway-communication) * [Configure proxy settings for the on-premises data gateway](/data-integration/gateway/service-gateway-proxy)
For Azure Logic Apps to send outgoing communication through your firewall, you h
#### Multi-tenant & single-tenant - Outbound IP addresses
+This section lists the outbound IP addresses that Azure Logic Apps requires in your logic app's Azure region to communicate through your firewall. Also, if your workflow uses any managed connectors or custom connectors, your firewall has to allow traffic in your logic app's Azure region for [*all the managed connectors' outbound IP addresses*](/connectors/common/outbound-ip-addresses/#azure-logic-apps). If you have custom connectors that access on-premises resources through the on-premises data gateway resource in Azure, set up your *gateway installation* to allow access for the corresponding managed connector outbound IP addresses.
+ | Region | Logic Apps IP | |--|| | Australia East | 13.75.149.4, 104.210.91.55, 104.210.90.241, 52.187.227.245, 52.187.226.96, 52.187.231.184, 52.187.229.130, 52.187.226.139, 20.53.93.188, 20.53.72.170, 20.53.107.208, 20.53.106.182 |
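The firewall requirement above amounts to a membership check: every outbound address for your region must appear in the allow-list. A minimal illustrative sketch (the helper below is not part of any Azure SDK; the sample addresses come from the Australia East row above):

```python
import ipaddress

# A few of the Australia East outbound IPs from the table above.
AUSTRALIA_EAST_IPS = {"13.75.149.4", "104.210.91.55", "104.210.90.241"}

def is_allowed(ip: str, allow_list: set) -> bool:
    """Return True if the normalized outbound IP is in the firewall allow-list."""
    return str(ipaddress.ip_address(ip)) in allow_list

print(is_allowed("13.75.149.4", AUSTRALIA_EAST_IPS))  # True
```

In practice you would load the full regional list and verify each address against your firewall rules rather than a hard-coded set.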
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
To set up a release pipeline that deploys to Azure, follow the associated steps
#### [GitHub](#tab/github)
-For GitHub deployments, you can deploy your logic app by using [GitHub Actions](https://docs.github.com/actions), for example, the GitHub Action in Azure Functions. This action requires that you pass through the following information:
+For GitHub deployments, you can deploy your logic app by using [GitHub Actions](https://docs.github.com/actions), for example, the GitHub Action for Azure Functions. This action requires that you pass through the following information:
- The logic app name to use for deployment - The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files.
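The zip artifact described above can be produced with standard tooling; a minimal sketch using Python's standard library (the folder layout below is hypothetical, standing in for your real project with its workflow folders, host.json, and connections.json):

```python
import os
import shutil
import tempfile

# Hypothetical project layout mirroring the artifact contents described above:
# a workflow folder plus host.json and connections.json at the root.
project = tempfile.mkdtemp()
os.makedirs(os.path.join(project, "MyWorkflow"))
for name in ("host.json", "connections.json"):
    with open(os.path.join(project, name), "w") as f:
        f.write("{}")

# Write the archive outside the folder being zipped, then package everything.
out = os.path.join(tempfile.mkdtemp(), "logicapp-build")
archive = shutil.make_archive(out, "zip", root_dir=project)
```

The resulting `archive` path is what you would hand to the deployment step as the zip file parameter.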
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
For multi-class classification, the dataset can contain several text columns and
text,labels "I love watching Chicago Bulls games.","NBA" "Tom Brady is a great player.","NFL"
-"There is a game between Yankees and Orioles tonight","NFL"
+"There is a game between Yankees and Orioles tonight","MLB"
"Stephen Curry made the most number of 3-Pointers","NBA" ```
NER only | - The file should not start with an empty line <br> - Each line must
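The multi-class dataset shown earlier is plain CSV with quoted `text` and `labels` fields; a quick sanity check with Python's `csv` module (the sample rows are taken from the example above):

```python
import csv
import io

sample = '''text,labels
"I love watching Chicago Bulls games.","NBA"
"Tom Brady is a great player.","NFL"
"There is a game between Yankees and Orioles tonight","MLB"
'''

# DictReader uses the header row as field names, so each row maps
# "text" and "labels" to the quoted values.
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[2]["labels"])  # MLB
```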
Automated ML's NLP capability is triggered through `AutoMLConfig`, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set most of the parameters as you would for those experiments, such as `task`, `compute_target` and data inputs.
-However, there are key differences include:
+However, there are key differences:
* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML trains only one model per run for NLP, and there is no model selection. * The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
+* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. For this scenario, you can enable the long range text option with the `enable_long_range_text=True` parameter in your `AutoMLConfig`. Doing so helps improve model performance, but requires longer training times.
+ * If you enable long range text, a GPU with higher memory is required, such as the [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series.
+ * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
```python automl_settings = { "verbosity": logging.INFO,
+ "enable_long_range_text": True, # You only need to set this parameter if you want to enable the long-range text setting
} automl_config = AutoMLConfig(
https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-
## Next steps + Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-+ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
++ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced int
- To use the CLI, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. - To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
+## Limitations
++ ## Installation The new Machine Learning extension **requires Azure CLI version `>=2.15.0`**. Ensure this requirement is met:
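Note that the `>=2.15.0` requirement above is a numeric comparison of version components, not a string comparison (lexicographically, "2.9.0" would sort after "2.15.0"). A small illustrative sketch of the correct comparison:

```python
def meets_requirement(installed: str, minimum: str = "2.15.0") -> bool:
    """Compare dotted version strings component by component, numerically."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(meets_requirement("2.34.1"))  # True
print(meets_requirement("2.9.0"))   # False
```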
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
In this article, you learn how to create and manage Azure Machine Learning works
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)] + [!INCLUDE [application-insight](../../includes/machine-learning-application-insight.md)] ## Connect the CLI to your Azure subscription
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
TF_CONFIG='{
- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed)
-## <a name="infiniband"></a> Accelerating GPU training with InfiniBand
+## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
-Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and Infiniband support. These VMs communicate over the low latency and high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tooling, including NVIDIA's NCCL software.) These SKUs are intended to meet the needs of computationally intensive, GPU-acclerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
+As the number of VMs training a model increases, the time required to train that model should decrease. Ideally, the training time should be inversely proportional to the number of training VMs. For instance, if training a model on one VM takes 100 seconds, then training the same model on two VMs should ideally take 50 seconds. Training the model on four VMs should take 25 seconds, and so on.
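The ideal scaling described above is simple inverse proportionality; as a worked check using the figures from the text:

```python
def ideal_training_time(single_vm_seconds: float, num_vms: int) -> float:
    """Ideal linear scaling: training time is inversely proportional to VM count."""
    return single_vm_seconds / num_vms

# With 100 seconds on one VM, the ideal times are 100, 50, and 25 seconds.
for n in (1, 2, 4):
    print(n, ideal_training_time(100, n))
```

Real workloads fall short of this ideal because of communication overhead between nodes, which is why the interconnect matters.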
-If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, such as `Standard_ND40rs_v2`, the OS image will come with the Mellanox OFED driver required to enable InfiniBand preinstalled and preconfigured.
+InfiniBand can be an important factor in attaining this linear scaling. InfiniBand enables low-latency, GPU-to-GPU communication across nodes in a cluster. InfiniBand requires specialized hardware to operate. Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and InfiniBand support. These VMs communicate over the low-latency, high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tooling, including NVIDIA's NCCL software). These SKUs are intended to meet the needs of computationally intensive, GPU-accelerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
+
+Typically, VM SKUs with an 'r' in their name contain the required InfiniBand hardware, and those without an 'r' usually do not. ('r' is a reference to RDMA, which stands for "remote direct memory access.") For instance, the VM SKU `Standard_NC24rs_v3` is InfiniBand-enabled, but the SKU `Standard_NC24s_v3` is not. Aside from the InfiniBand capabilities, the specs between these two SKUs are largely the same; both have 24 cores, 448 GB RAM, 4 GPUs of the same SKU, etc. [Learn more about RDMA- and InfiniBand-enabled machine SKUs](../virtual-machines/sizes-hpc.md#rdma-capable-instances).
+
+>[!WARNING]
+>The older-generation machine SKU `Standard_NC24r` is RDMA-enabled, but it does not contain SR-IOV hardware required for InfiniBand.
+
+If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, the OS image will come with the Mellanox OFED driver required to enable InfiniBand preinstalled and preconfigured.
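The naming convention above can be sketched as a simple heuristic. This is only a rule of thumb, not an official API, and the warning above notes at least one exception (`Standard_NC24r` is RDMA-enabled but lacks SR-IOV InfiniBand hardware):

```python
def looks_rdma_capable(sku: str) -> bool:
    """Heuristic from the naming convention above: an 'r' before the
    version suffix usually indicates RDMA/InfiniBand capability.
    Standard_NC24r is a documented exception (RDMA without SR-IOV)."""
    size = sku.split("_", 1)[1].lower()   # e.g. "nc24rs_v3"
    base = size.split("_")[0]             # strip the "_v3" version suffix
    return base.endswith(("r", "rs"))

print(looks_rdma_capable("Standard_NC24rs_v3"))  # True
print(looks_rdma_capable("Standard_NC24s_v3"))   # False
```

When in doubt, confirm against the RDMA-capable instances list linked above rather than relying on the name alone.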
## Next steps
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
Previously updated : 09/28/2020 Last updated : 03/21/2022
Whether you're training a machine learning scikit-learn model from the ground-up
## Prerequisites
-Run this code on either of these environments:
+You can run this code in either an Azure Machine Learning compute instance or your own Jupyter Notebook:
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- - In the samples training folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > scikit-learn > train-hyperparameter-tune-deploy-with-sklearn** folder.
+ - Azure Machine Learning compute instance
+ - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a compute instance. Every compute instance includes a dedicated notebook server pre-loaded with the SDK and the notebooks sample repository.
+ - Select the notebook tab in the Azure Machine Learning studio. In the samples training folder, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > scikit-learn > train-hyperparameter-tune-deploy-with-sklearn** folder.
+ - You can use the pre-populated code in the sample training folder to complete this tutorial.
+ - Create a Jupyter Notebook server and run the code in the following sections.
- [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.13.0). - [Create a workspace configuration file](how-to-configure-environment.md#workspace).
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
In this tutorial, you learn the following tasks:
## Prerequisites -- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you're using the free version, use a CPU cluster for training instead of GPU.
+- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you're using the free subscription, only CPU clusters are supported.
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor. - Azure Machine Learning Studio Visual Studio Code extension. For install instructions see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md) - CLI (v2) (preview). For installation instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md)
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-setup.md
Previously updated : 12/03/2021 Last updated : 03/28/2022 # Create an Azure application offer
If you haven't already done so, read [Plan an Azure application offer for the
* This name is only visible in Partner Center and it's different from the offer name and other values shown to customers. * The Offer alias can't be changed after you select **Create**.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+ 1. To generate the offer and continue, select **Create**. ## Configure your Azure application offer setup details
marketplace Azure Container Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-offer-setup.md
Previously updated : 09/27/2021 Last updated : 03/28/2022 # Create an Azure Container offer
Review [Plan an Azure Container offer](marketplace-containers.md). It will expla
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
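The Offer ID rules above can be expressed as a single pattern. The regex below is an illustration of the stated rules, not an official Partner Center validator, and it assumes the ID starts with a lowercase letter or number:

```python
import re

# Lowercase letters and numbers; hyphens and underscores allowed;
# no spaces; at most 50 characters total.
OFFER_ID = re.compile(r"^[a-z0-9][a-z0-9_-]{0,49}$")

print(bool(OFFER_ID.match("test-offer-1")))  # True
print(bool(OFFER_ID.match("Test Offer")))    # False
```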
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on Azure Marketplace. It is different from the offer name and other values shown to customers.
+ - This name isn't used on Azure Marketplace. It is different from the offer name and other values shown to customers.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Azure Vm Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-offer-setup.md
Previously updated : 10/15/2021 Last updated : 03/28/2022 # Create a virtual machine offer on Azure Marketplace
If you haven't done so yet, review [Plan a virtual machine offer](marketplace-vi
[ ![Screenshot showing the left pane menu options and the "New offer" button.](./media/create-vm/new-offer-azure-virtual-machine-workspaces.png) ](./media/create-vm/new-offer-azure-virtual-machine-workspaces.png#lightbox)
-> [!NOTE]
-> After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after making changes to it.
+ > [!NOTE]
+ > After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after making changes to it.
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the Azure Marketplace offer and in Azure PowerShell and the Azure CLI, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the Azure Marketplace offer and in Azure PowerShell and the Azure CLI, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. The offer alias is the name that's used for the offer in Partner Center.
+1. Enter an **Offer alias**. The offer alias is the name that's used for the offer in Partner Center.
-- This name is not used on Azure Marketplace. It is different from the offer name and other values that are shown to customers.
+ - This name is not used on Azure Marketplace. It is different from the offer name and other values that are shown to customers.
-Select **Create** to generate the offer and continue. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer and continue. Partner Center opens the **Offer setup** page.
## Test drive (optional)
marketplace Create Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-offer.md
Previously updated : 09/27/2021 Last updated : 03/28/2022 # Create a consulting service offer
To publish a consulting service offer, you must meet certain eligibility require
* The offer ID can't be changed after you select **Create**. 1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.+
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+ 1. To generate the offer and continue, select **Create**. ## Configure lead management
marketplace Create Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-managed-service-offer.md
Previously updated : 09/27/2021 Last updated : 03/28/2022 # Create a Managed Service offer for the commercial marketplace
To publish a Managed Service offer, you must have earned a Gold or Silver Micros
- The Offer ID can't be changed after you select **Create**. 1. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.+
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+ 1. To generate the offer and continue, select **Create**. ## Setup details
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer.md
Previously updated : 09/27/2021 Last updated : 03/28/2022 # Create a SaaS offer
If you haven't already done so, read [Plan a SaaS offer](plan-saas-offer.md).
+ This name isn't visible in the commercial marketplace and it's different from the offer name and other values shown to customers. + The offer alias can't be changed after you select **Create**.+
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+ 1. To generate the offer and continue, select **Create**. ## Configure your SaaS offer setup details
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
Previously updated : 11/24/2021 Last updated : 03/28/2022 # Create a Dynamics 365 Business Central offer
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It explains the
## New offer
-In the dialog box that appears, enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. In the dialog box that appears, enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on AppSource. It is different from the offer name and other values shown to customers.-- This name can't be changed after you select **Create**.
+ - This name isn't used on AppSource. It is different from the offer name and other values shown to customers.
+ - This name can't be changed after you select **Create**.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Previously updated : 12/03/2021 Last updated : 03/28/2022 # Create a Dynamics 365 apps on Dataverse and Power Apps offer
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on AppSource. It is different from the offer name and other values shown to customers.-- This name can't be changed after you select **Create**.
+ - This name isn't used on AppSource. It is different from the offer name and other values shown to customers.
+ - This name can't be changed after you select **Create**.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can simply accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
Previously updated : 12/03/2021 Last updated : 03/28/2022 # Create a Dynamics 365 Operations Apps offer
Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on AppSource. It is different from the offer name and other values shown to customers.-- This name can't be changed after you select **Create**.
+ - This name isn't used on AppSource. It is different from the offer name and other values shown to customers.
+ - This name can't be changed after you select **Create**.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Iot Edge Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-offer-setup.md
Previously updated : 09/27/2021 Last updated : 03/28/2022 # Create an IoT Edge Module offer
Review [Plan an IoT Edge Module offer](marketplace-iot-edge.md). It will explain
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
-<! The Offer ID combined with the Publisher ID must be under 50 characters in length.-->
-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+  <!-- The Offer ID combined with the Publisher ID must be under 50 characters in length. -->
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on AppSource. It is different from the offer name and other values shown to customers.-- This name can't be changed after you select **Create**.
+ - This name isn't used on AppSource. It is different from the offer name and other values shown to customers.
+ - This name can't be changed after you select **Create**.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Power Bi App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-app-offer-setup.md
Previously updated : 11/22/2021 Last updated : 03/28/2022 # Create a Power BI app offer
If **Power BI App** isn't shown or enabled, your account doesn't have permission
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.-- The Offer ID can't be changed after you select **Create**.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used on AppSource. It is different from the offer name and other values shown to customers.-- This name can't be changed after you select **Create**.
+ - This name isn't used on AppSource. It is different from the offer name and other values shown to customers.
+ - This name can't be changed after you select **Create**.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Alias
marketplace Power Bi Visual Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-setup.md
Previously updated : 11/22/2021 Last updated : 03/28/2022 # Create a Power BI visual offer
Review [Plan a Power BI visual offer](marketplace-power-bi-visual.md). It will e
## New offer
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+1. Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.-- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.-- The Offer ID can't be changed after you select **Create**.-- The Offer ID should be unique within the list of all other Power BI visual offers in Partner Center.
+ - This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+ - Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+ - The Offer ID can't be changed after you select **Create**.
+ - The Offer ID should be unique within the list of all other Power BI visual offers in Partner Center.
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used in AppSource. It is different from the offer name and other values shown to customers.
+ - This name isn't used in AppSource. It is different from the offer name and other values shown to customers.
-Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+1. Associate the new offer with a _publisher_. A publisher represents an account for your organization. You may need to create the offer under a particular publisher. If you don't, you can accept the publisher account you're signed in to.
+
+ > [!NOTE]
+ > The selected publisher must be enrolled in the [**Commercial Marketplace program**](marketplace-faq-publisher-guide.yml#how-do-i-sign-up-to-be-a-publisher-in-the-microsoft-commercial-marketplace-) and cannot be modified after the offer is created.
+
+1. Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
## Setup details
marketplace Switch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/switch-accounts.md
Previously updated : 01/20/2022 Last updated : 03/28/2022 # Switch accounts in Partner Center
If you don't see the *account picker*, you are part of one account only. You can
1. In the upper-right, select **Settings** (gear icon) > **Account settings**. 1. In the left-menu, under **Organization profile**, select **Legal info**. Then select the **Developer** tab. -
-You can then select any of account on the list to switch to that account. After you switch, everything in Partner Center appears in the context of that account.
+You can then select any of the accounts on the list to switch to that account. After you switch, everything in Partner Center appears in the context of that account.
> [!NOTE] > Partner Center uses [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) for multi-user account access and management. Your organization's Azure AD is automatically associated with your Partner Center account as part of the enrollment process.
-In the following example, the signed-in user is part of the four highlighted accounts. The user can switch between them by clicking on an account.
+In the following example, the signed-in user is part of the four highlighted accounts. The user can switch between them by selecting an account.
[ ![Screenshot of accounts that can be selected with the account picker.](./media/manage-accounts/account-picker-two-workspaces.png) ](./media/manage-accounts/account-picker-two-workspaces.png#lightbox)
+## Manage offers across company accounts
+
+In the marketplace offers workspace, you no longer have to switch accounts to see the offers created under a specific account. The workspace lets you manage offers across all the accounts you have access to in a single view. You can associate new offers with specific publishers who are eligible to publish in either the commercial marketplace or Office Store programs.
+
+> [!IMPORTANT]
+> The account picker may still be required in some scenarios within the _Marketplace offers_ workspace. Publishers enrolled in a single program, either the commercial marketplace or Office Store, must switch accounts to access offers in a different program. Companies with more than 75 unique publishers can't manage offers across company accounts.
+ ## Next steps - [Add and manage users for the commercial marketplace](add-manage-users.md)
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
Title: Common questions about Azure Migrate Server Migration description: Get answers to common questions about using Azure Migrate Server Migration to migrate machines.--
-ms.
++
+ms.
Last updated 08/28/2020
We do not recommend using the recovery services vault created by Azure Migrate f
### What is the difference between the Test Migration and Migrate operations?
-Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you create test copies of replicating VMs in a sandbox environment in Azure. The sandbox environment is demarcated by a test virtual network you specify. The test migration operation is non-disruptive, with applications continuing to run at the source while letting you perform tests on a cloned copy in an isolated sandbox environment. You can perform multiple tests as needed to validate the migration, perform app testing, and address any issues before the actual migration.
+Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you use a sandbox environment in Azure to test virtual machines before the actual migration. The sandbox environment is demarcated by a test virtual network that you specify. The test migration operation is non-disruptive, provided the test VNet is sufficiently isolated. An isolated VNet here means its inbound and outbound connection rules are designed to avoid unwanted connections; for example, connections to on-premises machines are restricted.
+The applications can continue to run at the source while you perform tests on a cloned copy in an isolated sandbox environment. You can perform multiple tests, as needed, to validate the migration, perform app testing, and address any issues before the actual migration.
+ ### Is there a Rollback option for Azure Migrate?
migrate How To Test Replicating Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-test-replicating-virtual-machines.md
+
+ Title: Test migrate replicating virtual machines
+description: Learn best practices for testing replicating virtual machines
++
+ms.
+ Last updated : 3/23/2022+++
+# Test migrate replicating virtual machines
+
+This article helps you understand how to test replicating virtual machines. Test migration provides a way to test and validate migrations prior to the actual migration.
+++
+## Prerequisites
+
+Before you get started, you need to perform the following steps:
+
+- Create the Azure Migrate project.
+- Deploy the appliance for your scenario and complete discovery of virtual machines.
+- Configure replication for one or more virtual machines that are to be migrated.
+> [!IMPORTANT]
+> You'll need to have at least one replicating virtual machine in the project before you can perform test migration.
+
+To learn how to perform the above, review the following tutorials, based on your scenario:
+- [Migrating VMware virtual machines to Azure with the agentless migration method](./tutorial-migrate-vmware.md).
+- [Migrating Hyper-V VMs to Azure with Azure Migrate Server Migration](./tutorial-migrate-hyper-v.md)
+- [Migrating machines as physical server to Azure with Azure Migrate.](./tutorial-migrate-physical-virtual-machines.md)
++
+## Setting up your test environment
+
+The requirements for a test environment can vary. Azure Migrate gives you complete flexibility to create your own test environment: you select a VNet during test migration, and you can customize its settings to build the environment you need.
+
+Furthermore, you can create a 1:1 mapping between subnets of the VNet and the Network Interface Cards (NICs) on the VM, which gives more flexibility in creating the test environment.
+
+> [!Note]
+> Currently, the subnet selection feature is available only for the agentless VMware migration scenario.
+
+The following logic is used for subnet selection in the other scenarios (migration from a Hyper-V environment and physical server migration):
+
+- If a target subnet (other than default) was specified as an input while enabling replication, Azure Migrate prioritizes using a subnet with the same name in the virtual network selected for the test migration.
+
+- If a subnet with the same name isn't found, Azure Migrate selects the first subnet available alphabetically that isn't a Gateway/Application Gateway/Firewall/Bastion subnet. For example:
+
+  - Suppose the target VNet is VNet-alpha and the target subnet is Subnet-alpha for a replicating VM, and VNet-beta is selected during test migration for this VM. Then:
+    - If VNet-beta has a subnet named Subnet-alpha, that subnet is chosen for the test migration.
+    - If VNet-beta doesn't have a Subnet-alpha, the next alphabetically available subnet (say, Subnet-beta) is chosen, provided it isn't a Gateway/Application Gateway/Firewall/Bastion subnet.
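+
+The fallback order described above can be sketched with the Azure CLI. This is an illustrative query only, not part of the product: the resource group name is a placeholder, and the filter approximates the reserved subnets (Application Gateway subnets have no fixed reserved name, so they aren't filtered here).
+
+```azurecli
+# Hypothetical sketch: list the subnets of the test VNet, drop the reserved
+# Gateway/Firewall/Bastion subnets, and pick the first remaining subnet
+# alphabetically -- mirroring the fallback Azure Migrate applies when no
+# same-named subnet exists.
+az network vnet subnet list --resource-group <resource-group> --vnet-name VNet-beta \
+  --query "[?name!='GatewaySubnet' && name!='AzureFirewallSubnet' && name!='AzureBastionSubnet'].name" \
+  --output tsv | sort | head -n 1
+```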
+
+## Precautions to take selecting the test migration virtual network
+
+The test environment boundaries depend on the network settings of the VNet you select. The test VM behaves exactly as the migrated VM will after migration. We don't recommend performing a test migration to a production virtual network: problems such as duplicate VMs or DNS entry changes can arise if the VNet selected for test migration has connections open to the on-premises network.
++
+## Selecting test migration VNet while enabling replication (Agentless VMware migration)
+
+ Select the VNet and subnet for test migration from the **Target settings** tab. These settings can be overridden later in the **Compute and Network** tab of the replicating VM, or while starting the test migration of the replicating VM.
++
+## Changing test migration virtual network and subnet of a replicating machine (Agentless VMware migration)
+
+You can change the VNet and subnet of a replicating machine by following the steps below.
+
+1. Select the virtual machine from the list of currently replicating virtual machines.
+
+   :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-1.png" alt-text="Screenshot shows the contents of the replicating machines screen. It contains a list of replicating machines.":::
+
+2. Select the **Compute and Network** option under the **General** heading.
+
+ :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-2.png" alt-text="Screenshot shows the location of network and compute option on the details page of replicating machine.":::
+
+3. Select the virtual network under the **Test migration** column. It's important to select the VNet in this dropdown so that you can select a subnet for each Network Interface Card (NIC) in the following steps.
+
+ :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-3.png" alt-text="Screenshot shows where to select VNet in replicating machine's network and compute options.":::
+
+4. Select the Network Interface Card's name to check its settings. You can select the subnet for each Network Interface Card (NIC) of the VM.
+
+ :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-4.png" alt-text="Screenshot shows how to select a subnet for each Network Interface Card of replicating machine in the network and compute options of replicating machine.":::
+
+5. To change the settings, select the pencil icon. Change the settings for the Network Interface Card (NIC) in the form that appears, and then select **OK**.
+ :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-5.png" alt-text="Screenshot shows the content of the Network Interface Card page after clicking the pencil icon next to Network Interface Card's name in the network and compute screen.":::
+
+6. Select **Save**. Changes aren't saved until the colored square next to the Network Interface Card's (NIC) name disappears.
+
+ :::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-6.png" alt-text="Screenshot shows the network and compute options screen of replicating machine and highlights the save button.":::
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
description: Use Azure Migrate private link support to discover, assess, and mig
ms.+ Last updated 05/10/2020
The role permissions for the Azure Resource Manager vary depending on the type o
1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
-1. Select **+ Add**, and select **Add role assignment**.
+1. Select **Add** > **Add role assignment**.
- ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows access control (IAM) page with Add role assignment menu open.":::
-1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously, and select **Save**.
+1. On the **Role** tab, select the appropriate role from the permissions list previously mentioned. Also, select the name of the vault noted previously.
- ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
+ ![Screenshot that shows Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
migrate Tutorial Migrate Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md
Title: Migrate VMware VMs agentless Azure Migrate Server Migration description: Learn how to run an agentless migration of VMware VMs with Azure Migrate.--
-ms.
++
+ms.
Last updated 06/09/2020
Do a test migration as follows:
![Test migration](./media/tutorial-migrate-vmware/test-migrate.png)
-3. In **Test Migration**, select the Azure VNet in which the Azure VM will be located after the migration. We recommend you use a non-production VNet.
-4. The **Test migration** job starts. Monitor the job in the portal notifications.
-5. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**.
-6. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**.
+3. In **Test migration**, select the Azure VNet in which the Azure VM will be located during testing. We recommend you use a non-production VNet.
+4. Choose the subnet to which you would like to associate each of the Network Interface Cards (NICs) of the migrated VM.
+
+ :::image type="content" source="./media/tutorial-migrate-vmware/test-migration-subnet-selection.png" alt-text="Screenshot shows subnet selection during test migration.":::
+
+5. The **Test migration** job starts. Monitor the job in the portal notifications.
+6. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**.
+7. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**.
![Clean up migration](./media/tutorial-migrate-vmware/clean-up.png)
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (March 2022)
+- General Availability: Support to select subnets for each Network Interface Card of a replicating virtual machine in the VMware agentless migration scenario.
## Update (February 2022) - General Availability: Migrate Windows and Linux Hyper-V virtual machines with large data disks (up to 32 TB in size). - Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china).
openshift Howto Secure Openshift With Front Door Feb 22 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door-feb-22.md
+
+ Title: Secure access to Azure Red Hat OpenShift with Azure Front Door
+description: This article explains how to use Azure Front Door to secure access to Azure Red Hat OpenShift applications.
++++ Last updated : 12/07/2021
+keywords: azure, openshift, red hat, front door
+#Customer intent: I need to understand how to secure access to Azure Red Hat OpenShift applications with Azure Front Door.
++
+# Secure access to Azure Red Hat OpenShift with Azure Front Door
+
+This article explains how to use Azure Front Door Premium to secure access to Azure Red Hat OpenShift.
+
+## Prerequisites
+
+The following prerequisites are required:
+
+- You have an existing Azure Red Hat OpenShift cluster. For information on creating an Azure Red Hat OpenShift cluster, see [Create an AKS cluster](../aks/kubernetes-walkthrough-portal.md#create-an-aks-cluster).
+
+- The cluster is configured with private ingress visibility.
+
+- A custom domain name is used, for example:
+
+ `example.com`
+
+> [!NOTE]
+> The initial state doesn't have DNS configured.
+> No applications are exposed externally from the Azure Red Hat OpenShift cluster.
+
+## Create an Azure Private Link service
+
+This section explains how to create an Azure Private Link service. An Azure Private Link service is a reference to your own service that is powered by Azure Private Link.
+
+Your service, which runs behind the Azure Standard Load Balancer, can be enabled for Private Link access so that consumers of your service can access it privately from their own VNets. Your customers can create a private endpoint inside their VNet and map it to this service.
+
+For more information about the Azure Private Link service and how it's used, see [Azure Private Link service](../private-link/private-link-service-overview.md).
+
+Create an **AzurePrivateLinkSubnet**. This subnet includes a netmask that permits visibility of the subnet to the control plane and worker nodes of the Azure cluster. Don't delegate this new subnet to any services or configure any service endpoints.
+
+For example, if the virtual network is 10.10.0.0/16 and:
+
+ - Existing Azure Red Hat OpenShift control plane subnet = 10.10.0.0/24
+ - Existing Azure Red Hat OpenShift worker subnet = 10.10.1.0/24
+ - New AzurePrivateLinkSubnet = 10.10.2.0/24
+
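+As a sketch, the new subnet can also be created with the Azure CLI; the resource group and VNet names below are placeholders for your own values:
+
+```azurecli
+# Create the AzurePrivateLinkSubnet inside the existing cluster virtual network.
+# Don't delegate it to any service or add service endpoints.
+az network vnet subnet create \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --name AzurePrivateLinkSubnet \
+  --address-prefixes 10.10.2.0/24
+```
+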
+ Create a new Private Link at [Azure Private Link service](https://portal.azure.com/#create/Microsoft.PrivateLinkservice), as explained in the following steps:
+
+1. On the **Basics** tab, configure the following options:
+ - **Project Details**
+ * Select your Azure subscription.
+ * Select the resource group in which your Azure Red Hat OpenShift cluster was deployed.
+ - **Instance Details**
+ - Enter a **Name** for your Azure Private Link service, as in the following example: *example-com-private-link*.
+ - Select a **Region** for your Private Link.
+
+2. On the **Outbound Settings** tab:
+ - Set the **Load Balancer** to the **-internal** load balancer of the cluster for which you're enabling external access. The choices are populated in the drop-down list.
+ - Set the **Load Balancer frontend IP address** to the IP address of the Azure Red Hat OpenShift ingress controller, which typically ends in **.254**. If you're unsure, use the following command.
+
+ ```azurecli
+ az aro show -n <cluster-name> -g <resource-group> -o tsv --query ingressProfiles[].ip
+ ```
+
+ - The **Source NAT subnet** should be the **AzurePrivateLinkSubnet**, which you created.
+ - No items should be changed in **Outbound Settings**.
+
+3. On the **Access Security** tab, no changes are required.
+
+ - At the **Who can request access to your service?** prompt, select **Anyone with your alias**.
+ - Don't add any subscriptions for auto-approval.
+
+4. On the **Tags** tab, select **Review + create**.
+
+5. Select **Create** to create the Azure Private Link service, and then wait for the process to complete.
+
+6. When your deployment is complete, select **Go to resource group** under **Next steps**.
+
+In the Azure portal, open the Azure Private Link service that was deployed. Note the **Alias** that was generated for the Azure Private Link service; it will be used later.
+
+## Register domain in Azure DNS
+
+This section explains how to register a domain in Azure DNS.
+
+1. Create a global [Azure DNS](https://portal.azure.com/#create/Microsoft.DnsZone) zone for example.com.
+
+2. Create a global [Azure DNS](https://portal.azure.com/#create/Microsoft.DnsZone) zone for apps.example.com.
+
+3. Note the four nameservers that are present in Azure DNS for apps.example.com.
+
+4. Create a new **NS** record set in the example.com zone that points to **apps**, and specify the four nameservers that were present when the **apps** zone was created.
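+
+The zone creation and delegation steps above can be sketched with the Azure CLI. The resource group name is a placeholder, and the `<nameserver>` value should be repeated for each of the four nameservers returned by the `show` command:
+
+```azurecli
+# Create the parent and child public DNS zones
+az network dns zone create --resource-group <resource-group> --name example.com
+az network dns zone create --resource-group <resource-group> --name apps.example.com
+
+# List the four nameservers assigned to the child zone
+az network dns zone show --resource-group <resource-group> --name apps.example.com \
+  --query nameServers --output tsv
+
+# Delegate the child zone: create an NS record set named "apps" in the parent
+# zone, then add one record per nameserver from the previous command
+az network dns record-set ns create --resource-group <resource-group> --zone-name example.com --name apps
+az network dns record-set ns add-record --resource-group <resource-group> --zone-name example.com \
+  --record-set-name apps --nsdname <nameserver>
+```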
+
+## Create a New Azure Front Door Premium service
+
+To create a new Azure Front Door Premium service:
+
+1. On [Microsoft Azure (PREVIEW) Compare offerings](https://ms.portal.azure.com/#create/Microsoft.AFDX), select **Azure Front Door**, and then select **Continue to create a Front Door**.
+
+2. On the **Create a front door profile** page in the **Subscription** > **Resource group**, select the resource group in which your Azure Red Hat OpenShift cluster was deployed to house your Azure Front Door Premium (PREVIEW) resource.
+
+3. Name your Azure Front Door Premium service appropriately. For example, in the **Name** field, enter the following name:
+
+ `example-com-frontdoor`
+
+4. Select the **Premium** tier. The Premium tier is the only choice that supports Azure Private Link.
+
+5. For **Endpoint name**, choose an endpoint name that is appropriate for Azure Front Door.
+
+ For each application deployed, a CNAME will be created in the Azure DNS to point to this hostname. Therefore, it's important to choose a name that is agnostic to applications. For security, the name shouldn't suggest the applications or architecture that you've deployed, such as **example01**.
+
+ The name you choose will be prepended to the **.z01.azurefd.net** domain.
+
+6. For **Origin type**, select **Custom**.
+
+7. For **Origin Host Name**, enter the following placeholder:
+
+ `changeme.com`
+
+ This placeholder will be deleted later.
+
+ At this stage, don't enable the Azure Private Link service, caching, or the Web Application Firewall (WAF) policy.
+
+8. Select **Review + create** to create the Azure Front Door Premium (PREVIEW) resource, and then wait for the process to complete.
+
+## Initial configuration of Azure Front Door Premium
+
+To configure Azure Front Door Premium:
+
+1. In the Azure portal, enter the Azure Front Door Premium service that was deployed.
+
+2. In the **Endpoint Manager** window, modify the endpoint by selecting **Edit endpoint**.
+
+3. Delete the default route, which was created as **default-route**.
+
+4. Close the **Endpoint Manager** window.
+
+5. In the **Origin Groups** window, delete the default origin group that was named **default-origin-group**.
+
+## Exposing an application route in Azure Red Hat OpenShift
+
+Azure Red Hat OpenShift must be configured to serve the application with the same hostname that Azure Front Door will be exposing externally (\*.apps.example.com). In our example, we'll expose the Reservations application with the following hostname:
+
+`reservations.apps.example.com`
+
+Also, create a secure route in Azure Red Hat OpenShift that exposes the hostname.
+
+## Configure Azure DNS
+
+To configure the Azure DNS:
+
+1. Enter the public **apps** DNS zone previously created.
+
+2. Create a new CNAME record set named **reservation**. This CNAME record set is an alias for our example Azure Front Door endpoint:
+
+ `example01.z01.azurefd.net`
+
+## Configure Azure Front Door Premium
+
+The following steps explain how to configure Azure Front Door Premium.
+
+1. In the Azure portal, enter the Azure Front Door Premium service you created previously:
+
+ `example-com-frontdoor`
+
+ **In the Domains window**:
+
+ 1. Because all DNS servers are hosted on Azure, leave **DNS Management** set to **Azure managed DNS**.
+
+ 2. Select the example domain:
+
+    `apps.example.com`
+
+ 3. Select the CNAME in our example:
+
+    `reservations.apps.example.com`
+
+ 4. Use the default values for **HTTPS** and **Minimum TLS version**.
+
+ 5. Select **Add**.
+
+ 6. When the **Validation state** changes to **Pending**, select **Pending**.
+
+ 7. To authenticate ownership of the DNS zone, for **DNS record status**, select **Add**.
+
+ 8. Select **Close**.
+
+ 9. Continue to select **Refresh** until the **Validation state** of the domain changes to **Approved** and the **Endpoint association** changes to **Unassociated**.
+
+**In the Origin Groups window**:
+
+1. Select **Add**.
+
+2. Give your **Origin Group** an appropriate name, such as **Reservations-App**.
+
+3. Select **Add an origin**.
+
+4. Enter the name of the origin, such as **ARO-Cluster-1**.
+
+5. Choose an **Origin type** of **Custom**.
+
+6. Enter the fully qualified domain name (FQDN) hostname that was exposed in your Azure Red Hat OpenShift cluster, such as:
+
+ `reservations.apps.example.com`
+
+7. Enable the **Private Link** service.
+
+8. Enter the **Alias** that was obtained from the Azure Private Link service.
+
+9. Select **Add** to return to the origin group creation window.
+
+10. Select **Add** to add the origin group and return to the Azure portal.
+
+## Grant approval in Azure Private Link
+
+To grant approval to the **example-com-private-link**, which is the **Azure Private Link** service you created previously, complete the following steps.
+
+1. On the **Private endpoint connections** tab, select the checkbox for the pending connection that now exists from Azure Front Door (described as **do from AFD**).
+
+2. Select **Approve**, and then select **Yes** to verify the approval.
+
+## Complete Azure Front Door Premium configuration
+
+The following steps explain how to complete the configuration of Azure Front Door Premium.
+
+1. In the Azure portal, enter the Azure Front Door Premium service you previously created:
+
+ `example-com-frontdoor`
+
+2. In the **Endpoint Manager** window, select **Edit endpoint** to modify the endpoint.
+
+3. Select **+Add** under **Routes**.
+
+4. Give your route an appropriate name, such as **Reservations-App-Route-Config**.
+
+5. Under **Domains**, then under **Available validated domains**, select the fully qualified domain name, for example:
+
+ `reservations.apps.example.com`
+
+6. To redirect HTTP traffic to use HTTPS, leave the **Redirect** checkbox selected.
+
+7. Under **Origin group**, select **Reservations-App**, the origin group you previously created.
+
+8. You can enable caching, if appropriate.
+
+9. Select **Add** to create the route.
+After the route is configured, the **Endpoint manager** populates the **Domains** and **Origin groups** panes with the other elements created for this application.
+
+Because Azure Front Door is a global service, the application can take up to 30 minutes to deploy. During this time, you may choose to create a WAF for your application. When your application goes live, it can be accessed using the URL used in this example:
+
+`https://reservations.apps.example.com`
+
+## Next steps
+
+Create an Azure Web Application Firewall policy on Azure Front Door using the Azure portal:
+> [!div class="nextstepaction"]
+> [Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal](../web-application-firewall/afds/waf-front-door-create-portal.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-supported-versions.md
The current minor release is 10.16. Refer to the [PostgreSQL documentation](http
The current minor release is 9.6.21. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/release-9-6-21.html) to learn more about improvements and fixes in this minor release. ## PostgreSQL version 9.5 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired Postgres version 9.5 as of February 11, 2021. Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you are running this major version, please upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
+Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you are running this major version, please upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
## Managing upgrades The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Previously updated : 01/17/2022 Last updated : 03/28/2022
When scanning an Oracle source, Azure Purview supports:
When setting up scan, you can choose to scan an entire Oracle server, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+Currently, the Oracle service name is not captured in the metadata or hierarchy.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
Built-in roles have `AssignableScopes` set to the root scope (`"/"`). The root s
> | Two subscriptions | `"/subscriptions/{subscriptionId1}", "/subscriptions/{subscriptionId2}"` | > | Network resource group | `"/subscriptions/{subscriptionId1}/resourceGroups/Network"` | > | One management group | `"/providers/Microsoft.Management/managementGroups/{groupId1}"` |
-> | Management group and a subscription | `"/providers/Microsoft.Management/managementGroups/{groupId1}", /subscriptions/{subscriptionId1}",` |
+> | Management group and a subscription | `"/providers/Microsoft.Management/managementGroups/{groupId1}", "/subscriptions/{subscriptionId1}",` |
> | All scopes (applies only to built-in roles) | `"/"` | For information about `AssignableScopes` for custom roles, see [Azure custom roles](custom-roles.md).
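As an illustrative sketch (the name, description, action, and IDs below are placeholders), a custom role definition that includes both a management group and a subscription in `AssignableScopes` might look like:

```json
{
  "Name": "Example Custom Role",
  "IsCustom": true,
  "Description": "Illustrative custom role definition.",
  "Actions": [ "Microsoft.Resources/subscriptions/resourceGroups/read" ],
  "NotActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementGroups/{groupId1}",
    "/subscriptions/{subscriptionId1}"
  ]
}
```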
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The currently supported list of vendors and products used in the [EventVendor](#
| Vendor | Products | | | -- |
-| Apache | Squid Proxy |
| AWS | - CloudTrail<br> - VPC | | Cisco | - ASA<br> - Umbrella | | Corelight | Zeek |
The currently supported list of vendors and products used in the [EventVendor](#
| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - Azure NSG flows<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br> | Okta | Okta | | Palo Alto | - PanOS<br> - CDL<br> |
+| Squid | Squid Proxy |
| Vectra AI | Vectra Stream | | Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy | If you are developing a parser for a vendor or a product that isn't listed here, contact the [Microsoft Sentinel](mailto:azuresentinel@microsoft.com) team to allocate new allowed vendor and product designators. + ## Next steps For more information, see:
service-bus-messaging Service Bus Filter Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-filter-examples.md
Title: Set subscriptions filters in Azure Service Bus | Microsoft Docs description: This article provides examples for defining filters and actions on Azure Service Bus topic subscriptions. Previously updated : 09/07/2021 Last updated : 03/25/2022 ms.devlang: csharp # Set subscription filters (Azure Service Bus)
-This article provides a few examples on setting filters on Service Bus topic subscriptions. For conceptual information about filters, see [Filters](topic-filters.md).
+This article provides a few examples on setting filters on subscriptions for Service Bus topics. For conceptual information about filters, see [Filters](topic-filters.md).
## Filter on system properties To refer to a system property in a filter, use the following format: `sys.<system-property-name>`. ```csharp
-sys.label LIKE '%bus%'`
+sys.label LIKE '%bus%'
sys.messageid = 'xxxx' sys.correlationid like 'abc-%' ```
+> [!NOTE]
+> - For a list of system properties, see [Messages, payloads, and serialization](service-bus-messages-payloads.md).
+> - Use system property names from [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties) in your filters even when you use [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) from the new [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus) namespace to send and receive messages. The `Subject` from [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) maps to `Label` in [Microsoft.Azure.ServiceBus.Message](/dotnet/api/microsoft.azure.servicebus.message#properties).
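As a local illustration only (this is not the Service Bus filter engine itself), a SQL `LIKE` pattern such as `'%bus%'` can be approximated in Python by translating the `%` and `_` wildcards into a regular expression:

```python
import re

def sql_like_to_regex(pattern: str) -> "re.Pattern":
    """Translate a SQL LIKE pattern (% = any run of chars, _ = one char) to a regex."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

label_filter = sql_like_to_regex("%bus%")
print(bool(label_filter.match("service-bus-topic")))  # True
print(bool(label_filter.match("event-hubs")))         # False
```

The same translation shows why `sys.correlationid like 'abc-%'` matches any value that starts with `abc-`.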
+ ## Filter on message properties Here are the examples of using message properties in a filter. You can access message properties using `user.property-name` or just `property-name`.
service-fabric Service Fabric Connect And Communicate With Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-and-communicate-with-services.md
In Service Fabric, a service runs somewhere in a Service Fabric cluster, typical
A Service Fabric application is generally composed of many different services, where each service performs a specialized task. These services may communicate with each other to form a complete function, such as rendering different parts of a web application. There are also client applications that connect to and communicate with services. This document discusses how to set up communication with and between your services in Service Fabric.
+For a training video that also discusses service communication, see [this page](/shows/building-microservices-applications-on-azure-service-fabric/service-network-communication).
## Bring your own protocol Service Fabric helps manage the lifecycle of your services but it does not make decisions about what your services do. This includes communication. When your service is opened by Service Fabric, that's your service's opportunity to set up an endpoint for incoming requests, using whatever protocol or communication stack you want. Your service will listen on a normal **IP:port** address using any addressing scheme, such as a URI. Multiple service instances or replicas may share a host process, in which case they will either need to use different ports or use a port-sharing mechanism, such as the http.sys kernel driver in Windows. In either case, each service instance or replica in a host process must be uniquely addressable.
service-fabric Service Fabric Reliable Services Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-introduction.md
Reliable Services are different from services you may have written before, becau
* **Scalability** - Services are decoupled from specific hardware, and they can grow or shrink as necessary through the addition or removal of hardware or other resources. Services are easily partitioned (especially in the stateful case) to ensure that the service can scale and handle partial failures. Services can be created and deleted dynamically via code, enabling more instances to be spun up as necessary, for example in response to customer requests. Finally, Service Fabric encourages services to be lightweight. Service Fabric allows thousands of services to be provisioned within a single process, rather than requiring or dedicating entire OS instances or processes to a single instance of a service. * **Consistency** - Any information stored in a Reliable Service can be guaranteed to be consistent. This is true even across multiple Reliable Collections within a service. Changes across collections within a service can be made in a transactionally atomic manner.
+For a training video about the Service Fabric Reliable Services programming model, and how this .NET programming model lets your application integrate more closely with the Service Fabric runtime, see [this page](/shows/building-microservices-applications-on-azure-service-fabric/what-are-reliable-services).
## Service lifecycle Whether your service is stateful or stateless, Reliable Services provide a simple lifecycle that lets you quickly plug in your code and get started. Getting a new service up and running requires you to implement two methods:
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
Azure Spring Cloud intelligently schedules your applications on the underlying K
### In which regions is Azure Spring Cloud available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2(Mooncake), and China North 2(Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2(Mooncake), and China North 2(Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### Is any customer data stored outside of the specified region?
static-web-apps Publish Vuepress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-vuepress.md
The following steps show you how to create a new static site app and deploy it t
1. Select the **Review + Create** button to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Action for deployment.
+1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions workflow for deployment.
1. Once the deployment completes click, **Go to resource**.
-1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Action to complete.
+1. On the resource screen, click the _URL_ link to open your deployed application. You may need to wait a minute or two for the GitHub Actions workflow to complete.
:::image type="content" source="./media/publish-vuepress/deployed-app.png" alt-text="Deployed application":::
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
When you connect to Blob Storage by using an SFTP client, you might be prompted
- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md) - [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Bl
- When a firewall is configured, connections from non-allowed IPs are not rejected as expected. However, if there is a successful connection for an authenticated user then all data plane operations will be rejected.
+- There's a 4 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+ ## Security - Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys may rotate frequently.
This article describes limitations and known issues of SFTP support for Azure Bl
## Performance -- Upload performance with default settings for some clients can be slow. Some of this is expected because SFTP is a chatty protocol and sends small message requests. Increasing the buffer size and using multiple concurrent connections can significantly improve speed. -
- - For WinSCP, you can use a maximum of 9 concurrent connections to upload multiple files.
-
- - For OpenSSH on Windows, you can increase buffer size to 100000: sftp -B 100000 testaccount.user1@testaccount.blob.core.windows.net
-
- - For OpenSSH on Linux, you can increase buffer size to 262000: sftp -B 262000 -R 32 testaccount.user1@testaccount.blob.core.windows.net
--- There's a 4 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).
## Other
This article describes limitations and known issues of SFTP support for Azure Bl
- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md) - [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
+- [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md)
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
+
+ Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview) | Microsoft Docs
+description: Optimize the performance of your SSH File Transfer Protocol (SFTP) requests by using the recommendations in this article.
++++ Last updated : 03/28/2022++++++
+# SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview)
+
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This article contains recommendations that will help you to optimize the performance of your storage requests. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
+
+## Use concurrent connections to increase throughput
+
+Azure Blob Storage scales linearly until it reaches the maximum storage account egress and ingress limit. Therefore, your applications can achieve higher throughput by using more client connections. To view storage account egress and ingress limits, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md).
+
+For WinSCP, you can use a maximum of 9 concurrent connections to upload multiple files. Other common SFTP clients such as FileZilla have similar options.
+
+> [!IMPORTANT]
+> Concurrent uploads will only improve performance when uploading multiple files at the same time. Using multiple connections to upload a single file is not supported.
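As a rough back-of-the-envelope model (an illustration, not a measurement), uploading many files over `N` connections divides the wall-clock time by about `N`, until the account's ingress limit becomes the bottleneck:

```python
import math

def estimated_upload_seconds(num_files: int, seconds_per_file: float, connections: int) -> float:
    """Naive model: files are uploaded in rounds of `connections` parallel transfers."""
    rounds = math.ceil(num_files / connections)
    return rounds * seconds_per_file

# Hypothetical numbers: 90 files at ~2 s each.
print(estimated_upload_seconds(90, 2.0, 1))  # 180.0 (sequential)
print(estimated_upload_seconds(90, 2.0, 9))  # 20.0 (9 WinSCP connections)
```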
+
+## Use premium block blob storage accounts
+
+An [Azure premium block blob storage account](../common/storage-account-create.md) offers consistent low latency and high transaction rates. The premium block blob storage account can reach maximum bandwidth with fewer threads and clients. For example, with a single client, a premium block blob storage account can achieve **2.3x** bandwidth compared to the same setup used with a standard performance general purpose v2 storage account.
+
+## Reduce the impact of network latency
+
+Network latency has a large impact on SFTP performance due to its reliance on small messages. By default, most clients use a message size of around 32 KB.
+
+- Increase default message size to achieve better performance
+
+ - For OpenSSH on Windows, you can increase the buffer size to 100000 with the `-B` option: `sftp -B 100000 testaccount.user1@testaccount.blob.core.windows.net`
+
+ - For OpenSSH on Linux, you can increase buffer size to 262000 with the `-B` option: `sftp -B 262000 -R 32 testaccount.user1@testaccount.blob.core.windows.net`
+
+- Make storage requests from a client located in the same region as the storage account
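The effect of latency on a chatty protocol can be approximated with a simple model: with a given number of requests in flight, throughput is bounded by message size times in-flight requests divided by the round-trip time. The numbers below are illustrative only:

```python
def max_throughput_bytes_per_sec(message_bytes: int, rtt_seconds: float, outstanding: int = 1) -> float:
    """Upper bound with `outstanding` in-flight requests of `message_bytes` each."""
    return outstanding * message_bytes / rtt_seconds

# 32 KB messages over a hypothetical 50 ms round trip: ~640 KB/s with one request in flight.
print(max_throughput_bytes_per_sec(32_768, 0.05))   # 655360.0
# Raising the buffer to 262000 bytes (-B 262000) lifts the bound ~8x.
print(max_throughput_bytes_per_sec(262_000, 0.05))  # 5240000.0
```

This is why both a larger `-B` buffer and a lower-latency (same-region) client raise the achievable transfer rate.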
+
+## See also
+
+- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)
+- [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
See the documentation of your SFTP client for guidance about how to connect and
- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) - [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
+- [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Transaction and storage costs are based on factors such as storage account type
- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md) - [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) - [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
+- [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md)
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
+
+ Title: Append data to a blob with .NET - Azure Storage
+description: Learn how to append data to a blob in Azure Storage by using the .NET client library.
++++ Last updated : 03/28/2022+++
+ms.devlang: csharp, python
+++
+# Append data to a blob in Azure Storage using the .NET client library
+
+You can append data to a blob by creating an append blob. Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines.
+
+> [!NOTE]
+> The examples in this article assume that you've created a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with .NET](storage-blob-container-create.md).
+
+## Create an append blob and append data
+
+Use these methods to create an append blob.
+
+- [Create](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.create)
+- [CreateAsync](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.createasync)
+- [CreateIfNotExists](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.createifnotexists)
+- [CreateIfNotExistsAsync](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.createifnotexistsasync)
+
+Use either of these methods to append data to that append blob:
+
+- [AppendBlock](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblock)
+- [AppendBlockAsync](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblockasync)
+
+The maximum size in bytes of each append operation is defined by the [AppendBlobMaxAppendBlockBytes](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblobmaxappendblockbytes) property. The following example creates an append blob and appends log data to that blob. This example uses the [AppendBlobMaxAppendBlockBytes](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblobmaxappendblockbytes) property to determine whether multiple append operations are required.
+
+```csharp
+public static async Task AppendToBlobAsync(
+    BlobContainerClient containerClient, MemoryStream logEntryStream, string logBlobName)
+{
+    AppendBlobClient appendBlobClient = containerClient.GetAppendBlobClient(logBlobName);
+
+    await appendBlobClient.CreateIfNotExistsAsync();
+
+    int maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;
+
+    if (logEntryStream.Length <= maxBlockSize)
+    {
+        await appendBlobClient.AppendBlockAsync(logEntryStream);
+    }
+    else
+    {
+        long bytesLeft = logEntryStream.Length - logEntryStream.Position;
+
+        while (bytesLeft > 0)
+        {
+            // Read the next chunk, up to the maximum append block size.
+            int blockSize = (int)Math.Min(bytesLeft, maxBlockSize);
+            byte[] buffer = new byte[blockSize];
+            await logEntryStream.ReadAsync(buffer, 0, blockSize);
+
+            await appendBlobClient.AppendBlockAsync(new MemoryStream(buffer));
+
+            bytesLeft = logEntryStream.Length - logEntryStream.Position;
+        }
+    }
+}
+```
+
+## See also
+
+- [Understanding block blobs, append blobs, and page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs)
+- [OpenWrite](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.openwrite) / [OpenWriteAsync](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.openwriteasync)
+- [Append Block](/rest/api/storageservices/append-block) (REST API)
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
This section contains the following examples:
Premium block blob storage can help you *hydrate* or bring up your environment quickly. In industries such as banking, certain regulatory requirements might require companies to regularly tear down their environments, and then bring them back up from scratch. The data used to hydrate their environment must load quickly.
-Some of our partners store a copy of their MongoDB instance each week to a premium block blob storage account. The system is then torn down. To get the system back online quickly again, the latest copy of the MangoDB instance is read and loaded. For audit purposes, previous copies are maintained in cloud storage for a period of time.
+Some of our partners store a copy of their MongoDB instance each week to a premium block blob storage account. The system is then torn down. To get the system back online quickly again, the latest copy of the MongoDB instance is read and loaded. For audit purposes, previous copies are maintained in cloud storage for a period of time.
### Interactive editing applications
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
Title: Create or delete a blob container with .NET - Azure Storage
-description: Learn how to create or delete a blob container in your Azure Storage account using the .NET client library.
+ Title: Create a blob container with .NET - Azure Storage
+description: Learn how to create a blob container in your Azure Storage account using the .NET client library.
-+ Previously updated : 02/04/2020- Last updated : 03/28/2022+ ms.devlang: csharp
-# Create or delete a container in Azure Storage with .NET
+# Create a container in Azure Storage with .NET
-Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. This article shows how to create and delete containers with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
+Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. This article shows how to create containers with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
## Name a container
The URI for a container is in this format:

`https://myaccount.blob.core.windows.net/mycontainer`
To create a container, call one of the following methods:
-# [.NET v12 SDK](#tab/dotnet)
- - [CreateBlobContainer](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainer) - [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) These methods throw an exception if a container with the same name already exists.
-# [.NET v11 SDK](#tab/dotnetv11)
--- [Create](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.create)-- [CreateAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.createasync)-- [CreateIfNotExists](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.createifnotexists)-- [CreateIfNotExistsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.createifnotexistsasync)-
-The **Create** and **CreateAsync** methods throw an exception if a container with the same name already exists.
-
-The **CreateIfNotExists** and **CreateIfNotExistsAsync** methods return a Boolean value indicating whether the container was created. If a container with the same name already exists, these methods return **False** to indicate a new container wasn't created.
--- Containers are created immediately beneath the storage account. It's not possible to nest one container beneath another. The following example creates a container asynchronously:
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Containers.cs" id="CreateSampleContainerAsync":::
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-private static async Task<CloudBlobContainer> CreateSampleContainerAsync(CloudBlobClient blobClient)
-{
- // Name the sample container based on new GUID, to ensure uniqueness.
- // The container name must be lowercase.
- string containerName = "container-" + Guid.NewGuid();
-
- // Get a reference to a sample container.
- CloudBlobContainer container = blobClient.GetContainerReference(containerName);
-
- try
- {
- // Create the container if it does not already exist.
- bool result = await container.CreateIfNotExistsAsync();
- if (result == true)
- {
- Console.WriteLine("Created container {0}", container.Name);
- }
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- }
-
- return container;
-}
-```
--- ## Create the root container A root container serves as a default container for your storage account. Each storage account may have one root container, which must be named *$root*. The root container must be explicitly created or deleted.
You can reference a blob stored in the root container without including the root
The following example creates the root container synchronously:
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Containers.cs" id="CreateRootContainer":::
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-private static void CreateRootContainer(CloudBlobClient blobClient)
-{
- try
- {
- // Create the root container if it does not already exist.
- CloudBlobContainer container = blobClient.GetContainerReference("$root");
- if (container.CreateIfNotExists())
- {
- Console.WriteLine("Created root container.");
- }
- else
- {
- Console.WriteLine("Root container already exists.");
- }
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- }
-}
-```
---
-## Delete a container
-
-To delete a container in .NET, use one of the following methods:
-
-# [.NET v12 SDK](#tab/dotnet)
--- [Delete](/dotnet/api/azure.storage.blobs.blobcontainerclient.delete)-- [DeleteAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteasync)-- [DeleteIfExists](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteifexists)-- [DeleteIfExistsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteifexistsasync)-
-# [.NET v11 SDK](#tab/dotnetv11)
--- [Delete](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.delete)-- [DeleteAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.deleteasync)-- [DeleteIfExists](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.deleteifexists)-- [DeleteIfExistsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.deleteifexistsasync)--
-The **Delete** and **DeleteAsync** methods throw an exception if the container doesn't exist.
-
-The **DeleteIfExists** and **DeleteIfExistsAsync** methods return a Boolean value indicating whether the container was deleted. If the specified container doesn't exist, then these methods return **False** to indicate that the container wasn't deleted.
-
-After you delete a container, you can't create a container with the same name for at *least* 30 seconds. Attempting to create a container with the same name will fail with HTTP error code 409 (Conflict). Any other operations on the container or the blobs it contains will fail with HTTP error code 404 (Not Found).
-
-The following example deletes the specified container, and handles the exception if the container doesn't exist:
-
-# [.NET v12 SDK](#tab/dotnet)
--
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-private static async Task DeleteSampleContainerAsync(CloudBlobClient blobClient, string containerName)
-{
- CloudBlobContainer container = blobClient.GetContainerReference(containerName);
-
- try
- {
- // Delete the specified container and handle the exception.
- await container.DeleteAsync();
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
---
-The following example shows how to delete all of the containers that start with a specified prefix.
-
-# [.NET v12 SDK](#tab/dotnet)
--
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-private static async Task DeleteContainersWithPrefixAsync(CloudBlobClient blobClient, string prefix)
-{
- Console.WriteLine("Delete all containers beginning with the specified prefix");
- try
- {
- foreach (var container in blobClient.ListContainers(prefix))
- {
- Console.WriteLine("\tContainer:" + container.Name);
- await container.DeleteAsync();
- }
-
- Console.WriteLine();
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
-}
-```
---- ## See also
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
- [Create Container operation](/rest/api/storageservices/create-container) - [Delete Container operation](/rest/api/storageservices/delete-container)
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
+
+ Title: Delete and restore a blob container with .NET - Azure Storage
+description: Learn how to delete and restore a blob container in your Azure Storage account using the .NET client library.
+++++ Last updated : 03/28/2022++
+ms.devlang: csharp
+++
+# Delete and restore a container in Azure Storage with .NET
+
+This article shows how to delete containers with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). If you've enabled container soft delete, you can restore deleted containers.
+
+## Delete a container
+
+To delete a container in .NET, use one of the following methods:
+
+- [Delete](/dotnet/api/azure.storage.blobs.blobcontainerclient.delete)
+- [DeleteAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteasync)
+- [DeleteIfExists](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteifexists)
+- [DeleteIfExistsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteifexistsasync)
+
+The **Delete** and **DeleteAsync** methods throw an exception if the container doesn't exist.
+
+The **DeleteIfExists** and **DeleteIfExistsAsync** methods return a Boolean value indicating whether the container was deleted. If the specified container doesn't exist, then these methods return **False** to indicate that the container wasn't deleted.
+
+After you delete a container, you can't create a container with the same name for at least 30 seconds. Attempting to create a container with the same name fails with HTTP error code 409 (Conflict). Any other operations on the container or the blobs it contains fail with HTTP error code 404 (Not Found).
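Because a container name stays reserved briefly after deletion, code that deletes and immediately re-creates a container should be prepared to retry on the 409 (Conflict) error. The following sketch is a hypothetical helper, not part of this article's sample project; the names and retry interval are illustrative:

```csharp
private static async Task<BlobContainerClient> RecreateContainerAsync(
    BlobServiceClient client, string containerName)
{
    BlobContainerClient container = client.GetBlobContainerClient(containerName);

    // Retry while the service reports that the container is still being deleted.
    for (int attempt = 0; attempt < 10; attempt++)
    {
        try
        {
            await container.CreateAsync();
            return container;
        }
        catch (RequestFailedException e)
            when (e.Status == 409 && e.ErrorCode == "ContainerBeingDeleted")
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }

    throw new TimeoutException("Container was not re-created within the retry window.");
}
```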
+
+The following example deletes the specified container, and handles the exception if the container doesn't exist:
++
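A minimal sketch of this operation, assuming an authenticated [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) (the error handling mirrors the restore example later in this article), might look like this:

```csharp
private static async Task DeleteSampleContainerAsync(BlobServiceClient client, string containerName)
{
    BlobContainerClient container = client.GetBlobContainerClient(containerName);

    try
    {
        // Delete the specified container; DeleteAsync throws if it doesn't exist.
        await container.DeleteAsync();
    }
    catch (RequestFailedException e)
    {
        Console.WriteLine("HTTP error code {0}: {1}", e.Status, e.ErrorCode);
        Console.WriteLine(e.Message);
    }
}
```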
+The following example shows how to delete all of the containers that start with a specified prefix.
++
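A minimal sketch of deleting by prefix, again assuming an authenticated [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) (names are illustrative):

```csharp
private static async Task DeleteContainersWithPrefixAsync(BlobServiceClient client, string prefix)
{
    try
    {
        // List only containers whose names begin with the prefix, then delete each one.
        await foreach (BlobContainerItem container in client.GetBlobContainersAsync(prefix: prefix))
        {
            Console.WriteLine("Deleting container: {0}", container.Name);
            await client.DeleteBlobContainerAsync(container.Name);
        }
    }
    catch (RequestFailedException e)
    {
        Console.WriteLine(e.Message);
        throw;
    }
}
```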
+## Restore a deleted container
+
+When container soft delete is enabled for a storage account, a deleted container and its contents can be recovered within a retention period that you specify. You can restore a soft-deleted container by calling either of the following methods of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class:
+
+- [UndeleteBlobContainer](/dotnet/api/azure.storage.blobs.blobserviceclient.undeleteblobcontainer)
+- [UndeleteBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.undeleteblobcontainerasync)
+
+The following example finds a deleted container, gets the version ID of that deleted container, and then passes that ID into the [UndeleteBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.undeleteblobcontainerasync) method to restore the container.
+
+```csharp
+public static async Task RestoreContainer(BlobServiceClient client, string containerName)
+{
+ await foreach (BlobContainerItem item in client.GetBlobContainersAsync
+ (BlobContainerTraits.None, BlobContainerStates.Deleted))
+ {
+ if (item.Name == containerName && (item.IsDeleted == true))
+ {
+ try
+ {
+ await client.UndeleteBlobContainerAsync(containerName, item.VersionId);
+ }
+ catch (RequestFailedException e)
+ {
+ Console.WriteLine("HTTP error code {0}: {1}",
+ e.Status, e.ErrorCode);
+ Console.WriteLine(e.Message);
+ }
+ }
+ }
+}
+```
+
+## See also
+
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
+- [Soft delete for containers](soft-delete-container-overview.md)
+- [Enable and manage soft delete for containers](soft-delete-container-enable.md)
+- [Restore Container](/rest/api/storageservices/restore-container)
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
+
+ Title: Create and manage blob or container leases with .NET - Azure Storage
+description: Learn how to manage a lock on a blob or container in your Azure Storage account using the .NET client library.
+++++ Last updated : 03/28/2022++
+ms.devlang: csharp
+++
+# Create and manage blob or container leases with .NET
+
+A lease establishes and manages a lock on a container or on the blobs in a container. You can use the .NET client library to acquire, renew, release, and break leases. To learn more about leasing blobs or containers, see [Lease Container](/rest/api/storageservices/lease-container) or [Lease Blob](/rest/api/storageservices/lease-blob).
+
+## Acquire a lease
+
+When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the blob or container. To acquire a lease, create an instance of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, and then use either of these methods:
+
+- [Acquire](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquire)
+- [AcquireAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquireasync)
+
+The following example acquires a 30-second lease for a container.
+
+```csharp
+public static async Task AcquireLease(BlobContainerClient containerClient)
+{
+ BlobLeaseClient blobLeaseClient = containerClient.GetBlobLeaseClient();
+
+ TimeSpan ts = new TimeSpan(0, 0, 0, 30);
+ Response<BlobLease> blobLeaseResponse = await blobLeaseClient.AcquireAsync(ts);
+
+ Console.WriteLine("Blob Lease Id:" + blobLeaseResponse.Value.LeaseId);
+ Console.WriteLine("Remaining Lease Time: " + blobLeaseResponse.Value.LeaseTime);
+}
+```
+
+## Renew a lease
+
+If your lease expires, you can renew it. To renew a lease, use either of the following methods of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class:
+
+- [Renew](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renew)
+- [RenewAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renewasync)
+
+Specify the lease ID by passing it to the **GetBlobLeaseClient** method when you create the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance.
+
+The following example renews a lease for a blob.
+
+```csharp
+public static async Task RenewLease(BlobClient blobClient, string leaseID)
+{
+    // Create the lease client with the active lease ID so that the
+    // renewal request is made against the existing lease.
+    BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient(leaseID);
+    await blobLeaseClient.RenewAsync();
+}
+```
+
+## Release a lease
+
+You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can obtain a lease. You can release a lease by using either of these methods of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class:
+
+- [Release](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.release)
+- [ReleaseAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.releaseasync)
+
+The following example releases the lease on a container.
+
+```csharp
+public static async Task ReleaseLease(BlobContainerClient containerClient, string leaseID)
+{
+    // Create the lease client with the active lease ID so that the
+    // release request is made against the existing lease.
+    BlobLeaseClient blobLeaseClient = containerClient.GetBlobLeaseClient(leaseID);
+    await blobLeaseClient.ReleaseAsync();
+}
+```
+
+## Break a lease
+
+When you break a lease, the lease ends, but other clients can't acquire a lease until the lease period expires. You can break a lease by using either of these methods:
+
+- [Break](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.break)
+- [BreakAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.breakasync)
+
+The following example breaks the lease on a blob.
+
+```csharp
+public static async Task BreakLease(BlobClient blobClient)
+{
+ BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient();
+ await blobLeaseClient.BreakAsync();
+}
+```
+
+## See also
+
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
Title: Use .NET to manage properties and metadata for a blob container
description: Learn how to set and retrieve system properties and store custom metadata on blob containers in your Azure Storage account using the .NET client library. -+ Previously updated : 07/01/2020- Last updated : 03/28/2022+ ms.devlang: csharp
Metadata name/value pairs are valid HTTP headers, and so should adhere to all re
## Retrieve container properties
-# [.NET v12 SDK](#tab/dotnet)
- To retrieve container properties, call one of the following methods: - [GetProperties](/dotnet/api/azure.storage.blobs.blobcontainerclient.getproperties)
The following code example fetches a container's system properties and writes so
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadContainerProperties":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-Retrieving property and metadata values for a Blob storage resource is a two-step process. Before you can read these values, you must explicitly fetch them by calling the **FetchAttributes** or **FetchAttributesAsync** method. The exception to this rule is that the **Exists** and **ExistsAsync** methods call the appropriate **FetchAttributes** method under the covers. When you call one of these methods, you do not need to also call **FetchAttributes**.
-
-> [!IMPORTANT]
-> If you find that property or metadata values for a storage resource have not been populated, then check that your code calls the **FetchAttributes** or **FetchAttributesAsync** method.
-
-To retrieve container properties, call one of the following methods:
--- [FetchAttributes](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.fetchattributes)-- [FetchAttributesAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.fetchattributesasync)-
-The following code example fetches a container's system properties and writes some property values to a console window:
-
-```csharp
-private static async Task ReadContainerPropertiesAsync(CloudBlobContainer container)
-{
- try
- {
- // Fetch some container properties and write out their values.
- await container.FetchAttributesAsync();
- Console.WriteLine("Properties for container {0}", container.StorageUri.PrimaryUri);
- Console.WriteLine("Public access level: {0}", container.Properties.PublicAccess);
- Console.WriteLine("Last modified time in UTC: {0}", container.Properties.LastModified);
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
--- ## Set and retrieve metadata
-# [.NET v12 SDK](#tab/dotnet)
- You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, add name-value pairs to an [IDictionary](/dotnet/api/system.collections.idictionary) object, and then call one of the following methods to write the values: - [SetMetadata](/dotnet/api/azure.storage.blobs.blobcontainerclient.setmetadata)
Then, read the values, as shown in the example below.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadContainerMetadata":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, add name-value pairs to the **Metadata** collection on the resource, then call one of the following methods to write the values:
--- [SetMetadata](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setmetadata)-- [SetMetadataAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setmetadataasync)-
-The name of your metadata must conform to the naming conventions for C# identifiers. Metadata names preserve the case with which they were created, but are case-insensitive when set or read. If two or more metadata headers with the same name are submitted for a resource, Blob storage comma-separates and concatenates the two values and return HTTP response code 200 (OK).
-
-The following code example sets metadata on a container. One value is set using the collection's **Add** method. The other value is set using implicit key/value syntax. Both are valid.
-
-```csharp
-public static async Task AddContainerMetadataAsync(CloudBlobContainer container)
-{
- try
- {
- // Add some metadata to the container.
- container.Metadata.Add("docType", "textDocuments");
- container.Metadata["category"] = "guidance";
-
- // Set the container's metadata.
- await container.SetMetadataAsync();
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
-
-To retrieve metadata, call the **FetchAttributes** or **FetchAttributesAsync** method on your blob or container to populate the **Metadata** collection, then read the values, as shown in the example below.
-
-```csharp
-public static async Task ReadContainerMetadataAsync(CloudBlobContainer container)
-{
- try
- {
- // Fetch container attributes in order to populate the container's properties and metadata.
- await container.FetchAttributesAsync();
-
- // Enumerate the container's metadata.
- Console.WriteLine("Container metadata:");
- foreach (var metadataItem in container.Metadata)
- {
- Console.WriteLine("\tKey: {0}", metadataItem.Key);
- Console.WriteLine("\tValue: {0}", metadataItem.Value);
- }
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
---- ## See also
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
- [Get Container Properties operation](/rest/api/storageservices/get-container-properties) - [Set Container Metadata operation](/rest/api/storageservices/set-container-metadata) - [Get Container Metadata operation](/rest/api/storageservices/get-container-metadata)
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
Title: List blob containers with .NET - Azure Storage description: Learn how to list blob containers in your Azure Storage account using the .NET client library. -+ Previously updated : 10/14/2020- Last updated : 03/28/2022+ ms.devlang: csharp
When you list the containers in an Azure Storage account from your code, you can
To list containers in your storage account, call one of the following methods:
-# [.NET v12 SDK](#tab/dotnet)
- - [GetBlobContainers](/dotnet/api/azure.storage.blobs.blobserviceclient.getblobcontainers) - [GetBlobContainersAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.getblobcontainersasync)
-# [.NET v11 SDK](#tab/dotnet11)
--- [ListContainersSegmented](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listcontainerssegmented)-- [ListContainersSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listcontainerssegmentedasync)--- The overloads for these methods provide additional options for managing how containers are returned by the listing operation. These options are described in the following sections. ### Manage how many results are returned
By default, a listing operation returns up to 5000 results at a time. To return
If your storage account contains more than 5000 containers, or if you have specified a page size such that the listing operation returns a subset of containers in the storage account, then Azure Storage returns a *continuation token* with the list of containers. A continuation token is an opaque value that you can use to retrieve the next set of results from Azure Storage.
-In your code, check the value of the continuation token to determine whether it is empty (for .NET v12) or null (for .NET v11 and earlier). When the continuation token is null, then the set of results is complete. If the continuation token is not null, then call the listing method again, passing in the continuation token to retrieve the next set of results, until the continuation token is null.
+In your code, check the value of the continuation token to determine whether it is empty. When the continuation token is empty, then the set of results is complete. If the continuation token is not empty, then call the listing method again, passing in the continuation token to retrieve the next set of results, until the continuation token is empty.
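As a sketch of this pattern with the v12 library (method and variable names are illustrative), the continuation token surfaces through the pages returned by `AsPages`:

```csharp
private static async Task ListContainersByPageAsync(BlobServiceClient client, int pageSize)
{
    await foreach (Page<BlobContainerItem> page in
        client.GetBlobContainersAsync().AsPages(pageSizeHint: pageSize))
    {
        foreach (BlobContainerItem container in page.Values)
        {
            Console.WriteLine("Container name: {0}", container.Name);
        }

        // An empty ContinuationToken means the listing is complete; a non-empty
        // token can be saved and passed back to AsPages to resume the listing later.
        Console.WriteLine("Continuation token: \"{0}\"", page.ContinuationToken);
    }
}
```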
### Filter results with a prefix
To filter the list of containers, specify a string for the `prefix` parameter. T
### Return metadata
-To return container metadata with the results, specify the **Metadata** value for the [BlobContainerTraits](/dotnet/api/azure.storage.blobs.models.blobcontainertraits) enum (for .NET v12) or [ContainerListingDetails](/dotnet/api/microsoft.azure.storage.blob.containerlistingdetails) enum (for .NET v11 and earlier). Azure Storage includes metadata with each container returned, so you do not need to also fetch the container metadata.
+To return container metadata with the results, specify the **Metadata** value for the [BlobContainerTraits](/dotnet/api/azure.storage.blobs.models.blobcontainertraits) enum. Azure Storage includes metadata with each container returned, so you do not need to also fetch the container metadata.
## Example: List containers The following example asynchronously lists the containers in a storage account that begin with a specified prefix. The example lists containers that begin with the specified prefix and returns the specified number of results per call to the listing operation. It then uses the continuation token to get the next segment of results. The example also returns container metadata with the results.
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Containers.cs" id="Snippet_ListContainers":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-private static async Task ListContainersWithPrefixAsync(CloudBlobClient blobClient,
- string prefix,
- int? segmentSize)
-{
- Console.WriteLine("List containers beginning with prefix {0}, plus container metadata:", prefix);
-
- BlobContinuationToken continuationToken = null;
- ContainerResultSegment resultSegment;
-
- try
- {
- do
- {
- // List containers beginning with the specified prefix,
- // returning segments of 5 results each.
- // Passing in null for the maxResults parameter returns the maximum number of results (up to 5000).
- // Requesting the container's metadata as part of the listing operation populates the metadata,
- // so it's not necessary to call FetchAttributes() to read the metadata.
- resultSegment = await blobClient.ListContainersSegmentedAsync(
- prefix, ContainerListingDetails.Metadata, segmentSize, continuationToken, null, null);
-
- // Enumerate the containers returned.
- foreach (var container in resultSegment.Results)
- {
- Console.WriteLine("\tContainer:" + container.Name);
-
- // Write the container's metadata keys and values.
- foreach (var metadataItem in container.Metadata)
- {
- Console.WriteLine("\t\tMetadata key: " + metadataItem.Key);
- Console.WriteLine("\t\tMetadata value: " + metadataItem.Value);
- }
- }
-
- // Get the continuation token.
- continuationToken = resultSegment.ContinuationToken;
-
- } while (continuationToken != null);
-
- Console.WriteLine();
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
-}
-```
---- ## See also
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
- [List Containers](/rest/api/storageservices/list-containers2) - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Title: Copy a blob with Azure Storage APIs
-description: Learn how to copy a blob using the Azure Storage client libraries.
+ Title: Copy a blob with .NET - Azure Storage
+description: Learn how to copy a blob in Azure Storage by using the .NET client library.
Previously updated : 01/08/2021 Last updated : 03/28/2022 -
+ms.devlang: csharp
+
-# Copy a blob with Azure Storage client libraries
+# Copy a blob with Azure Storage using the .NET client library
This article demonstrates how to copy a blob in an Azure Storage account. It also shows how to abort an asynchronous copy operation. The example code uses the Azure Storage client libraries.
A copy operation can take any of the following forms:
## Copy a blob
-# [.NET v12 SDK](#tab/dotnet)
- To copy a blob, call one of the following methods: - [StartCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuri)
The following code example gets a [BlobClient](/dotnet/api/azure.storage.blobs.b
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CopyBlob.cs" id="Snippet_CopyBlob":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-To copy a blob, call one of the following methods:
--- [StartCopy](/dotnet/api/microsoft.azure.storage.blob.cloudblob.startcopy)-- [StartCopyAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.startcopyasync)-
-The `StartCopy` and `StartCopyAsync` methods return a copy ID value that is used to check status or abort the copy operation.
-
-The following code example gets a reference to a previously created blob and copies it to a new blob in the same container:
-
-```csharp
-private static async Task CopyBlockBlobAsync(CloudBlobContainer container)
-{
- CloudBlockBlob sourceBlob = null;
- CloudBlockBlob destBlob = null;
- string leaseId = null;
-
- try
- {
- // Get a block blob from the container to use as the source.
- sourceBlob = container.ListBlobs().OfType<CloudBlockBlob>().FirstOrDefault();
-
- // Lease the source blob for the copy operation
- // to prevent another client from modifying it.
- // Specifying null for the lease interval creates an infinite lease.
- leaseId = await sourceBlob.AcquireLeaseAsync(null);
-
- // Get a reference to a destination blob (in this case, a new blob).
- destBlob = container.GetBlockBlobReference("copy of " + sourceBlob.Name);
-
- // Ensure that the source blob exists.
- if (await sourceBlob.ExistsAsync())
- {
- // Get the ID of the copy operation.
- string copyId = await destBlob.StartCopyAsync(sourceBlob);
-
- // Fetch the destination blob's properties before checking the copy state.
- await destBlob.FetchAttributesAsync();
-
- Console.WriteLine("Status of copy operation: {0}", destBlob.CopyState.Status);
- Console.WriteLine("Completion time: {0}", destBlob.CopyState.CompletionTime);
- Console.WriteLine("Bytes copied: {0}", destBlob.CopyState.BytesCopied.ToString());
- Console.WriteLine("Total bytes: {0}", destBlob.CopyState.TotalBytes.ToString());
- }
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
- finally
- {
- // Break the lease on the source blob.
- if (sourceBlob != null)
- {
- await sourceBlob.FetchAttributesAsync();
-
- if (sourceBlob.Properties.LeaseState != LeaseState.Available)
- {
- await sourceBlob.BreakLeaseAsync(new TimeSpan(0));
- }
- }
- }
-}
-```
-
-# [Python v12 SDK](#tab/python)
-
-To copy a blob, call the [start_copy_from_url](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.blobclient#start-copy-from-url-source-url--metadata-none--incremental-copy-false-kwargs-) method. The `start_copy_from_url` method returns a dictionary containing information about the copy operation.
-
-The following code example gets a [BlobClient](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.blobclient) representing a previously created blob and copies it to a new blob in the same container:
---- ## Abort a copy operation Aborting a copy operation results in a destination blob of zero length. However, the metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
The [AbortCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclien
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CopyBlob.cs" id="Snippet_StopBlobCopy":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-Check the [CopyState.Status](/dotnet/api/microsoft.azure.storage.blob.copystate.status) property on the destination blob to get the status of the copy operation. The final blob will be committed when the copy completes.
-
-When you abort a copy operation, the destination blob's copy status is set to [CopyStatus.Aborted](/dotnet/api/microsoft.azure.storage.blob.copystatus).
-
-The [AbortCopy](/dotnet/api/microsoft.azure.storage.blob.cloudblob.abortcopy) and [AbortCopyAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.abortcopyasync) methods cancel an ongoing copy operation.
-
-```csharp
-// Fetch the destination blob's properties before checking the copy state.
-await destBlob.FetchAttributesAsync();
-
-// Check the copy status. If it is still pending, abort the copy operation.
-if (destBlob.CopyState.Status == CopyStatus.Pending)
-{
- await destBlob.AbortCopyAsync(copyId);
- Console.WriteLine("Copy operation {0} has been aborted.", copyId);
-}
-```
-
-# [Python v12 SDK](#tab/python)
-
-Check the "status" entry in the [CopyProperties](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.copyproperties) dictionary returned by [get_blob_properties](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.blobclient#get-blob-propertieskwargs-) method to get the status of the copy operation. The final blob will be committed when the copy completes.
-
-When you abort a copy operation, the [status](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.copyproperties) is set to "aborted".
-
-The [abort_copy](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.blobclient#abort-copy-copy-id-kwargs-) method cancels an ongoing copy operation.
----
-## Azure SDKs
-
-Get more information about Azure SDKs:
--- [Azure SDK for .NET](https://github.com/azure/azure-sdk-for-net)-- [Azure SDK for Java](https://github.com/azure/azure-sdk-for-java)-- [Azure SDK for Python](https://github.com/azure/azure-sdk-for-python)-- [Azure SDK for JavaScript](https://github.com/azure/azure-sdk-for-js)-
-## Next steps
-
-The following topics contain information about copying blobs and aborting ongoing copy operations by using the Azure REST APIs.
+## See also
- [Copy Blob](/rest/api/storageservices/copy-blob)
- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob)
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
+
+ Title: Delete and restore a blob with .NET - Azure Storage
+description: Learn how to delete and restore a blob in your Azure Storage account using the .NET client library
++++ Last updated : 03/28/2022+++
+ms.devlang: csharp, python
+++
+# Delete and restore a blob in your Azure Storage account using the .NET client library
+
+This article shows how to delete blobs with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). If you've enabled blob soft delete, you can restore deleted blobs.
+
+## Delete a blob
+
+To delete a blob, call any of these methods:
+
+- [Delete](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.delete)
+- [DeleteAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteasync)
+- [DeleteIfExists](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteifexists)
+- [DeleteIfExistsAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteifexistsasync)
+
+The following example deletes a blob.
+
+```csharp
+public static async Task DeleteBlob(BlobClient blob)
+{
+ await blob.DeleteAsync();
+}
+```
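+
+If the blob may have snapshots, you must tell the service how to handle them when the blob is deleted. The following sketch (not part of the original article) uses `DeleteIfExistsAsync`, which returns `false` instead of throwing when the blob doesn't exist, together with a [DeleteSnapshotsOption](/dotnet/api/azure.storage.blobs.models.deletesnapshotsoption) value:
+
+```csharp
+public static async Task DeleteBlobIfItExists(BlobClient blob)
+{
+    // IncludeSnapshots deletes the blob along with any of its snapshots.
+    // The call returns false, rather than throwing, if the blob doesn't exist.
+    bool deleted = await blob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots);
+    Console.WriteLine("Blob deleted: {0}", deleted);
+}
+```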
+
+## Restore a deleted blob
+
+Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period has expired, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
+
+You can use the Azure Storage client libraries to restore a soft-deleted blob or snapshot.
+
+#### Restore soft-deleted objects when versioning is disabled
+
+To restore deleted blobs when versioning is not enabled, call either of the following methods:
+
+- [Undelete](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.undelete)
+- [UndeleteAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.undeleteasync)
+
+These methods restore soft-deleted blobs and any deleted snapshots associated with them. Calling either of these methods for a blob that has not been deleted has no effect. The following example restores all soft-deleted blobs and their snapshots in a container:
+
+```csharp
+public static async Task UnDeleteBlobs(BlobContainerClient container)
+{
+ foreach (BlobItem blob in container.GetBlobs(BlobTraits.None, BlobStates.Deleted))
+ {
+ await container.GetBlockBlobClient(blob.Name).UndeleteAsync();
+ }
+}
+```
+
+To restore a specific soft-deleted snapshot, first call [Undelete](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.undelete) or [UndeleteAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.undeleteasync) on the base blob, and then copy the desired snapshot over the base blob. The following example restores a block blob to the most recently generated snapshot:
+
+```csharp
+public static async Task RestoreSnapshots(BlobContainerClient container, BlobClient blob)
+{
+ // Restore the deleted blob.
+ await blob.UndeleteAsync();
+
+ // List blobs in this container that match prefix.
+ // Include snapshots in listing.
+ Pageable<BlobItem> blobItems = container.GetBlobs
+ (BlobTraits.None, BlobStates.Snapshots, prefix: blob.Name);
+
+ // Get the URI for the most recent snapshot.
+ BlobUriBuilder blobSnapshotUri = new BlobUriBuilder(blob.Uri)
+ {
+ Snapshot = blobItems
+ .OrderByDescending(snapshot => snapshot.Snapshot)
+ .ElementAtOrDefault(1)?.Snapshot
+ };
+
+ // Restore the most recent snapshot by copying it to the blob.
+ blob.StartCopyFromUri(blobSnapshotUri.ToUri());
+}
+```
+
+#### Restore soft-deleted blobs when versioning is enabled
+
+To restore a soft-deleted blob when versioning is enabled, copy a previous version over the base blob. You can use either of the following methods:
+
+- [StartCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuri)
+- [StartCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuriasync)
+
+```csharp
+public static void RestoreBlobsWithVersioning(BlobContainerClient container, BlobClient blob)
+{
+ // List blobs in this container that match prefix.
+ // Include versions in listing.
+ Pageable<BlobItem> blobItems = container.GetBlobs
+ (BlobTraits.None, BlobStates.Version, prefix: blob.Name);
+
+    // Get the URI of the latest previous version (index 0 is the current version).
+ BlobUriBuilder blobVersionUri = new BlobUriBuilder(blob.Uri)
+ {
+ VersionId = blobItems
+ .OrderByDescending(version => version.VersionId)
+ .ElementAtOrDefault(1)?.VersionId
+ };
+
+    // Restore the blob by copying the latest previous version over the base blob.
+ blob.StartCopyFromUri(blobVersionUri.ToUri());
+}
+```
+
+## See also
+
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
+- [Delete Blob](/rest/api/storageservices/delete-blob) (REST API)
+- [Soft delete for blobs](soft-delete-blob-overview.md)
+- [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
+
+ Title: Get started with Azure Blob Storage and .NET
+
+description: Get started developing a .NET application that works with Azure Blob Storage. This article helps you set up a project and authorize access to an Azure Blob Storage endpoint.
+++++ Last updated : 03/28/2022+++++
+# Get started with Azure Blob Storage and .NET
+
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library v12 for .NET. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+
+[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+
+- Azure storage account - [create a storage account](../common/storage-account-create.md)
+
+- Current [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet-core) for your operating system. Be sure to get the SDK and not the runtime.
++
+## Set up your project
+
+Open a command prompt and change directory (`cd`) into your project folder. Then, install the Azure Blob Storage client library for .NET package by using the `dotnet add package` command.
+
+```console
+cd myProject
+dotnet add package Azure.Storage.Blobs
+```
+
+Add these `using` statements to the top of your code file.
+
+```csharp
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using Azure.Storage.Blobs.Specialized;
+
+```
+
+- [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs): Contains the primary classes (_client objects_) that you can use to operate on the service, containers, and blobs.
+
+- [Azure.Storage.Blobs.Specialized](/dotnet/api/azure.storage.blobs.specialized): Contains classes that you can use to perform operations specific to a blob type (for example, append blobs).
+
+- [Azure.Storage.Blobs.Models](/dotnet/api/azure.storage.blobs.models): All other utility classes, structures, and enumeration types.
+
+## Connect to Blob Storage
+
+To connect to Blob Storage, create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and its containers. You can create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using an account access key, a shared access signature (SAS), or an Azure Active Directory (Azure AD) authorization token.
+
+To learn more about each of these authorization mechanisms, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md).
+
+#### Authorize with an account key
+
+Create a [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) by using the storage account name and account key. Then use that object to initialize a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient).
+
+```csharp
+public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient,
+ string accountName, string accountKey)
+{
+ Azure.Storage.StorageSharedKeyCredential sharedKeyCredential =
+ new StorageSharedKeyCredential(accountName, accountKey);
+
+ string blobUri = "https://" + accountName + ".blob.core.windows.net";
+
+ blobServiceClient = new BlobServiceClient
+ (new Uri(blobUri), sharedKeyCredential);
+}
+```
+
+You can also create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using a connection string.
+
+```csharp
+ BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
+```
+
+For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
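+
+To avoid embedding the connection string in code, a common pattern is to read it from an environment variable at startup. The variable name below is illustrative, not something the library defines:
+
+```csharp
+// AZURE_STORAGE_CONNECTION_STRING is an example name; use whatever
+// variable your deployment environment actually defines.
+string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
+
+BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
+```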
+
+#### Authorize with a SAS token
+
+Create a [Uri](/dotnet/api/system.uri) by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using the [Uri](/dotnet/api/system.uri).
+
+```csharp
+public static void GetBlobServiceClientSAS(ref BlobServiceClient blobServiceClient,
+ string accountName, string sasToken)
+{
+ string blobUri = "https://" + accountName + ".blob.core.windows.net";
+
+ blobServiceClient = new BlobServiceClient
+ (new Uri($"{blobUri}?{sasToken}"), null);
+}
+```
+
+To generate and manage SAS tokens, see any of these articles:
+
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
+
+- [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)
+
+- [Create a service SAS for a container or blob](sas-service-create.md)
+
+- [Create a user delegation SAS for a container, directory, or blob with .NET](storage-blob-user-delegation-sas-create-dotnet.md)
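+
+A SAS can also be scoped to a single container. As a sketch (the account, container, and token values are placeholders), a container-level SAS URL can be passed directly to a [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient):
+
+```csharp
+public static BlobContainerClient GetContainerClientSAS(
+    string accountName, string containerName, string sasToken)
+{
+    // Build a URI that targets the container and carries the SAS token.
+    string containerUri =
+        $"https://{accountName}.blob.core.windows.net/{containerName}?{sasToken}";
+
+    return new BlobContainerClient(new Uri(containerUri));
+}
+```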
+
+#### Authorize with Azure AD
+
+To authorize with Azure AD, you'll need to use a security principal. Which type of security principal you need depends on where your application runs. Use this table as a guide.
+
+| Where the application runs | Security principal | Guidance |
+|--|--|--|
+| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
+| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
+| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) |
+
+If you're testing on a local machine, or your application will run in Azure virtual machines (VMs), function apps, virtual machine scale sets, or in other Azure services, obtain an OAuth token by creating a [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient).
+
+```csharp
+public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient, string accountName)
+{
+ TokenCredential credential = new DefaultAzureCredential();
+
+ string blobUri = "https://" + accountName + ".blob.core.windows.net";
+
+ blobServiceClient = new BlobServiceClient(new Uri(blobUri), credential);
+}
+```
+
+If you plan to deploy the application to servers and clients that run outside of Azure, you can obtain an OAuth token by using other classes in the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme) which derive from the [TokenCredential](/dotnet/api/azure.core.tokencredential) class.
+
+This example creates a [ClientSecretCredential](/dotnet/api/azure.identity.clientsecretcredential) instance by using the client ID, client secret, and tenant ID. You can obtain these values when you create an app registration and service principal.
+
+```csharp
+public static void GetBlobServiceClientAzureAD(ref BlobServiceClient blobServiceClient,
+ string accountName, string clientID, string clientSecret, string tenantID)
+{
+
+ TokenCredential credential = new ClientSecretCredential(
+ tenantID, clientID, clientSecret, new TokenCredentialOptions());
+
+ string blobUri = "https://" + accountName + ".blob.core.windows.net";
+
+ blobServiceClient = new BlobServiceClient(new Uri(blobUri), credential);
+}
+
+```
+
+#### Connect anonymously
+
+If you explicitly enable anonymous access, your code can connect to Blob Storage without authorizing the request. You can create a new service client object for anonymous access by providing the Blob Storage endpoint for the account. You must also know the name of a container in that account that allows anonymous access. To learn how to enable anonymous access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+
+```csharp
+public static void CreateAnonymousBlobClient()
+{
+ // Create the client object using the Blob storage endpoint for your account.
+ BlobServiceClient blobServiceClient = new BlobServiceClient
+ (new Uri(@"https://storagesamples.blob.core.windows.net/"));
+
+ // Get a reference to a container that's available for anonymous access.
+ BlobContainerClient container = blobServiceClient.GetBlobContainerClient("sample-container");
+
+ // Read the container's properties.
+ // Note this is only possible when the container supports full public read access.
+ Console.WriteLine(container.GetProperties().Value.LastModified);
+ Console.WriteLine(container.GetProperties().Value.ETag);
+}
+```
+
+Alternatively, if you have the URL to a container that is anonymously available, you can use it to reference the container directly.
+
+```csharp
+public static void ListBlobsAnonymously()
+{
+ // Get a reference to a container that's available for anonymous access.
+ BlobContainerClient container = new BlobContainerClient
+ (new Uri(@"https://storagesamples.blob.core.windows.net/sample-container"));
+
+ // List blobs in the container.
+ // Note this is only possible when the container supports full public read access.
+ foreach (BlobItem blobItem in container.GetBlobs())
+ {
+ Console.WriteLine(container.GetBlockBlobClient(blobItem.Name).Uri);
+ }
+}
+```
+
+## Build your application
+
+As you build your application, your code will primarily interact with three types of resources:
+
+- The storage account, which is the unique top-level namespace for your Azure Storage data.
+
+- Containers, which organize the blob data in your storage account.
+
+- Blobs, which store unstructured data like text and binary data.
+
+The following diagram shows the relationship between these resources.
+
+![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
+
+Each type of resource is represented by one or more associated .NET classes. These are the basic classes:
+
+| Class | Description |
+|--|--|
+| [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) | Represents the Blob Storage endpoint for your storage account. |
+| [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) | Allows you to manipulate Azure Storage containers and their blobs. |
+| [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) | Allows you to manipulate Azure Storage blobs.|
+| [AppendBlobClient](/dotnet/api/azure.storage.blobs.specialized.appendblobclient) | Allows you to perform operations specific to append blobs such as periodically appending log data.|
+| [BlockBlobClient](/dotnet/api/azure.storage.blobs.specialized.blockblobclient)| Allows you to perform operations specific to block blobs such as staging and then committing blocks of data.|
+
+The following guides show you how to use each of these classes to build your application.
+
+| Guide | Description |
+|--|--|
+| [Create a container](storage-blob-container-create.md) | Create containers. |
+| [Delete and restore containers](storage-blob-container-delete.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. |
+| [List containers](storage-blob-containers-list.md) | List containers in an account and the various options available to customize a listing. |
+| [Manage properties and metadata](storage-blob-container-properties-metadata.md) | Get and set properties and metadata for containers. |
+| [Create and manage leases](storage-blob-container-lease.md) | Establish and manage a lock on a container or the blobs in a container. |
+| [Append data to blobs](storage-blob-append.md) | Learn how to create an append blob and then append data to that blob. |
+| [Upload blobs](storage-blob-upload.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. |
+| [Download blobs](storage-blob-download.md) | Download blobs by using strings, streams, and file paths. |
+| [Copy blobs](storage-blob-copy.md) | Copy a blob from one account to another account. |
+| [List blobs](storage-blobs-list.md) | List blobs in different ways. |
+| [Delete and restore](storage-blob-delete.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
+| [Find blobs using tags](storage-blob-tags.md) | Set and retrieve tags as well as use tags to find blobs. |
+| [Manage properties and metadata](storage-blob-properties-metadata.md) | Get and set properties and metadata for blobs. |
+
+## See also
+
+- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)
+- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+- [API reference](/dotnet/api/azure.storage.blobs)
+- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
+
+ Title: Download a blob with .NET - Azure Storage
+description: Learn how to download a blob in Azure Storage by using the .NET client library.
++++ Last updated : 03/28/2022+++
+ms.devlang: csharp, python
+++
+# Download a blob in Azure Storage using the .NET client library
+
+You can download a blob by using any of the following methods:
+
+- [DownloadTo](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadto)
+- [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync)
+- [DownloadContent](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontent)
+- [DownloadContentAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontentasync)
+
+You can also open a stream to read from a blob. The blob is downloaded only as the stream is read. Use either of the following methods:
+
+- [OpenRead](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.openread)
+- [OpenReadAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.openreadasync)
+
+> [!NOTE]
+> The examples in this article assume that you've created a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) article.
+
+## Download to a file path
+
+The following example downloads a blob by using a file path:
+
+```csharp
+public static async Task DownloadBlob(BlobClient blobClient, string localFilePath)
+{
+ await blobClient.DownloadToAsync(localFilePath);
+}
+```
+
+## Download to a stream
+
+The following example downloads a blob by creating a [Stream](/dotnet/api/system.io.stream) object and then downloading to that stream.
+
+```csharp
+public static async Task DownloadToStream(BlobClient blobClient, string localFilePath)
+{
+    // A using declaration disposes the stream even if the download throws.
+    using FileStream fileStream = File.OpenWrite(localFilePath);
+    await blobClient.DownloadToAsync(fileStream);
+}
+```
+
+## Download to a string
+
+The following example downloads a blob to a string. This example assumes that the blob is a text file.
+
+```csharp
+public static async Task DownloadToText(BlobClient blobClient)
+{
+ BlobDownloadResult downloadResult = await blobClient.DownloadContentAsync();
+ string downloadedData = downloadResult.Content.ToString();
+    Console.WriteLine("Downloaded data: {0}", downloadedData);
+}
+```
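+
+For binary content, the same `DownloadContentAsync` call applies; the `Content` property is a [BinaryData](/dotnet/api/system.binarydata), so you can extract the raw bytes. A minimal sketch:
+
+```csharp
+public static async Task DownloadToByteArray(BlobClient blobClient)
+{
+    BlobDownloadResult downloadResult = await blobClient.DownloadContentAsync();
+
+    // BinaryData exposes the downloaded payload as a byte array.
+    byte[] downloadedBytes = downloadResult.Content.ToArray();
+    Console.WriteLine("Downloaded {0} bytes", downloadedBytes.Length);
+}
+```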
+
+## Download from a stream
+
+The following example downloads a blob by reading from a stream.
+
+```csharp
+public static async Task DownloadFromStream(BlobClient blobClient, string localFilePath)
+{
+    // Open a read stream on the blob. The blob is downloaded only as the stream is read.
+    using (Stream stream = await blobClient.OpenReadAsync())
+    using (FileStream fileStream = File.OpenWrite(localFilePath))
+    {
+        await stream.CopyToAsync(fileStream);
+    }
+}
+```
+
+## See also
+
+- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)
+- [DownloadStreaming](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadstreaming) / [DownloadStreamingAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadstreamingasync)
+- [Get Blob](/rest/api/storageservices/get-blob) (REST API)
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Previously updated : 09/25/2020 Last updated : 03/28/2022
In addition to the data they contain, blobs support system properties and user-d
The following code example sets the `ContentType` and `ContentLanguage` system properties on a blob.
-# [.NET v12 SDK](#tab/dotnet)
- To set properties on a blob, call [SetHttpHeaders](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.sethttpheaders) or [SetHttpHeadersAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.sethttpheadersasync). Any properties not explicitly set are cleared. The following code example first gets the existing properties on the blob, then uses them to populate the headers that are not being updated. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_SetBlobProperties":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-public static async Task SetBlobPropertiesAsync(CloudBlob blob)
-{
- try
- {
- Console.WriteLine("Setting blob properties.");
-
- // You must explicitly set the MIME ContentType every time
- // the properties are updated or the field will be cleared.
- blob.Properties.ContentType = "text/plain";
- blob.Properties.ContentLanguage = "en-us";
-
- // Set the blob's properties.
- await blob.SetPropertiesAsync();
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
--- The following code example gets a blob's system properties and displays some of the values.
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadBlobProperties":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-Retrieving metadata and property values for a Blob storage resource is a two-step process. Before you can read these values, you must explicitly fetch them by calling the `FetchAttributes` or `FetchAttributesAsync` method. The exception to this rule is that the `Exists` and `ExistsAsync` methods call the appropriate `FetchAttributes` method under the covers. When you call one of these methods, you don't need to also call `FetchAttributes`.
-
-> [!IMPORTANT]
-> If you find that property or metadata values for a storage resource have not been populated, check that your code calls the `FetchAttributes` or `FetchAttributesAsync` method.
-
-To retrieve blob properties, call the `FetchAttributes` or `FetchAttributesAsync` method on your blob to populate the `Properties` property.
-
-```csharp
-private static async Task GetBlobPropertiesAsync(CloudBlob blob)
-{
- try
- {
- // Fetch the blob properties.
- await blob.FetchAttributesAsync();
-
- // Display some of the blob's property values.
- Console.WriteLine(" ContentLanguage: {0}", blob.Properties.ContentLanguage);
- Console.WriteLine(" ContentType: {0}", blob.Properties.ContentType);
- Console.WriteLine(" Created: {0}", blob.Properties.Created);
- Console.WriteLine(" LastModified: {0}", blob.Properties.LastModified);
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
--- ## Set and retrieve metadata You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, add name-value pairs to the `Metadata` collection on the resource. Then, call one of the following methods to write the values:
-# [.NET v12 SDK](#tab/dotnet)
- - [SetMetadata](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.setmetadata) - [SetMetadataAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.setmetadataasync)
-# [.NET v11 SDK](#tab/dotnet11)
--- [SetMetadata](/dotnet/api/microsoft.azure.storage.blob.cloudblob.setmetadata)-- [SetMetadataAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.setmetadataasync)-- Metadata name/value pairs are valid HTTP headers and should adhere to all restrictions governing HTTP headers. Metadata names must be valid HTTP header names and valid C# identifiers, may contain only ASCII characters, and should be treated as case-insensitive. [Base64-encode](/dotnet/api/system.convert.tobase64string) or [URL-encode](/dotnet/api/system.web.httputility.urlencode) metadata values containing non-ASCII characters. The name of your metadata must conform to the naming conventions for C# identifiers. Metadata names maintain the case used when they were created, but are case-insensitive when set or read. If two or more metadata headers using the same name are submitted for a resource, Azure Blob storage returns HTTP error code 400 (Bad Request). The following code example sets metadata on a blob. One value is set using the collection's `Add` method. The other value is set using implicit key/value syntax.
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_AddBlobMetadata":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-public static async Task AddBlobMetadataAsync(CloudBlob blob)
-{
- try
- {
- // Add metadata to the blob by calling the Add method.
- blob.Metadata.Add("docType", "textDocuments");
-
- // Add metadata to the blob by using key/value syntax.
- blob.Metadata["category"] = "guidance";
-
- // Set the blob's metadata.
- await blob.SetMetadataAsync();
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
--- The following code example reads the metadata on a blob.
-# [.NET v12 SDK](#tab/dotnet)
- To retrieve metadata, call the [GetProperties](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.getproperties) or [GetPropertiesAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.getpropertiesasync) method on your blob or container to populate the [Metadata](/dotnet/api/azure.storage.blobs.models.blobproperties.metadata) collection, then read the values, as shown in the example below. The **GetProperties** methods retrieve blob properties and metadata in a single call. This is different from the REST APIs which require separate calls to [Get Blob Properties](/rest/api/storageservices/get-blob-properties) and [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata). :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadBlobMetadata":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-To retrieve metadata, call the `FetchAttributes` or `FetchAttributesAsync` method on your blob or container to populate the `Metadata` collection, then read the values, as shown in the example below.
-
-```csharp
-public static async Task ReadBlobMetadataAsync(CloudBlob blob)
-{
- try
- {
- // Fetch blob attributes in order to populate
- // the blob's properties and metadata.
- await blob.FetchAttributesAsync();
-
- Console.WriteLine("Blob metadata:");
-
- // Enumerate the blob's metadata.
- foreach (var metadataItem in blob.Metadata)
- {
- Console.WriteLine("\tKey: {0}", metadataItem.Key);
- Console.WriteLine("\tValue: {0}", metadataItem.Value);
- }
- }
- catch (StorageException e)
- {
- Console.WriteLine("HTTP error code {0}: {1}",
- e.RequestInformation.HttpStatusCode,
- e.RequestInformation.ErrorCode);
- Console.WriteLine(e.Message);
- Console.ReadLine();
- }
-}
-```
---- ## See also - [Set Blob Properties operation](/rest/api/storageservices/set-blob-properties)
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
+
+ Title: Use blob index tags to find data in Azure Blob Storage (.NET)
+description: Learn how to categorize, manage, and query for blob objects by using the .NET client library.
++++ Last updated : 03/28/2022+++
+ms.devlang: csharp, python
+++
+# Use blob index tags to manage and find data in Azure Blob Storage (.NET)
+
+Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
+
+To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
+
+## Set and retrieve index tags
+
+You can set and get index tags if your code has authorized access by using an account key or if your code uses a security principal that has been given the appropriate role assignments. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
+
+#### Set tags
+
+You can set tags by using either of the following methods:
+
+- [SetTags](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.settags)
+- [SetTagsAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.settagsasync)
+
+The following example performs this task.
+
+```csharp
+public static async Task SetTags(BlobClient blobClient)
+{
+ Dictionary<string, string> tags =
+ new Dictionary<string, string>
+ {
+ { "Sealed", "false" },
+ { "Content", "image" },
+ { "Date", "2020-04-20" }
+ };
+
+ await blobClient.SetTagsAsync(tags);
+}
+
+```
+
+You can delete all tags by passing an empty `Dictionary` into the [SetTags](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.settags) or [SetTagsAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.settagsasync) method, as shown in the following example.
+
+```csharp
+Dictionary<string, string> noTags = new Dictionary<string, string>();
+await blobClient.SetTagsAsync(noTags);
+```
+
+| Related articles |
+|--|
+| [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
+| [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) |
+
+#### Get tags
+
+You can get tags by using either of the following methods:
+
+- [GetTags](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.gettags)
+- [GetTagsAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.gettagsasync)
+
+The following example performs this task.
+
+```csharp
+public static async Task GetTags(BlobClient blobClient)
+{
+ Response<GetBlobTagResult> tagsResponse = await blobClient.GetTagsAsync();
+
+ foreach (KeyValuePair<string, string> tag in tagsResponse.Value.Tags)
+ {
+ Console.WriteLine($"{tag.Key}={tag.Value}");
+ }
+}
+
+```
+
+## Filter and find data with blob index tags
+
+You can use index tags to find and filter data if your code has authorized access by using an account key or if your code uses a security principal that has been given the appropriate role assignments. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
+
+> [!NOTE]
+> You can't use index tags to retrieve previous versions. Tags for previous versions aren't passed to the blob index engine. For more information, see [Conditions and known issues](storage-manage-find-blobs.md#conditions-and-known-issues).
+
+You can find data by using either of the following methods:
+
+- [FindBlobsByTags](/dotnet/api/azure.storage.blobs.blobserviceclient.findblobsbytags)
+- [FindBlobsByTagsAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.findblobsbytagsasync)
+
+The following example finds all blobs tagged with a date that falls between a specific range.
+
+```csharp
+public static async Task FindBlobsByTags(BlobServiceClient serviceClient)
+{
+ string query = @"""Date"" >= '2020-04-20' AND ""Date"" <= '2020-04-30'";
+
+ // Find Blobs given a tags query
+ Console.WriteLine("Find Blob by Tags query: " + query + Environment.NewLine);
+
+ List<TaggedBlobItem> blobs = new List<TaggedBlobItem>();
+ await foreach (TaggedBlobItem taggedBlobItem in serviceClient.FindBlobsByTagsAsync(query))
+ {
+ blobs.Add(taggedBlobItem);
+ }
+
+    foreach (var filteredBlob in blobs)
+    {
+        Console.WriteLine($"BlobIndex result: ContainerName= {filteredBlob.BlobContainerName}, " +
+            $"BlobName= {filteredBlob.BlobName}");
+    }
+}
+
+```
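+The tag query syntax also lets you scope a search to a single container by using the `@container` keyword. The following sketch narrows a search to one container; the method and container names here are illustrative, not part of the sample repository.
+
+```csharp
+public static async Task FindTaggedBlobsInContainer(BlobServiceClient serviceClient)
+{
+    // Scope the search to one container with @container,
+    // then match on a tag value. "sample-container" is a placeholder name.
+    string query = @"@container = 'sample-container' AND ""Content"" = 'image'";
+
+    await foreach (TaggedBlobItem item in serviceClient.FindBlobsByTagsAsync(query))
+    {
+        Console.WriteLine($"{item.BlobContainerName}/{item.BlobName}");
+    }
+}
+```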
+
+## See also
+
+- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)
+- [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API)
+- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
+
+ Title: Upload a blob using .NET - Azure Storage
+description: Learn how to upload a blob to your Azure Storage account using the .NET client library.
++++ Last updated : 03/28/2022+++
+ms.devlang: csharp, python
+++
+# Upload a blob to Azure Storage by using the .NET client library
+
+You can upload a blob, open a blob stream and write to it, or upload large blobs in blocks.
+
+> [!NOTE]
+> The examples in this article assume that you've created a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with .NET](storage-blob-container-create.md).
+
+To upload a blob by using a file path, a stream, a binary object, or a text string, use either of the following methods:
+
+- [Upload](/dotnet/api/azure.storage.blobs.blobclient.upload)
+- [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync)
+
+To open a stream in Blob Storage, and then write to that stream, use either of the following methods:
+
+- [OpenWrite](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.openwrite)
+- [OpenWriteAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.openwriteasync)
+
+## Upload by using a file path
+
+The following example uploads a blob by using a file path:
+
+```csharp
+public static async Task UploadFile
+ (BlobContainerClient containerClient, string localFilePath)
+{
+ string fileName = Path.GetFileName(localFilePath);
+ BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+ await blobClient.UploadAsync(localFilePath, true);
+}
+```
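+The `Upload` and `UploadAsync` methods also accept a [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) instance, whose [StorageTransferOptions](/dotnet/api/azure.storage.storagetransferoptions) property controls how the client splits a large upload into parallel service calls. The following is a minimal sketch; the method name and the buffer and concurrency values are illustrative and should be tuned for your workload.
+
+```csharp
+public static async Task UploadWithTransferOptions
+    (BlobContainerClient containerClient, string localFilePath)
+{
+    string fileName = Path.GetFileName(localFilePath);
+    BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+    BlobUploadOptions options = new BlobUploadOptions
+    {
+        TransferOptions = new StorageTransferOptions
+        {
+            // Illustrative values: 8-MiB chunks, up to 4 parallel requests.
+            InitialTransferSize = 8 * 1024 * 1024,
+            MaximumTransferSize = 8 * 1024 * 1024,
+            MaximumConcurrency = 4
+        }
+    };
+
+    await blobClient.UploadAsync(localFilePath, options);
+}
+```
+
+`StorageTransferOptions` lives in the `Azure.Storage` namespace.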
+
+## Upload by using a Stream
+
+The following example uploads a blob by creating a [Stream](/dotnet/api/system.io.stream) object, and then uploading that stream.
+
+```csharp
+public static async Task UploadStream
+    (BlobContainerClient containerClient, string localFilePath)
+{
+    string fileName = Path.GetFileName(localFilePath);
+    BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+    // Dispose the stream even if the upload throws.
+    using (FileStream fileStream = File.OpenRead(localFilePath))
+    {
+        await blobClient.UploadAsync(fileStream, true);
+    }
+}
+```
+
+## Upload by using a BinaryData object
+
+The following example uploads a [BinaryData](/dotnet/api/system.binarydata) object.
+
+```csharp
+public static async Task UploadBinary
+    (BlobContainerClient containerClient, string localFilePath)
+{
+    string fileName = Path.GetFileName(localFilePath);
+    BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+    // Read the whole file into memory, then wrap the bytes
+    // in a BinaryData object.
+    byte[] bytes = File.ReadAllBytes(localFilePath);
+    BinaryData binaryData = new BinaryData(bytes);
+
+    await blobClient.UploadAsync(binaryData, true);
+}
+```
+
+## Upload a string
+
+The following example uploads a string:
+
+```csharp
+public static async Task UploadString
+ (BlobContainerClient containerClient, string localFilePath)
+{
+ string fileName = Path.GetFileName(localFilePath);
+ BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+ await blobClient.UploadAsync(BinaryData.FromString("hello world"), overwrite: true);
+}
+```
+
+## Upload with index tags
+
+Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. You can perform this task by adding tags to a [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) instance, and then passing that instance into the [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync) method.
+
+The following example uploads a blob with three index tags.
+
+```csharp
+public static async Task UploadBlobWithTags
+ (BlobContainerClient containerClient, string localFilePath)
+{
+ string fileName = Path.GetFileName(localFilePath);
+ BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+ BlobUploadOptions options = new BlobUploadOptions();
+ options.Tags = new Dictionary<string, string>
+ {
+ { "Sealed", "false" },
+ { "Content", "image" },
+ { "Date", "2020-04-20" }
+ };
+
+ await blobClient.UploadAsync(localFilePath, options);
+}
+```
+
+## Upload to a stream in Blob Storage
+
+You can open a stream in Blob Storage and write to it. The following example creates a zip file in Blob Storage and writes local files into it. Instead of building the zip file in local memory, only one file at a time is held in memory.
+
+```csharp
+public static async Task UploadToStream
+ (BlobContainerClient containerClient, string localDirectoryPath)
+{
+ string zipFileName = Path.GetFileName
+ (Path.GetDirectoryName(localDirectoryPath)) + ".zip";
+
+    BlockBlobClient blockBlobClient =
+        containerClient.GetBlockBlobClient(zipFileName);
+
+ using (Stream stream = await blockBlobClient.OpenWriteAsync(true))
+ {
+ using (ZipArchive zip = new ZipArchive
+ (stream, ZipArchiveMode.Create, leaveOpen: false))
+ {
+ foreach (var fileName in Directory.EnumerateFiles(localDirectoryPath))
+ {
+ using (var fileStream = File.OpenRead(fileName))
+ {
+ var entry = zip.CreateEntry(Path.GetFileName
+ (fileName), CompressionLevel.Optimal);
+ using (var innerFile = entry.Open())
+ {
+ await fileStream.CopyToAsync(innerFile);
+ }
+ }
+ }
+ }
+ }
+
+}
+```
+
+## Upload by staging blocks and then committing them
+
+You can have greater control over how uploads are divided into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach to enhance performance by uploading blocks in parallel.
+
+```csharp
+public static async Task UploadInBlocks
+    (BlobContainerClient blobContainerClient, string localFilePath, int blockSize)
+{
+    string fileName = Path.GetFileName(localFilePath);
+    BlockBlobClient blobClient = blobContainerClient.GetBlockBlobClient(fileName);
+
+    using (FileStream fileStream = File.OpenRead(localFilePath))
+    {
+        List<string> blockIDs = new List<string>();
+        long bytesLeft = fileStream.Length;
+
+        while (bytesLeft > 0)
+        {
+            // Read the next block, or whatever remains of the file.
+            int count = (int)Math.Min(blockSize, bytesLeft);
+            byte[] buffer = new byte[count];
+            await fileStream.ReadAsync(buffer, 0, count);
+
+            using (var stream = new MemoryStream(buffer))
+            {
+                // Block IDs must be Base64-encoded strings of equal length.
+                string blockID = Convert.ToBase64String
+                    (Encoding.UTF8.GetBytes(Guid.NewGuid().ToString()));
+
+                blockIDs.Add(blockID);
+
+                await blobClient.StageBlockAsync(blockID, stream);
+            }
+
+            bytesLeft = fileStream.Length - fileStream.Position;
+        }
+
+        await blobClient.CommitBlockListAsync(blockIDs);
+    }
+}
+```
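+The example above stages blocks one at a time. As a minimal sketch of the parallel approach the preceding paragraph mentions, you can start all of the `StageBlockAsync` calls and await them together before committing. The method name is illustrative; note that this variant holds every block's buffer in memory until all staging calls complete, so it trades memory for throughput.
+
+```csharp
+public static async Task UploadBlocksInParallel
+    (BlobContainerClient blobContainerClient, string localFilePath, int blockSize)
+{
+    BlockBlobClient blobClient =
+        blobContainerClient.GetBlockBlobClient(Path.GetFileName(localFilePath));
+
+    List<string> blockIDs = new List<string>();
+    List<Task> stagingTasks = new List<Task>();
+
+    using (FileStream fileStream = File.OpenRead(localFilePath))
+    {
+        long position = 0;
+        while (position < fileStream.Length)
+        {
+            // Read the next block from the file sequentially.
+            int count = (int)Math.Min(blockSize, fileStream.Length - position);
+            byte[] buffer = new byte[count];
+            await fileStream.ReadAsync(buffer, 0, count);
+            position += count;
+
+            string blockID = Convert.ToBase64String
+                (Encoding.UTF8.GetBytes(Guid.NewGuid().ToString()));
+            blockIDs.Add(blockID);
+
+            // Each call gets its own MemoryStream, so the
+            // staging requests can run concurrently.
+            stagingTasks.Add(blobClient.StageBlockAsync(
+                blockID, new MemoryStream(buffer)));
+        }
+
+        // Wait for all block uploads to finish, then commit them in file order.
+        await Task.WhenAll(stagingTasks);
+    }
+
+    await blobClient.CommitBlockListAsync(blockIDs);
+}
+```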
+
+## See also
+
+- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)
+- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
+- [Put Blob](/rest/api/storageservices/put-blob) (REST API)
+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
Title: List blobs with Azure Storage APIs
-description: Learn how to list blobs in your storage account using the using the Azure Storage client libraries. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders.
+ Title: List blobs with .NET - Azure Storage
+description: Learn how to list blobs in your storage account using the Azure Storage client library for .NET. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders.
-+ Previously updated : 03/24/2021- Last updated : 03/28/2022+ ms.devlang: csharp, python
-# List blobs with Azure Storage client libraries
+# List blobs using the Azure Storage client library for .NET
When you list blobs from your code, you can specify a number of options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can specify a prefix to return blobs whose names begin with that character or string. And you can list blobs in a flat listing structure, or hierarchically. A hierarchical listing returns blobs as though they were organized into folders.
When you list blobs from your code, you can specify a number of options to manag
To list the blobs in a storage account, call one of these methods:
-# [.NET v12 SDK](#tab/dotnet)
- - [BlobContainerClient.GetBlobs](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobs) - [BlobContainerClient.GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) - [BlobContainerClient.GetBlobsByHierarchy](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsbyhierarchy) - [BlobContainerClient.GetBlobsByHierarchyAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsbyhierarchyasync)
-# [.NET v11 SDK](#tab/dotnet11)
--- [CloudBlobClient.ListBlobs](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listblobs)-- [CloudBlobClient.ListBlobsSegmented](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listblobssegmented)-- [CloudBlobClient.ListBlobsSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listblobssegmentedasync)-
-To list the blobs in a container, call one of these methods:
--- [CloudBlobContainer.ListBlobs](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.listblobs)-- [CloudBlobContainer.ListBlobsSegmented](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.listblobssegmented)-- [CloudBlobContainer.ListBlobsSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.listblobssegmentedasync)-
-# [Python v12 SDK](#tab/python)
--- [ContainerClient.list_blobs](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.containerclient#list-blobs-name-starts-with-none--include-none-kwargs-)--- ### Manage how many results are returned By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
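+As a minimal sketch of paged results (the method name is illustrative), you can enumerate one page at a time by calling `AsPages` on the return value of `GetBlobsAsync` and passing a page size hint:
+
+```csharp
+public static async Task ListBlobsInPages
+    (BlobContainerClient containerClient, int? pageSize)
+{
+    // AsPages accepts an optional continuation token and a page size hint.
+    await foreach (Page<BlobItem> page in
+        containerClient.GetBlobsAsync().AsPages(default, pageSize))
+    {
+        foreach (BlobItem blobItem in page.Values)
+        {
+            Console.WriteLine("Blob name: {0}", blobItem.Name);
+        }
+
+        Console.WriteLine();
+    }
+}
+```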
To filter the list of blobs, specify a string for the `prefix` parameter. The pr
### Return metadata
-You can return blob metadata with the results.
-
-# [.NET v12 SDK](#tab/dotnet)
-
-Specify the **Metadata** value for the [BlobTraits](/dotnet/api/azure.storage.blobs.models.blobtraits) enumeration.
-
-# [.NET v11 SDK](#tab/dotnet11)
-
-Specify the **Metadata** value for the [BlobListingDetails](/dotnet/api/microsoft.azure.storage.blob.bloblistingdetails) enumeration. Azure Storage includes metadata with each blob returned, so you do not need to call one of the **FetchAttributes** methods in this context to retrieve the blob metadata.
-
-# [Python v12 SDK](#tab/python)
-
-Specify `metadata` for the `include=` parameter when calling [list_blobs](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.containerclient#list-blobs-name-starts-with-none--include-none-kwargs-).
---
-### List blob versions or snapshots
--- To list blob versions or snapshots with the .NET v12 client library, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** or **Snapshot** field. Versions and snapshots are listed from oldest to newest. For more information about listing versions, see [List blob versions](versioning-enable.md#list-blob-versions).--- To list the number of snapshots with the Python v12 client library, specify `num_snapshots` in the `include=` parameter when calling [list_blobs](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.containerclient#list-blobs-name-starts-with-none--include-none-kwargs-).
+You can return blob metadata with the results by specifying the **Metadata** value for the [BlobTraits](/dotnet/api/azure.storage.blobs.models.blobtraits) enumeration.
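+As a minimal sketch (the method name is illustrative), passing `BlobTraits.Metadata` populates the `Metadata` collection on each returned `BlobItem`:
+
+```csharp
+public static async Task ListBlobsWithMetadata(BlobContainerClient containerClient)
+{
+    // BlobTraits.Metadata asks the service to include each blob's metadata.
+    await foreach (BlobItem blobItem in
+        containerClient.GetBlobsAsync(traits: BlobTraits.Metadata))
+    {
+        Console.WriteLine("Blob name: {0}", blobItem.Name);
+
+        foreach (KeyValuePair<string, string> metadataItem in blobItem.Metadata)
+        {
+            Console.WriteLine("\t{0}: {1}", metadataItem.Key, metadataItem.Value);
+        }
+    }
+}
+```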
### Flat listing versus hierarchical listing
The following example lists the blobs in the specified container using a flat li
If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsFlatListing":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-If a listing operation returns more than 5000 blobs, or if the number of blobs that are available exceed the number that you specified, then Azure Storage returns a *continuation token* with the list of blobs. A continuation token is an opaque value that you can use to retrieve the next set of results from Azure Storage.
-
-In your code, check the value of the continuation token to determine whether it is null. When the continuation token is null, then the set of results is complete. If the continuation token is not null, then call listing operation again, passing in the continuation token to retrieve the next set of results, until the continuation token is null.
-
-```csharp
-private static async Task ListBlobsFlatListingAsync(CloudBlobContainer container, int? segmentSize)
-{
- BlobContinuationToken continuationToken = null;
- CloudBlob blob;
-
- try
- {
- // Call the listing operation and enumerate the result segment.
- // When the continuation token is null, the last segment has been returned
- // and execution can exit the loop.
- do
- {
- BlobResultSegment resultSegment = await container.ListBlobsSegmentedAsync(string.Empty,
- true, BlobListingDetails.Metadata, segmentSize, continuationToken, null, null);
-
- foreach (var blobItem in resultSegment.Results)
- {
- blob = (CloudBlob)blobItem;
-
- // Write out some blob properties.
- Console.WriteLine("Blob name: {0}", blob.Name);
- }
-
- Console.WriteLine();
-
- // Get the continuation token and loop until it is null.
- continuationToken = resultSegment.ContinuationToken;
-
- } while (continuationToken != null);
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
-}
-```
-
-# [Python v12 SDK](#tab/python)
---- The sample output is similar to: ```console
Blob name: FolderA/FolderB/FolderC/blob3.txt
When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
-# [.NET v12 SDK](#tab/dotnet)
- To list blobs hierarchically, call the [BlobContainerClient.GetBlobsByHierarchy](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsbyhierarchy), or the [BlobContainerClient.GetBlobsByHierarchyAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsbyhierarchyasync) method. The following example lists the blobs in the specified container using a hierarchical listing, with an optional segment size specified, and writes the blob name to the console window. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsHierarchicalListing":::
-# [.NET v11 SDK](#tab/dotnet11)
-
-The [Prefix](/dotnet/api/microsoft.azure.storage.blob.cloudblobdirectory.prefix) property of each virtual directory is set so that you can pass the prefix in a recursive call to retrieve the next directory.
-
-To list blobs hierarchically, set the `useFlatBlobListing` parameter of the listing method to **false**.
-
-The following example lists the blobs in the specified container using a flat listing, with an optional segment size specified, and writes the blob name to the console window.
-
-```csharp
-private static async Task ListBlobsHierarchicalListingAsync(CloudBlobContainer container, string prefix)
-{
- CloudBlobDirectory dir;
- CloudBlob blob;
- BlobContinuationToken continuationToken = null;
-
- try
- {
- // Call the listing operation and enumerate the result segment.
- // When the continuation token is null, the last segment has been returned and
- // execution can exit the loop.
- do
- {
- BlobResultSegment resultSegment = await container.ListBlobsSegmentedAsync(prefix,
- false, BlobListingDetails.Metadata, null, continuationToken, null, null);
- foreach (var blobItem in resultSegment.Results)
- {
- // A hierarchical listing may return both virtual directories and blobs.
- if (blobItem is CloudBlobDirectory)
- {
- dir = (CloudBlobDirectory)blobItem;
-
- // Write out the prefix of the virtual directory.
- Console.WriteLine("Virtual directory prefix: {0}", dir.Prefix);
-
- // Call recursively with the prefix to traverse the virtual directory.
- await ListBlobsHierarchicalListingAsync(container, dir.Prefix);
- }
- else
- {
- // Write out the name of the blob.
- blob = (CloudBlob)blobItem;
-
- Console.WriteLine("Blob name: {0}", blob.Name);
- }
- Console.WriteLine();
- }
-
- // Get the continuation token and loop until it is null.
- continuationToken = resultSegment.ContinuationToken;
-
- } while (continuationToken != null);
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
-}
-```
-
-# [Python v12 SDK](#tab/python)
-
-To list blobs hierarchically, call the [walk_blobs](/azure/developer/python/sdk/storage/azure-storage-blob/azure.storage.blob.containerclient#walk-blobs-name-starts-with-none--include-none--delimiter--kwargs-) method.
-
-The following example lists the blobs in the specified container using a hierarchical listing, with an optional segment size specified, and writes the blob name to the console window.
---- The sample output is similar to: ```console
Blob name: FolderA/FolderB/FolderC/blob3.txt
> [!NOTE] > Blob snapshots cannot be listed in a hierarchical listing operation.
+### List blob versions or snapshots
+
+To list blob versions or snapshots, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** or **Snapshots** field. Versions and snapshots are listed from oldest to newest.
+
+The following code example shows how to list blob versions.
+
+```csharp
+private static void ListBlobVersions(BlobContainerClient blobContainerClient,
+ string blobName)
+{
+ // Call the listing operation, specifying that blob versions are returned.
+ // Use the blob name as the prefix.
+ var blobVersions = blobContainerClient.GetBlobs
+ (BlobTraits.None, BlobStates.Version, prefix: blobName)
+ .OrderByDescending(version => version.VersionId);
+
+ // Construct the URI for each blob version.
+ foreach (var version in blobVersions)
+ {
+ BlobUriBuilder blobUriBuilder = new BlobUriBuilder(blobContainerClient.Uri)
+ {
+ BlobName = version.Name,
+ VersionId = version.VersionId
+ };
+
+ if ((bool)version.IsLatestVersion.GetValueOrDefault())
+ {
+ Console.WriteLine("Current version: {0}", blobUriBuilder);
+ }
+ else
+ {
+ Console.WriteLine("Previous version: {0}", blobUriBuilder);
+ }
+ }
+}
+```
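+Snapshots can be listed in a similar way by specifying the **Snapshots** field. The following is a minimal sketch; the method name is illustrative.
+
+```csharp
+private static void ListBlobSnapshots(BlobContainerClient blobContainerClient,
+    string blobName)
+{
+    // BlobStates.Snapshots returns snapshots along with the base blob.
+    foreach (BlobItem item in blobContainerClient.GetBlobs(
+        BlobTraits.None, BlobStates.Snapshots, prefix: blobName))
+    {
+        // The Snapshot property is null for the base blob itself.
+        Console.WriteLine("Blob name: {0}, snapshot: {1}",
+            item.Name, item.Snapshot ?? "(base blob)");
+    }
+}
+```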
## Next steps - [List Blobs](/rest/api/storageservices/list-blobs)-- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
+- [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)
+- [Blob versioning](versioning-overview.md)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
During the preview you must use either PowerShell or the Azure CLI to enable thi
You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2. > [!NOTE]
-> If you registered the `AllowGlobalTagsForStorageOnly` feature, and you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.
+> If you registered the `AllowGlobalTagsForStorage` feature, and you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.
#### [Portal](#tab/azure-portal)
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
Title: Quickstart for managing Azure file shares
-description: See how to create and manage Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell module. Create a storage account, create an Azure file share, and use your Azure file share.
+ Title: Quickstart for creating and using Azure file shares
+description: See how to create and use Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell module. Create a storage account, create an Azure file share, and use your Azure file share.
ms.devlang: azurecli
#Customer intent: As a < type of user >, I want < what? > so that < why? >.
-# Quickstart: Create and manage Azure file shares
-[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Windows, Linux, and macOS. This guide walks you through the basics of working with Azure file shares using either the Azure portal, Azure CLI, or Azure PowerShell module.
+# Quickstart: Create and use an Azure file share
+[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Windows, Linux, and macOS. This guide shows you how to create an SMB Azure file share using either the Azure portal, Azure CLI, or Azure PowerShell module.
## Applies to | File share type | SMB | NFS |
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you would like to install and use the PowerShell locally, this guide requires the Azure PowerShell module Az version 0.7 or later. To find out which version of the Azure PowerShell module you are running, execute `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account.
+If you would like to install and use PowerShell locally, this guide requires the Azure PowerShell module Az version 0.7 or later. To find out which version of the Azure PowerShell module you are running, execute `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account.
# [Azure CLI](#tab/azure-cli)
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
This article highlights Microsoft partner companies that deliver a network attac
| Partner | Description | Website/product link | | - | -- | -- |
-| ![Nasuni](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for BCDR and disk tiering. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. The management console lets you manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud brings file restore times down to minutes.<br><br>Nasuni cloud file storage built on Azure eliminates traditional NAS and file servers across any number of locations and replaces it with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing and collaboration, and as a cloud storage companion for VDI environments.|[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)|
-| ![Panzura](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)|
-| ![Pure Storage](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)|
| ![Qumulo](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple-to-use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost. Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance, and ease-of-use capabilities, like real-time visual insights into how storage is used and award-winning Slack-based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
-| ![Scality](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premise, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enable enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
-| ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
-| ![XenData company logo](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)|
+| ![Nasuni.](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for BCDR and disk tiering. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. The management console lets you manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud brings file restore times down to minutes.<br><br>Nasuni cloud file storage built on Azure eliminates traditional NAS and file servers across any number of locations and replaces them with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing and collaboration, and as a cloud storage companion for VDI environments.|[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)|
+| ![Panzura.](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)|
+| ![Pure Storage.](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)|
+| ![Qumulo.](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple-to-use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost. Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance, and ease-of-use capabilities like real-time visual insights into how storage is used and award-winning Slack-based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
+| ![Scality.](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enables enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
+| ![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)|
+| ![XenData company logo.](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)|
+| ![Silk company logo.](./media/silk-logo.jpg) |**Silk**<br>The Silk Platform quickly moves mission-critical data to Azure and keeps it operating at performance standards on par with even the fastest on-prem environments. Silk works to ensure a seamless, efficient, and smooth migration process, followed by unparalleled performance speeds for all data and applications in the Azure cloud. The platform makes cloud environments run up to 10x faster and the entire application stack is more resilient to any infrastructure hiccups or malfunctions. |[Partner page](https://silk.us/solutions/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/silk.silk_cloud_data_platform?tab=overview)|
Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu). ## Next steps
virtual-desktop Configure Vm Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-vm-gpu.md
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/configure-vm-gpu-2019.md).
-Azure Virtual Desktop supports GPU-accelerated rendering and encoding for improved app performance and scalability. GPU acceleration is particularly crucial for graphics-intensive apps.
+Azure Virtual Desktop supports GPU-accelerated rendering and encoding for improved app performance and scalability. GPU acceleration is particularly crucial for graphics-intensive apps and is supported in the following operating systems:
-Follow the instructions in this article to create a GPU optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding. This article assumes you already have a Azure Virtual Desktop tenant configured.
-
-## Select an appropriate GPU optimized Azure virtual machine size
-
-Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md) or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Consider NV series VM retirement when selecting VM, details on [NV retirement](../virtual-machines/nv-series-retirement.md)
+* Windows 10 version 1511 or newer
+* Windows Server 2016 or newer
>[!NOTE]
->Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They do not support GPU acceleration for most apps or the Windows user interface.
-
+> Multi-session versions of Windows aren't specifically listed; however, each GPU in an NV-series Azure virtual machine comes with a GRID license that supports 25 concurrent users. For more information, see [NV-series](../virtual-machines/nv-series.md).
-## Create a host pool, provision your virtual machine, and configure an app group
+Follow the instructions in this article to create a GPU-optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding. This article assumes you have already [created a host pool](./create-host-pools-azure-marketplace.md) and an [application group](./manage-app-groups.md).
-Create a new host pool using a VM of the size you selected. For instructions, see [Tutorial: Create a host pool with the Azure portal](./create-host-pools-azure-marketplace.md).
+## Select an appropriate GPU-optimized Azure virtual machine size
-Azure Virtual Desktop supports GPU-accelerated rendering and encoding in the following operating systems:
-
-* Windows 10 version 1511 or newer
-* Windows Server 2016 or newer
+Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md) or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Consider upcoming NV-series retirement when selecting a VM size; for details, see [NV-series retirement](../virtual-machines/nv-series-retirement.md).
>[!NOTE]
->Multi-session OS is not specifically listed however NV instances GRID license supports 25 concurrent users, see [NV-series](../virtual-machines/nv-series.md)
-
-You must also configure an app group, or use the default desktop app group (named "Desktop Application Group") that's automatically created when you create a new host pool. For instructions, see [Tutorial: Manage app groups for Azure Virtual Desktop](./manage-app-groups.md).
+>Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They do not support GPU acceleration for most apps or the Windows user interface.
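To check which of these sizes are offered in your target region, you can query with Azure PowerShell. A quick sketch (the region name is a placeholder; requires the Az module and an authenticated session):

```powershell
# List NV-series and NCasT4_v3-series sizes available in a given region.
# NCasT4_v3 sizes are named like Standard_NC4as_T4_v3, hence the second filter.
Get-AzVMSize -Location 'eastus' |
    Where-Object { $_.Name -like 'Standard_NV*' -or $_.Name -like 'Standard_NC*as_T4_v3' } |
    Select-Object Name, NumberOfCores, MemoryInMB
```

If a size you want is missing, it may not be offered in that region or your subscription may need a quota increase for that VM family.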
## Install supported graphics drivers in your virtual machine
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
We recommend using FSLogix profile containers as a user profile solution for the
You can create FSLogix profile containers using [Azure NetApp Files](https://azure.microsoft.com/services/netapp/), an easy-to-use Azure native platform service that helps customers quickly and reliably provision enterprise-grade SMB volumes for their Azure Virtual Desktop environments. To learn more about Azure NetApp Files, see [What is Azure NetApp Files?](../azure-netapp-files/azure-netapp-files-introduction.md)
-This guide will show you how to set up an Azure NetApp Files account and create FSLogix profile containers in Azure Virtual Desktop.
-
-This article assumes you already have [host pools](create-host-pools-azure-marketplace.md) set up and grouped into one or more tenants in your Azure Virtual Desktop environment. To learn how to set up tenants, see [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md) and [our Tech Community blog post](https://techcommunity.microsoft.com/t5/Windows-IT-Pro-Blog/Getting-started-with-Windows-Virtual-Desktop/ba-p/391054).
+This guide will show you how to set up an Azure NetApp Files account and create FSLogix profile containers in Azure Virtual Desktop. It assumes you have already [created a host pool](./create-host-pools-azure-marketplace.md) and an [application group](./manage-app-groups.md).
The instructions in this guide are specifically for Azure Virtual Desktop users. If you're looking for more general guidance for how to set up Azure NetApp Files and create FSLogix profile containers outside of Azure Virtual Desktop, see the [Set up Azure NetApp Files and create an NFS volume quickstart](../azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md).
This section is based on [Create a profile container for a host pool using a fil
13. Create a value named **DeleteLocalProfileWhenVHDShouldApply** with a DWORD value of 1 to avoid problems with existing local profiles before you sign in.

>[!WARNING]
- >Be careful when creating the DeleteLocalProfileWhenVHDShouldApply value. When the FSLogix Profiles system determines a user should have an FSLogix profile, but a local profile already exists, Profile Container will permanently delete the local profile. The user will then be signed in with the new FSLogix profile.
-
-## Assign users to session host
-
-1. Open **PowerShell ISE** as administrator and sign in to Azure Virtual Desktop.
-
-2. Run the following cmdlets:
-
- ```powershell
- Import-Module Microsoft.RdInfra.RdPowershell
- # (Optional) Install-Module Microsoft.RdInfra.RdPowershell
- $brokerurl = "https://rdbroker.wvd.microsoft.com"
- Add-RdsAccount -DeploymentUrl $brokerurl
- ```
-
-3. When prompted for credentials, enter the credentials for the user with the Tenant Creator or RDS Owner/RDS Contributor roles on the Azure Virtual Desktop tenant.
-
-4. Run the following cmdlets to assign a user to a Remote Desktop group:
-
- ```powershell
- $wvdTenant = "<your-wvd-tenant>"
- $hostPool = "<wvd-pool>"
- $appGroup = "Desktop Application Group"
- $user = "<user-principal>"
- Add-RdsAppGroupUser $wvdTenant $hostPool $appGroup $user
- ```
+ >Be careful when creating the *DeleteLocalProfileWhenVHDShouldApply* value. When the FSLogix Profiles system determines a user should have an FSLogix profile, but a local profile already exists, Profile Container will permanently delete the local profile. The user will then be signed in with the new FSLogix profile.
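As a sketch, the registry value described in the step above can also be set with PowerShell on the session host (assumes the FSLogix agent is installed and uses its standard Profiles registry path):

```powershell
# Create DeleteLocalProfileWhenVHDShouldApply so an existing local profile
# is replaced by the FSLogix profile at sign-in (see the warning above)
$regPath = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-ItemProperty -Path $regPath -Name 'DeleteLocalProfileWhenVHDShouldApply' `
    -Value 1 -PropertyType DWord -Force
```

Run this in an elevated session; `-Force` overwrites the value if it already exists.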
## Make sure users can access the Azure NetApp File share
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
To start creating your new host pool:
10. Select **Next: Virtual Machines >**.

11. If you've already created virtual machines and want to use them with the new host pool, select **No**, select **Next: Workspace >** and jump to the [Workspace information](#workspace-information) section. If you want to create new virtual machines and register them to the new host pool, select **Yes**.
+
+12. Once you create your host pool, you can get the host pool's registration key by going to the host pool's **Overview** page and selecting **Registration key**. Use this key when adding virtual machines created outside of Azure Virtual Desktop to your host pool.
+
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the registration key option in the host pool overview blade, highlighted with a red border.](media/registration-key-host-pool-page.png)
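If you prefer to retrieve or refresh the registration key programmatically, here is a hedged sketch using the Az.DesktopVirtualization PowerShell module (resource names are placeholders):

```powershell
# Generate a registration key valid for 27 days...
New-AzWvdRegistrationInfo -ResourceGroupName 'myResourceGroup' -HostPoolName 'myHostPool' `
    -ExpirationTime $((Get-Date).ToUniversalTime().AddDays(27).ToString('yyyy-MM-ddTHH:mm:ss.fffffffZ'))

# ...then read back the token to pass to new session hosts
(Get-AzWvdRegistrationInfo -ResourceGroupName 'myResourceGroup' -HostPoolName 'myHostPool').Token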
### [Azure CLI](#tab/azure-cli)
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/create-host-pools-powershell-2019.md).
-Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can be associated with multiple RemoteApp groups, one desktop app group, and multiple session hosts.
-Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can be associated with multiple RemoteApp groups, one desktop app group, and multiple session hosts.
+Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop. Each host pool can be associated with multiple RemoteApp groups, one desktop app group, and multiple session hosts.
## Create a host pool
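A minimal sketch of creating a pooled host pool with the Az.DesktopVirtualization module (all names and the region are placeholders, not values from this article):

```powershell
# Create a pooled host pool with breadth-first load balancing
New-AzWvdHostPool -ResourceGroupName 'myResourceGroup' `
    -Name 'myHostPool' `
    -Location 'eastus' `
    -HostPoolType 'Pooled' `
    -LoadBalancerType 'BreadthFirst' `
    -PreferredAppGroupType 'Desktop'
```

For a personal host pool, change `-HostPoolType` to `'Personal'` and `-LoadBalancerType` to `'Persistent'`.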
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
To keep your organization's data safe, you may need to adopt a business continuity and disaster recovery (BCDR) strategy. A sound BCDR strategy keeps your apps and workload up and running during planned and unplanned service or Azure outages.
-Azure Virtual Desktop offers BCDR for the Azure Virtual Desktop service to preserve customer metadata during outages. When an outage occurs in a region, the service infrastructure components will fail over to the secondary location and continue functioning as normal. You can still access service-related metadata, and users can still connect to available hosts. End-user connections will stay online as long as the tenant environment or hosts remain accessible.
+Azure Virtual Desktop offers BCDR for the Azure Virtual Desktop service to preserve customer metadata during outages. When an outage occurs in a region, the service infrastructure components will fail over to the secondary location and continue functioning as normal. You can still access service-related metadata, and users can still connect to available hosts. End-user connections will stay online as long as the hosts remain accessible.
To make sure users can still connect during a region outage, you need to replicate their virtual machines (VMs) in a different location. During outages, the primary site fails over to the replicated VMs in the secondary location. Users can continue to access apps from the secondary location without interruption. On top of VM replication, you'll need to keep user identities accessible at the secondary location. If you're using profile containers, you'll also need to replicate them. Finally, make sure your business apps that rely on data in the primary location can fail over with the rest of the data.
virtual-desktop Expand Existing Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/expand-existing-host-pool.md
As you ramp up usage within your host pool, you may need to expand your existing host pool with new session hosts to handle the new load.
-This article will tell you how you can expand an existing host pool with new session hosts.
+This article will tell you how you can expand an existing host pool by adding new session hosts.
## What you need to expand the host pool
Before you start, make sure you've created a host pool and session host virtual
You'll also need the following information from when you first created the host pool and session host VMs:
+- Registration token
- VM size, image, and name prefix
- Domain join administrator credentials
- Virtual network name and subnet name
To expand your host pool by adding virtual machines:
4. Select **Session hosts** from the menu on the left side of the screen.
-5. Select **+Add** to start creating your host pool.
+5. Select **+Add** to start adding session hosts to your host pool.
-6. Ignore the the Basics tab and instead select the **VM details** tab. Here you can view and edit the details of the virtual machine (VM) you want to add to the host pool.
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the "+Add" option in the portal to add session hosts to a host pool, highlighted with a red border.](media/portal-add-vms.png)
-7. Select the resource group you want to create the VMs under, then select the region. You can choose the current region you're using or a new region.
+6. If the registration token you used to initially create the host pool has now expired, you'll receive the following banner. Select **->** to generate a new registration token. If your host pool's registration token is still valid, skip ahead to step 10.
-8. Enter the number of session hosts you want to add to your host pool into **Number of VMs**. For example, if you're expanding your host pool by five hosts, enter **5**.
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the red banner indicating an invalid registration token highlighted with a red border.](media/registration-token.png)
+
+7. Select **Generate new key** and select an expiration date. We recommend setting the expiration date to the maximum of 27 days so that you don't need to regenerate the registration key frequently. Select **OK** to generate the registration key.
+
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot showing the options to generate a new registration key and set an expiration date for it, highlighted with a red border.](media/registration-token-2.png)
+
+8. After a few seconds, your host pool's new registration key will get filled in the text box. Select **Download** and then exit out of the **Registration key** context blade.
+
+9. Select **+Add** once again to start adding session hosts to your host pool.
+
+10. Ignore the Basics tab and instead select the **Virtual machines** tab. Here you can view and edit the details of the virtual machine (VM) you want to add to the host pool.
+
+11. Select the resource group you want to create the VMs in, then select the region. You can choose the current region the VMs in your host pool are in or a new region.
+
+12. Enter the number of session hosts you want to add to your host pool into **Number of VMs**. For example, if you're expanding your host pool by five hosts, enter **5**.
>[!NOTE]
>Although it's possible to edit the image and prefix of the VMs, we don't recommend editing them if you have VMs with different images in the same host pool. Edit the image and prefix only if you plan on removing VMs with older images from the affected host pool.
-9. For the **virtual network information**, select the virtual network and subnet to which you want the virtual machines to be joined to. You can select the same virtual network your existing machines currently use or choose a different one that's more suitable to the region you selected in step 7.
+13. For the **Virtual network**, select the virtual network and subnet to which you want the virtual machines to be joined. You can select the same virtual network your host pool's existing machines are currently in or choose a different one that's more suitable to the region you selected in step 11.
-10. For the **Domain to join**, select if you want to join the virtual machines to Active Directory or [Azure Active Directory](deploy-azure-ad-joined-vm.md). Selecting **Enroll the VM with Intune** automatically enrolls the virtual machines in Intune. All virtual machines in a host pool should be joined to the same domain or Azure AD tenant.
+14. For the **Domain to join**, select if you want to join the virtual machines to Active Directory or [Azure Active Directory](deploy-azure-ad-joined-vm.md). Selecting **Enroll the VM with Intune** automatically enrolls the virtual machines in Intune. All virtual machines in a host pool should be joined to the same domain or Azure AD tenant.
-11. For the **AD domain join UPN**, enter an Active Directory domain username and password associated with the domain you selected. These credentials will be used to join the virtual machines to the Active Directory domain.
+15. For the **AD domain join UPN**, enter an Active Directory domain username and password associated with the domain you selected. These credentials will be used to join the virtual machines to the Active Directory domain.
>[!NOTE]
>Ensure your admin account names comply with the restrictions described here, and that multifactor authentication (MFA) isn't enabled on the account.
-12. For the **Virtual Machine Administrator account**, enter the local administrator account information you want to use for all virtual machines.
+16. For the **Virtual Machine Administrator account**, enter the local administrator account information you want to use for all virtual machines.
-13. Select the **Tags** tab if you have any tags that you want to group the virtual machines with. Otherwise, skip this tab.
+17. Select the **Tags** tab if you have any tags that you want to group the virtual machines with. Otherwise, skip this tab.
-14. Select the **Review + Create** tab. Review your choices, and if everything looks fine, select **Create**.
+18. Select the **Review + Create** tab. Review your choices, and if everything looks fine, select **Create**.
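Once deployment finishes, you can confirm the new session hosts registered to the host pool. A sketch with the Az.DesktopVirtualization module (resource names are placeholders):

```powershell
# List session hosts and their health status in the expanded host pool
Get-AzWvdSessionHost -ResourceGroupName 'myResourceGroup' -HostPoolName 'myHostPool' |
    Select-Object Name, Status, AllowNewSession
```

New hosts should appear with a status of `Available` once the agent finishes registering; hosts stuck in `Unavailable` usually indicate a registration token or domain-join problem.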
## Next steps
virtual-desktop Fslogix Containers Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-containers-azure-files.md
To ensure your Azure Virtual Desktop environment follows best practices:
## Next steps
-Use the following guides to set up a Azure Virtual Desktop environment.
-- To start building out your desktop virtualization solution, see [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md).
-- To create a host pool within your Azure Virtual Desktop tenant, see [Create a host pool with Azure Marketplace](create-host-pools-azure-marketplace.md).
-- To set up fully managed file shares in the cloud, see [Set up Azure Files share](/azure/storage/files/storage-files-active-directory-enable/).
-- To configure FSLogix profile containers, see [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md).
-- To assign users to a host pool, see [Manage app groups for Azure Virtual Desktop](manage-app-groups.md).
-- To access your Azure Virtual Desktop resources from a web browser, see [Connect to Azure Virtual Desktop](./user-documentation/connect-web.md).
+To learn more about storage options for FSLogix profile containers, see [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection for Azure Virtual Desktop (preview). Previously updated : 03/18/2022 Last updated : 03/28/2022
The following list shows websites that are known to work with MMR. MMR is suppos
- Fox Sports - IMDB - Sites with embedded YouTube videos, such as Medium, Udacity, Los Angeles Times, and so on.
+- Teams Live Events (on web)
+ - Currently, Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365. MMR is a short-term workaround for a smoother Teams live events playback on Azure Virtual Desktop.
+
+### How to use MMR for Teams live events
+
+To use MMR for Teams live events:
+
+1. First, open the link to the Teams event in either a Microsoft Edge or Google Chrome browser.
+
+2. Make sure you can see a green check mark next to the [multimedia redirection status icon](#the-multimedia-redirection-status-icon). If the green check mark is there, MMR is enabled for Teams live events.
+
+3. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the Teams app, MMR won't work.
+
+The following screenshot highlights the areas described in the previous steps:
+ ## Requirements
virtual-desktop Set Up Service Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-service-alerts.md
This section shows you how to configure Azure Service Health and how to set up n
We recommend you create service alerts for the following health event types:

-- **Service issue:** Receive notifications on major issues that impact connectivity of your users with the service or with the ability to manage your Azure Virtual Desktop tenant.
+- **Service issue:** Receive notifications on major issues that impact connectivity of your users with the service or with the ability to manage Azure Virtual Desktop.
- **Health advisory:** Receive notifications that require your attention. The following are some examples of this type of notification:
  - Virtual machines (VMs) not securely configured (open port 3389)
  - Deprecation of functionality
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md
For more information about Azure Files performance, see [File share and file sca
## Next steps
-To learn more about FSLogix profile containers, user profile disks, and other user profile technologies, see the table in [FSLogix profile containers and Azure files](fslogix-containers-azure-files.md).
+To learn more about FSLogix profile containers, user profile disks, and other user profile technologies, see the table in [FSLogix profile containers and Azure Files](fslogix-containers-azure-files.md).
If you're ready to create your own FSLogix profile containers, get started with one of these tutorials: -- [Create an Azure file share with a domain controller](create-file-share.md)
+- [Create an Azure file share with Active Directory Domain Services](create-file-share.md)
- [Create an Azure file share with Azure Active Directory](create-profile-container-azure-ad.md) - [Create an Azure file share with Azure Active Directory Domain Services](create-profile-container-adds.md) - [Create an FSLogix profile container for a host pool using Azure NetApp files](create-fslogix-profile-container.md) - The instructions in [Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure](/windows-server/remote/remote-desktop-services/rds-storage-spaces-direct-deployment/) also apply when you use an FSLogix profile container instead of a user profile disk-
-You can also start from the very beginning and set up your own Azure Virtual Desktop solution at [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md).
virtual-desktop Troubleshoot Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-service-connection.md
A user can start Remote Desktop clients and is able to authenticate, however the
3. If the web client is being used, confirm that there are no cached credentials issues.
-4. If the user is part of an Azure Active Directory (AD) user group, make sure the user group is a security group instead of a distribution group. Azure Virtual Desktop doesn't support Azure AD distribution groups.
+4. If the user is part of an Azure Active Directory user group, make sure the user group is a security group instead of a distribution group. Azure Virtual Desktop doesn't support Azure AD distribution groups.
## User loses existing feed and no remote resource is displayed (no feed)
-This error usually appears after a user moved their subscription from one Azure AD tenant to another. As a result, the service loses track of their user assignments, since those are still tied to the old Azure AD tenant.
+This error usually appears after a user moved their subscription from one Azure Active Directory tenant to another. As a result, the service loses track of their user assignments, since those are still tied to the old Azure Active Directory tenant.
To resolve this, all you need to do is reassign the users to their app groups.
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
If your operation goes over the quota limit, you can do one of the following thi
### Error: Can't see user assignments in app groups.
-**Cause**: This error usually happens after you've moved the subscription from 1 Azure Active Directory (AD) tenant to another. If your old assignments are still tied to the old Azure AD tenant, the Azure portal will lose track of them.
+**Cause**: This error usually happens after you've moved the subscription from one Azure Active Directory tenant to another. If your old assignments are still tied to the previous Azure Active Directory tenant, the Azure portal will lose track of them.
**Fix**: You'll need to reassign users to app groups.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
The PowerShell commands that migrate metadata from Azure Virtual Desktop (classi
### Increased application group limit
-We've increased number of Azure Virtual Desktop application groups you can have on each Azure Active Directory (Azure AD) tenant from 200 to 500. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/increase-in-avd-application-group-limit-to-500/m-p/3094678).
+We've increased the number of Azure Virtual Desktop application groups you can have on each Azure Active Directory tenant from 200 to 500. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/increase-in-avd-application-group-limit-to-500/m-p/3094678).
### Updates to required URLs
You can now automatically create trusted launch virtual machines through the hos
### Azure Active Directory Join VMs with FSLogix profiles on Azure Files
-Azure AD-joined session hosts for FSLogix profiles on Azure Files in Windows 10 and 11 multi-session is now in public preview. We've updated Azure Files to use a Kerberos protocol for Azure AD that lets you secure folders in the file share to individual users. This new feature also allows FSLogix to function within your deployment without an Active Directory Domain Controller. For more information, check out [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-fslogix-profiles-for-azure-ad/ba-p/3019855).
+Azure Active Directory-joined session hosts for FSLogix profiles on Azure Files in Windows 10 and 11 multi-session is now in public preview. We've updated Azure Files to use a Kerberos protocol for Azure Active Directory that lets you secure folders in the file share to individual users. This new feature also allows FSLogix to function within your deployment without an Active Directory Domain Controller. For more information, check out [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-fslogix-profiles-for-azure-ad/ba-p/3019855).
### Azure Virtual Desktop pricing calculator updates
You can also now set host pool, app group, and workspace diagnostic settings whi
### Azure Active Directory domain join
-Azure Active Directory domain join is now generally available. This service lets you join your session hosts to Azure Active Directory. Domain join also lets you autoenroll into Intune as part of Microsoft Endpoint Manager. You can access this feature in the Azure public cloud, but not the Government cloud or Azure China. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-general-availability-of-azure-ad-joined-vms-support/ba-p/2751083).
+Azure Active Directory domain join is now generally available. This service lets you join your session hosts to Azure Active Directory (Azure AD). Domain join also lets you autoenroll into Intune as part of Microsoft Endpoint Manager. You can access this feature in the Azure public cloud, but not the Government cloud or Azure China. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-general-availability-of-azure-ad-joined-vms-support/ba-p/2751083).
### Azure China
Multimedia redirection gives you smooth video playback while watching videos in
Azure Virtual Desktop now supports Windows Defender Application Control to control which drivers and applications are allowed to run on Windows virtual machines (VMs), and Azure Disk Encryption, which uses Windows BitLocker to provide volume encryption for the OS and data disks of your VMs. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/support-for-windows-defender-application-control-and-azure-disk/m-p/2658633#M7685).
-### Signing into Azure AD using smart cards are now supported in Azure Virtual Desktop
+### Signing in to Azure Active Directory using smart cards is now supported in Azure Virtual Desktop
-While this isn't a new feature for Azure AD, Azure Virtual Desktop now supports configuring Active Directory Federation Services to sign in with smart cards. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/signing-in-to-azure-ad-using-smart-cards-now-supported-in-azure/m-p/2654209#M7671).
+While this isn't a new feature for Azure Active Directory, Azure Virtual Desktop now supports configuring Active Directory Federation Services to sign in with smart cards. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/signing-in-to-azure-ad-using-smart-cards-now-supported-in-azure/m-p/2654209#M7671).
### Screen capture protection is now generally available
Here's what this change does for you:
- Monitoring functions that used to be done through PowerShell or the Diagnostics Service web app have now moved to Log Analytics in the Azure portal. You also now have two options to visualize your reports. You can run Kusto queries and use Workbooks to create visual reports. -- You're no longer required to complete Azure Active Directory (Azure AD) consent to use Azure Virtual Desktop. In this update, the Azure AD tenant on your Azure subscription authenticates your users and provides Azure RBAC controls for your admins.
+- You're no longer required to complete Azure Active Directory consent to use Azure Virtual Desktop. In this update, the Azure Active Directory tenant on your Azure subscription authenticates your users and provides Azure RBAC controls for your admins.
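The monitoring change above mentions running Kusto queries in Log Analytics. As an illustration, a minimal query over the Azure Virtual Desktop connection data might look like the following sketch. It assumes AVD diagnostic settings are already streaming to your Log Analytics workspace; `WVDConnections`, `UserName`, and `State` are the standard AVD table and column names:

```kusto
// Count connection events per user and state over the last day.
WVDConnections
| where TimeGenerated > ago(24h)
| summarize Events = count() by UserName, State
```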
### PowerShell support
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
If your script is on a local server, you might still need to open additional fir
* You can have only one version of an extension applied to the VM. To run a second custom script, you can update the existing extension with a new configuration. Alternatively, you can remove the custom script extension and reapply it with the updated script. * If you want to schedule when a script will run, use the extension to create a Cron job. * When the script is running, you'll only see a "transitioning" extension status from the Azure portal or CLI. If you want more frequent status updates for a running script, you'll need to create your own solution.
-* The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as *Curl*.
+* The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as `curl`.
* Be aware of non-default directory locations that your scripts or commands might rely on. Have logic to handle this situation. ## Extension schema
az vm extension set \
If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension), or an [Azure Resource Manager template](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions) when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
## Troubleshooting When the Custom Script Extension runs, the script is created or downloaded into a directory that's similar to the following example. The command output is also saved into this directory in `stdout` and `stderr` files.
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
While it is possible to create custom VM images by hand or by other tools, the p
- You do not have to make your customization artifacts publicly accessible for Image Builder to be able to fetch them. Image Builder can use your [Azure Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) to fetch these resources and you can restrict the privileges of this identity as tightly as required using Azure-RBAC. This not only means you can keep your artifacts secret, but they also cannot be tampered with by unauthorized actors. - Copies of customization artifacts, transient compute & storage resources, and resulting images are all stored securely within your subscription with access controlled by Azure-RBAC. This includes the build VM used to create the customized image and ensuring your customization scripts and files are not being copied to an unknown VM in an unknown subscription. Furthermore, you can achieve a high degree of isolation from other customers' workloads using [Isolated VM offerings](./isolation.md) for the build VM. - You can connect Image Builder to your existing virtual networks so you can communicate with existing configuration servers (DSC, Chef, Puppet, etc.), file shares, or any other routable servers & services.
+- You can configure Image Builder to assign your User Assigned Identities to the Image Builder Build VM (*that is created by the Image Builder service in your subscription and is used to build and customize the image*). You can then use these identities at customization time to access Azure resources, including secrets, in your subscription. There is no need to assign Image Builder direct access to those resources.
## Regions
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The Image Builder Build VM User Assigned Identity:
* Supports cross subscription scenarios (identity created in one subscription while the image template is created in another subscription under the same tenant) * Doesn't support cross tenant scenarios (identity created in one tenant while the image template is created in another tenant)
-To learn more, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) and [How to use managed identities for Azure resources on an Azure VM](../../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md).
+To learn more, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) and [How to use managed identities for Azure resources on an Azure VM for sign-in](../../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md).
## Properties: source
The shell customizer supports running PowerShell scripts and inline command, the
"type": "PowerShell", "name": "<name>", "scriptUri": "<path to script>",
- "runElevated": "<true false>",
+ "runElevated": <true or false>,
"sha256Checksum": "<sha256 checksum>" }, {
The shell customizer supports running PowerShell scripts and inline command, the
"name": "<name>", "inline": "<PowerShell syntax to run>", "validExitCodes": "<exit code>",
- "runElevated": "<true or false>"
+ "runElevated": <true or false>
} ], ```
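Putting the schema fields above together, a complete PowerShell script customizer entry might look like the following sketch. The name, script URI, and checksum are placeholders, not real values:

```json
{
  "type": "PowerShell",
  "name": "installSoftware",
  "scriptUri": "https://<myStorageAccount>.blob.core.windows.net/scripts/install.ps1",
  "runElevated": true,
  "sha256Checksum": "<sha256 checksum of install.ps1>"
}
```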
virtual-machines Use Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/use-remote-desktop.md
Next, install xfce using `apt` as follows:
```bash sudo apt-get update
-sudo apt-get -y install xfce4
+sudo DEBIAN_FRONTEND=noninteractive apt-get -y install xfce4
sudo apt install xfce4-session ```
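Once xfce is installed, the typical next step in this tutorial is to install xrdp and point it at an xfce session. A minimal sketch, with the privileged install commands shown as comments since they need root and a package mirror:

```shell
# Install and enable the xrdp remote desktop server (run these yourself):
#   sudo apt-get -y install xrdp
#   sudo systemctl enable xrdp

# Tell xrdp to start an xfce session for the current user:
echo xfce4-session > ~/.xsession

# Restart xrdp so it picks up the new session configuration:
#   sudo service xrdp restart
```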
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
# Use SAP Deployment Automation Framework from Azure DevOps Services You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application. - ## Sign up for Azure DevOps Services To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign in_ or create a new account. Record the URL of the project.
Open (https://dev.azure.com) and create a new project by clicking on the _New Pr
### Import the repository
-Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos. Navigate to the Repositories section and choose Import a repository. Import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true)
+Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
+
+Navigate to the Repositories section, choose _Import a repository_, and import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more information, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true).
-Some of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings.
+> [!NOTE]
+> Most of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings.
:::image type="content" source="./media/automation-devops/automation-repo-permissions.png" alt-text="Picture showing repository permissions":::
+If you are unable to import a repository, you can create the 'sap-automation' repository and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
+
+### Create the repository for manual import
+
+> [!NOTE]
+> Only do this step if you are unable to import the repository directly.
+
+Create the 'sap-automation' repository by navigating to the 'Repositories' section in 'Project Settings' and clicking the _Create_ button.
+
+Choose the repository type 'Git' and provide a name for the repository, for example 'sap-automation'.
+### Cloning the repository
+
+To make editing the content easier, you can clone the repository to a local folder and edit it locally.
+Clone the repository to a local folder by clicking the _Clone_ button in the Files view in the Repos section of the portal. For more information, see [Cloning a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true).
++
+### Manually importing the repository content using a local clone
+
+If you weren't able to import the content from the SAP Deployment Automation Framework GitHub repository, you can download it manually and add it to your local clone of the Azure DevOps repository.
+
+Navigate to 'https://github.com/Azure/SAP-automation' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
+
+Copy the content from the zip file to the root folder of your local clone.
+
+Open the local folder in Visual Studio Code. The indicator on the Source Control icon shows that there are changes to synchronize, as shown in the picture below.
++
+Select the Source Control icon, provide a message about the change, for example: "Import from GitHub", and press Ctrl+Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
### Create configuration root folder
-Go to the new repository and create a top level folder called 'WORKSPACES', this folder will be the root folder for all the SAP deployment configuration files. In the dialog, enter 'WORKSPACES' as folder name and 'readme.md' as file name.
+Using your local clone, create a top-level folder called 'WORKSPACES'. This folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE', and 'SYSTEM'.
-Optionally enter some content in the file and save it by clicking the _commit_ button.
+Optionally, copy the sample configuration files from the 'samples/WORKSPACES' folder to the 'WORKSPACES' folder you just created. This allows you to experiment with sample deployments.
-> [!NOTE]
-> In order to create the folder using Git you must also create a file.
+Push the changes to the Azure DevOps repository by selecting the Source Control icon, providing a message about the change, for example: "Import of sample configurations", and pressing Ctrl+Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
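The folder layout described above can also be created from a shell at the root of your local clone. A minimal sketch; the optional copy step assumes the 'samples/WORKSPACES' folder from the repository is present:

```shell
# Create the configuration root folder and the four standard subfolders.
mkdir -p WORKSPACES/DEPLOYER WORKSPACES/LIBRARY WORKSPACES/LANDSCAPE WORKSPACES/SYSTEM

# Optionally seed the workspace with the sample configurations:
# cp -R samples/WORKSPACES/. WORKSPACES/
```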
## Set up the Azure Pipelines
-To remove the Azure resources, you need an Azure Resource Manager service connection.
+To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true).
To create the service connection, go to Project settings and navigate to the Service connections setting in the Pipelines section.
To create the service connection, go to Project settings and navigate to the Ser
Choose _Azure Resource Manager_ as the service connection type and _Service principal (manual)_ as the authentication method. Enter the target subscription, typically the control plane subscription, and provide the service principal details (verify that they're valid using the _Verify_ button). For more information on how to create a service principal, see [Creating a Service Principal](automation-deploy-control-plane.md#prepare-the-deployment-credentials).
-Enter a Service connection name, for instance 'Connection to DEV subscription' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. Select _Verify and save_ to save the service connection.
+Enter a Service connection name, for instance 'Connection to MGMT subscription' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. Select _Verify and save_ to save the service connection.
## Create Azure Pipelines
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. |
| Branch | main | |
| S-Username | `<SAP Support user account name>` | |
-| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon |
+| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon |
| `advice.detachedHead` | false | |
| `skipComponentGovernanceDetection` | true | |
-| `tf_version` | 1.1.4 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| `tf_version` | 1.1.7 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
-Save the variables and assign permissions for all pipelines using _Pipeline permissions_.
+Save the variables.
+
+> [!NOTE]
+> Remember to assign permissions for all pipelines using _Pipeline permissions_.
### Environment specific variables
Create a new variable group 'SDAF-MGMT' for the control plane environment using
| Variable | Value | Notes |
| -- | -- | -- |
| Agent | Either 'Azure Pipelines' or the name of the agent pool containing the deployer, for instance 'MGMT-WEEU-POOL'. | This pool will be created in a later step. |
-| ARM_CLIENT_ID | Service principal application id | |
-| ARM_CLIENT_SECRET | Service principal password | Change variable type to secret by clicking the lock icon |
-| ARM_SUBSCRIPTION_ID | Target subscription ID | |
-| ARM_TENANT_ID | Tenant ID for service principal | |
+| ARM_CLIENT_ID | Enter the service principal application ID. | |
+| ARM_CLIENT_SECRET | Enter the service principal password. | Change the variable type to secret by clicking the lock icon. |
+| ARM_SUBSCRIPTION_ID | Enter the target subscription ID. | |
+| ARM_TENANT_ID | Enter the tenant ID for the service principal. | |
| AZURE_CONNECTION_NAME | Previously created connection name | |
| sap_fqdn | SAP Fully Qualified Domain Name, for example sap.contoso.net | Only needed if Private DNS isn't used. |
+Save the variables.
-Clone the group for each environment 'SDAF-DEV', 'SDAF-QA', ... and update the values to reflect the environment.
-
-| Variable | Value | Notes |
-| | - | -- |
-| Agent | Either 'Azure Pipelines' or the name of the agent pool containing the deployer, for instance 'MGMT-WEEU-POOL' Note, this pool will be created in a later step. |
-| ARM_CLIENT_ID | Service principal application id | |
-| ARM_CLIENT_SECRET | Service principal password | Change variable type to secret by clicking the lock icon |
-| ARM_SUBSCRIPTION_ID | Target subscription ID | |
-| ARM_TENANT_ID | Tenant ID for service principal | |
-| AZURE_CONNECTION_NAME | Previously created connection name | |
-| sap_fqdn | SAP Fully Qualified Domain Name, for example sap.contoso.net | Only needed if Private DNS isn't used. |
-
+> [!NOTE]
+> Remember to assign permissions for all pipelines using _Pipeline permissions_.
+>
+> You can use the clone functionality to create the next environment variable group.
-Save the variables and assign permissions for all pipelines using _Pipeline permissions_.
## Register the Deployer as a self-hosted agent for Azure DevOps
virtual-machines Automation Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-devops-tutorial.md
You'll perform the following tasks during this lab:
- An Azure subscription. If you don't have an Azure subscription, you can [create a free account here](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> [!Note]
+> The free Azure account may not be sufficient to run the deployment.
+
+- A Service Principal with 'Contributor' permissions in the target subscriptions. For more information, see [Prepare the deployment credentials](automation-deploy-control-plane.md#prepare-the-deployment-credentials).
+
- A configured Azure DevOps instance. Follow the steps in [Configure Azure DevOps Services for SAP Deployment Automation](automation-configure-devops.md).
- For the 'SAP software acquisition' and the 'Configuration and SAP installation' pipelines, a configured self-hosted agent. See [Configure a self-hosted agent for SAP Deployment Automation](automation-configure-devops.md#register-the-deployer-as-a-self-hosted-agent-for-azure-devops).

> [!Note]
-> The free Azure account may not be sufficient to run the deployment.
+> The self hosted agent virtual machine will be deployed as part of the control plane deployment.
## Overview
virtual-machines Automation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tutorial.md
A valid SAP user account (SAP-User or S-User account) with software download pri
- `az` version 2.28.0 or higher
- - `terraform` version 1.0.8 or higher. [Upgrade using the Terraform instructions](https://www.terraform.io/upgrade-guides/0-12.html) as necessary.
+ - `terraform` version 1.1.4 or higher. [Upgrade using the Terraform instructions](https://www.terraform.io/upgrade-guides/0-12.html) as necessary.
## Create service principal