Updates from: 08/10/2021 03:12:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Verify each subdomain you plan to use. Verifying just the top-level domain isn't
> [!TIP] > You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use App Service domains. To use App Service domains: >
-> 1. [Buy a custom domain name](/azure/app-service/manage-custom-dns-buy-domain).
+> 1. [Buy a custom domain name](../app-service/manage-custom-dns-buy-domain.md).
> 1. [Add your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
-> 1. Validate the domain name by [managing custom DNS records](/azure/app-service/manage-custom-dns-buy-domain#manage-custom-dns-records).
+> 1. Validate the domain name by [managing custom DNS records](../app-service/manage-custom-dns-buy-domain.md#manage-custom-dns-records).
## Step 2. Create a new Azure Front Door instance
To use your own web application firewall in front of Azure Front Door, you need
## Next steps
-Learn about [OAuth authorization requests](protocols-overview.md).
+Learn about [OAuth authorization requests](protocols-overview.md).
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Custom email verification requires the use of a third-party email provider like
## Create a SendGrid account
-If you don't already have one, start by setting up a SendGrid account (Azure customers can unlock 25,000 free emails each month). For setup instructions, see the [Create a SendGrid Account](../sendgrid-dotnet-how-to-send-email.md#create-a-sendgrid-account) section of [How to send email using SendGrid with Azure](../sendgrid-dotnet-how-to-send-email.md).
+If you don't already have one, start by setting up a SendGrid account (Azure customers can unlock 25,000 free emails each month). For setup instructions, see the [Create a SendGrid Account](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#create-a-sendgrid-account) section of [How to send email using SendGrid with Azure](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#create-a-twilio-sendgrid-account).
-Be sure to complete the section in which you [create a SendGrid API key](../sendgrid-dotnet-how-to-send-email.md#to-find-your-sendgrid-api-key). Record the API key for use in a later step.
+Be sure to complete the section in which you [create a SendGrid API key](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#to-find-your-sendgrid-api-key). Record the API key for use in a later step.
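As a sketch of what the recorded API key is used for, the snippet below builds a SendGrid v3 `mail/send` request body; the addresses, subject, and content are placeholders, and the sending step (shown in a comment) assumes the key is stored in a `SENDGRID_API_KEY` environment variable.

```python
# Hedged sketch: build a SendGrid v3 /mail/send request body.
# All addresses and content below are placeholders, not values from this article.
def build_mail(to_addr, from_addr, subject, html):
    return {
        "personalizations": [{"to": [{"email": to_addr}]}],
        "from": {"email": from_addr},
        "subject": subject,
        "content": [{"type": "text/html", "value": html}],
    }

payload = build_mail("user@example.com", "no-reply@contoso.com",
                     "Verify your email address", "<p>Your code: 123456</p>")

# To send (requires the recorded API key; SendGrid returns HTTP 202 on success):
# requests.post("https://api.sendgrid.com/v3/mail/send",
#               headers={"Authorization": "Bearer " + os.environ["SENDGRID_API_KEY"]},
#               json=payload)
```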
> [!IMPORTANT] > SendGrid offers customers the ability to send emails from shared IP and [dedicated IP addresses](https://sendgrid.com/docs/ui/account-and-settings/dedicated-ip-addresses/). When using dedicated IP addresses, you need to build your own reputation properly with an IP address warm-up. For more information, see [Warming Up An Ip Address](https://sendgrid.com/docs/ui/sending-email/warming-up-an-ip-address/).
You can find an example of a custom email verification policy on GitHub:
- [Custom email verification - DisplayControls](https://github.com/azure-ad-b2c/samples/tree/master/policies/custom-email-verifcation-displaycontrol)

For information about using a custom REST API or any HTTP-based SMTP email provider, see [Define a RESTful technical profile in an Azure AD B2C custom policy](restful-technical-profile.md).
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
Previously updated : 03/15/2021 Last updated : 08/09/2021
This article describes how to set up a federation with another Azure AD B2C tena
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+### Verify the application's publisher domain
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a [Microsoft account](../active-directory-b2c/identity-provider-microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
+
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
 - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md).
 - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
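For the Graph-based option, the operation is a single POST; the hedged sketch below only assembles the request (the app object ID and MPN ID are placeholders, and an access token with the appropriate admin permissions is assumed).

```python
# Hedged sketch of the Microsoft Graph setVerifiedPublisher call.
# The IDs below are placeholders; send the request with an admin token
# (for example, from Graph Explorer). Graph returns 204 No Content on success.
def set_verified_publisher_request(app_object_id, mpn_id):
    url = ("https://graph.microsoft.com/v1.0/applications/"
           f"{app_object_id}/setVerifiedPublisher")
    body = {"verifiedPublisherId": mpn_id}
    return url, body

url, body = set_verified_publisher_request(
    "11111111-2222-3333-4444-555555555555", "1234567")
```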
+## Create an Azure AD B2C application

To enable sign-in for users with an account from another Azure AD B2C tenant (for example, Fabrikam), in your Azure AD B2C tenant (for example, Contoso):
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 06/17/2021 Last updated : 08/09/2021
This article shows you how to enable sign-in for users from a specific Azure AD
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+### Verify the application's publisher domain
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a [Microsoft account](../active-directory-b2c/identity-provider-microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
+
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
 - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md).
 - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
+## Register an Azure AD app

To enable sign-in for users with an Azure AD account from a specific Azure AD organization, in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
Previously updated : 03/17/2021 Last updated : 08/09/2021
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+### Verify the application's publisher domain
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../active-directory/develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../active-directory/develop/publisher-verification-overview.md) about this change.) Note that for Azure AD B2C user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD](../active-directory-b2c/identity-provider-azure-ad-single-tenant.md) tenant as the identity provider. To meet these new requirements, do the following:
+
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
 - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../active-directory/develop/mark-app-as-publisher-verified.md).
 - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../active-directory/develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer). The UI for setting an app's verified publisher is currently disabled for Azure AD B2C tenants.
+## Create a Microsoft account application

To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-akamai.md
Akamai WAF integration includes the following components:
- **Azure AD B2C Tenant** – The authorization server, responsible for verifying the user's credentials using the custom policies defined in the tenant. It's also known as the identity provider.
-- [**Azure Front Door**](https://docs.microsoft.com/azure/frontdoor/front-door-overview) – Responsible for enabling custom domains for the Azure AD B2C tenant. All traffic from the Akamai WAF will be routed to Azure Front Door before arriving at the Azure AD B2C tenant.
+- [**Azure Front Door**](../frontdoor/front-door-overview.md) – Responsible for enabling custom domains for the Azure AD B2C tenant. All traffic from the Akamai WAF will be routed to Azure Front Door before arriving at the Azure AD B2C tenant.
- [**Akamai WAF**](https://www.akamai.com/us/en/resources/waf.jsp) – The web application firewall, which manages all traffic that is sent to the authorization server.

## Integrate with Azure AD B2C
-1. To use custom domains in Azure AD B2C, you must use the custom domain feature provided by Azure Front Door. Learn how to [enable Azure AD B2C custom domains](https://docs.microsoft.com/azure/active-directory-b2c/custom-domain?pivots=b2c-user-flow).
+1. To use custom domains in Azure AD B2C, you must use the custom domain feature provided by Azure Front Door. Learn how to [enable Azure AD B2C custom domains](./custom-domain.md?pivots=b2c-user-flow).
-2. After the custom domain for Azure AD B2C is successfully configured using Azure Front Door, [test the custom domain](https://docs.microsoft.com/azure/active-directory-b2c/custom-domain?pivots=b2c-custom-policy#test-your-custom-domain) before proceeding further.
+2. After the custom domain for Azure AD B2C is successfully configured using Azure Front Door, [test the custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain) before proceeding further.
## Onboard with Akamai
Check the following to ensure all traffic to Azure AD B2C is now going through t
## Next steps

-- [Configure a custom domain in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-domain?pivots=b2c-user-flow)
+- [Configure a custom domain in Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow)
-- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-bloksec.md
- Title: Tutorial to configure Azure Active Directory B2C with BlokSec
-description: Learn how to integrate Azure AD B2C authentication with BlokSec for Passwordless authentication
- Previously updated : 7/15/2021
-zone_pivot_groups: b2c-policy-type
--
-# Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication
-In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [BlokSec](https://bloksec.com/). BlokSec simplifies the end-user login experience by providing customers passwordless authentication and tokenless multifactor authentication (MFA). BlokSec protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks.
-
-## Scenario description
-
-BlokSec integration includes the following components:
-- **Azure AD B2C** – Configured as the authorization server/identity provider for any B2C application.
-- **BlokSec Decentralized Identity Router** – Acts as a gateway for services that wish to apply BlokSec's DIaaS™ to route authentication and authorization requests to end users' Personal Identity Provider (PIdP) applications; configured as an OpenID Connect (OIDC) identity provider in Azure AD B2C.
-- **BlokSec SDK-based mobile app** – Acts as the users' PIdP in the decentralized authentication scenario. The freely downloadable [BlokSec yuID](https://play.google.com/store/apps/details?id=com.bloksec) application can be used if your organization prefers not to develop your own mobile applications using the BlokSec SDKs.
-The following architecture diagram shows the implementation.
-
-![image shows the architecture diagram](./media/partner-bloksec/partner-bloksec-architecture-diagram.png)
-
-|Steps| Description|
-|:--|:--|
-|1.| The user attempts to log in to an Azure AD B2C application and is forwarded to Azure AD B2C's combined sign-in and sign-up policy.|
-|2.| Azure AD B2C redirects the user to the BlokSec decentralized identity router using the OIDC authorization code flow.|
-|3.| The BlokSec decentralized router sends a push notification to the user's mobile app, including all context details of the authentication and authorization request.|
-|4.| The user reviews the authentication challenge; if accepted, the user is prompted for biometry such as a fingerprint or facial scan as available on their device, proving the user's identity.|
-|5.| The response is digitally signed with the user's unique digital key. The final authentication response provides proof of possession, presence, and consent. The response is returned to the BlokSec decentralized identity router.|
-|6.| The BlokSec decentralized identity router verifies the digital signature against the user's immutable unique public key that is stored in a distributed ledger, then replies to Azure AD B2C with the authentication result.|
-|7.| Based on the authentication result, the user is granted or denied access.|
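The redirect in step 2 is a standard OIDC authorization-code request. As a hedged sketch (the real authorization endpoint is discovered from the provider's metadata document, and every value below is a placeholder):

```python
from urllib.parse import urlencode

# Placeholder values only; the actual endpoint comes from the IdP's
# .well-known/openid-configuration metadata document.
authorize_endpoint = "https://idp.example.com/oidc/auth"
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",
    "response_type": "code",
    "response_mode": "form_post",
    "scope": "openid profile email",
    "redirect_uri": "https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp",
    "state": "opaque-anti-forgery-value",
}
request_url = authorize_endpoint + "?" + urlencode(params)
```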
-
-## Onboard to BlokSec
-
-Request a demo tenant with BlokSec by filling out [the form](https://bloksec.com/request-a-demo/). In the message field, indicate that you would like to onboard with Azure AD B2C. Download and install the free BlokSec yuID mobile app from the app store. Once your demo tenant has been prepared, you'll receive an email. On your mobile device where the BlokSec application is installed, select the link to register your admin account with your yuID app.
-
-## Prerequisites
-
-To get started, you'll need:
-- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
-- A BlokSec [trial account](https://bloksec.com/).
-- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
-## Prerequisites
-
-To get started, you'll need:
-- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
-- A BlokSec [trial account](https://bloksec.com/).
-- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
-- Complete the steps in [Get started with custom policies in Azure Active Directory B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy).
-### Part 1 - Create an application registration in BlokSec
-
-1. Sign in to the BlokSec admin portal. A link will be included as part of your account registration email received when you onboard to BlokSec.
-
-2. On the main dashboard, select **Add Application > Create Custom**
-
-3. Complete the application details as follows and submit:
-
- |Property |Value |
- |---|---|
- | Name |Azure AD B2C or your desired application name|
- |SSO type | OIDC|
- |Logo URI |[https://bloksec.io/assets/AzureB2C.png](https://bloksec.io/assets/AzureB2C.png), or a link to the image of your choice|
- |Redirect URIs | https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/oauth2/authresp<BR>**For example**: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp` <BR><BR>If you use a custom domain, enter https://**your-domain-name**/**your-tenant-name**.onmicrosoft.com/oauth2/authresp. <BR> Replace **your-domain-name** with your custom domain, and **your-tenant-name** with the name of your tenant. |
- |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](/azure/active-directory-b2c/openid-connect#send-a-sign-out-request). |
-
-4. Once saved, select the newly created Azure AD B2C application to open the application configuration, and then select **Generate App Secret**.
-
->[!NOTE]
->You'll need the application ID and application secret later to configure the identity provider in Azure AD B2C.
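The redirect URI patterns from the table earlier in this part can be expressed as small helpers; this is only a sketch, with `fabrikam` as a placeholder tenant name.

```python
# Sketch: derive the B2C redirect and logout URIs shown in the table above
# ("fabrikam" is a placeholder tenant name, not a value from this article).
def redirect_uri(tenant):
    return (f"https://{tenant}.b2clogin.com/"
            f"{tenant}.onmicrosoft.com/oauth2/authresp")

def logout_uri(tenant, policy):
    return (f"https://{tenant}.b2clogin.com/"
            f"{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/logout")

print(redirect_uri("fabrikam"))
# -> https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp
```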
--
-### Part 2 - Add a new Identity provider in Azure AD B2C
-
-1. Sign-in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
-
-2. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
-
-3. Choose **All services** in the top-left corner of the Azure portal, then search for and select **Azure AD B2C**.
-
-4. Navigate to **Dashboard > Azure Active Directory B2C > Identity providers**.
-
-5. Select **New OpenID Connect Provider**.
-
-6. Select **Add**.
-
-### Part 3 - Configure an Identity provider
-
-1. Select **Identity provider type > OpenID Connect**.
-
-2. Fill out the form to set up the Identity provider:
-
-|Property |Value |
-|:--|:--|
-|Name |Enter BlokSec yuID – Passwordless, or a name of your choice|
-|Metadata URL|`https://api.bloksec.io/oidc/.well-known/openid-configuration`|
-|Client ID|The application ID from the BlokSec admin UI captured in **Part 1**|
-|Client Secret|The application secret from the BlokSec admin UI captured in **Part 1**|
-|Scope|openid email profile|
-|Response type|code|
-|Domain hint|yuID|
-
-3. Select **OK**.
-
-4. Select **Map this identity provider's claims**.
-
-5. Fill out the form to map the Identity provider:
-
-|Property |Value |
-|:--|:--|
-|User ID|sub|
-|Display name|name|
-|Given name|given_name|
-|Surname|family_name|
-|Email|email|
-
-6. Select **Save** to complete the setup for your new OIDC Identity provider.
-
-### Part 4 - User registration
-
-1. Sign in to the BlokSec admin console with the credentials provided earlier.
-
-2. Navigate to the Azure AD B2C application that was created earlier. Select the gear icon at the top-right, and then select **Create Account**.
-
-3. Enter the user's information in the Create Account form, making note of the Account Name, and select **Submit**.
-
-The user will receive an **account registration email** at the provided email address. Have the user follow the registration link on the mobile device where the BlokSec yuID app is installed.
-
-### Part 5 - Create a user flow policy
-
-You should now see BlokSec as a new OIDC Identity provider listed within your B2C identity providers.
-
-1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
-
-2. Select **New user flow**.
-
-3. Select **Sign up and sign in** > **Version** > **Create**.
-
-4. Enter a **Name** for your policy.
-
-5. In the Identity providers section, select your newly created BlokSec Identity provider.
-
-6. Select **None** for Local Accounts to disable email and password-based authentication.
-
-7. Select **Run user flow**.
-
-8. In the form, enter the **Reply URL**, for example, `https://jwt.ms`.
-
-9. The browser will be redirected to the BlokSec login page. Enter the account name registered during user registration. The user will receive a push notification on their mobile device where the BlokSec yuID application is installed; upon opening the notification, the user will be presented with an authentication challenge.
-
-10. Once the authentication challenge is accepted, the browser will redirect the user to the reply URL.
-
-## Next steps
-
-For additional information, review the following articles:
-- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
-- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
->[!NOTE]
->In Azure Active Directory B2C, [**custom policies**](/azure/active-directory-b2c/custom-policy-overview) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](/azure/active-directory-b2c/user-flow-overview).
-
-### Part 2 - Create a policy key
-
-Store the client secret that you previously recorded in your Azure AD B2C tenant.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
-
-3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-
-4. On the Overview page, select **Identity Experience Framework**.
-
-5. Select **Policy Keys** and then select **Add**.
-
-6. For **Options**, choose `Manual`.
-
-7. Enter a **Name** for the policy key. For example, `BlokSecAppSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
-
-8. In **Secret**, enter your client secret that you previously recorded.
-
-9. For **Key usage**, select `Signature`.
-
-10. Select **Create**.
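The same key set can also be created through the Microsoft Graph beta trust-framework API; the sketch below only assembles the two requests (endpoint and payload shapes are assumptions based on the beta API, the secret value is a placeholder, and an admin access token is required).

```python
# Hedged sketch: create the B2C_1A_BlokSecAppSecret policy key via
# Microsoft Graph (beta). Payload shapes are assumptions; verify against
# the Graph trustFramework reference before use.
keyset_id = "B2C_1A_BlokSecAppSecret"
base = "https://graph.microsoft.com/beta/trustFramework/keySets"

create_request = (base, {"id": keyset_id})                      # 1) create the key set
upload_request = (f"{base}/{keyset_id}/uploadSecret",           # 2) upload the secret
                  {"use": "sig", "k": "<bloksec-client-secret>"})
```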
-
-### Part 3 - Configure BlokSec as an Identity provider
-
-To enable users to sign in using BlokSec decentralized identity, you need to define BlokSec as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using biometry, such as a fingerprint or facial scan, as available on their device, proving the user's identity.
-
-You can define BlokSec as a claims provider by adding a **ClaimsProvider** element to the extension file of your policy.
-
-1. Open the `TrustFrameworkExtensions.xml`.
-
-2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
-
-3. Add a new **ClaimsProvider** as follows:
-
- ```xml
- <ClaimsProvider>
- <Domain>bloksec</Domain>
- <DisplayName>BlokSec</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="BlokSec-OpenIdConnect">
- <DisplayName>BlokSec</DisplayName>
- <Description>Login with your BlokSec decentralized identity</Description>
- <Protocol Name="OpenIdConnect" />
- <Metadata>
- <Item Key="METADATA">https://api.bloksec.io/oidc/.well-known/openid-configuration</Item>
- <!-- Update the Client ID below to the BlokSec Application ID -->
- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
- <Item Key="response_types">code</Item>
- <Item Key="scope">openid profile email</Item>
- <Item Key="response_mode">form_post</Item>
- <Item Key="HttpBinding">POST</Item>
- <Item Key="UsePolicyInRedirectUri">false</Item>
- <Item Key="DiscoverMetadataByTokenIssuer">true</Item>
- <Item Key="ValidTokenIssuerPrefixes">https://api.bloksec.io/oidc</Item>
- </Metadata>
- <CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_BlokSecAppSecret" />
- </CryptographicKeys>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
- <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
- <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
- </OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
- ```
-
-4. Set **client_id** to the application ID from the application registration.
-
-5. Save the file.
-
-### Part 4 - Add a user journey
-
-At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey, otherwise continue to the next step.
-
-1. Open the `TrustFrameworkBase.xml` file from the starter pack.
-
-2. Find and copy the entire contents of the **UserJourneys** element that includes ID=`SignUpOrSignIn`.
-
-3. Open the `TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one.
-
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-
-5. Rename the ID of the user journey. For example, ID=`CustomSignUpSignIn`.
-
-### Part 5 - Add the identity provider to a user journey
-
-Now that you have a user journey, add the new identity provider to the user journey. First add a sign-in button, then link the button to an action. The action is the technical profile you created earlier.
-
-1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name.
-
-2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
-
-The following XML demonstrates the first two orchestration steps of a user journey with the identity provider:
-
-```xml
-<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
- <ClaimsProviderSelections>
- ...
- <ClaimsProviderSelection TargetClaimsExchangeId="BlokSecExchange" />
- </ClaimsProviderSelections>
- ...
-</OrchestrationStep>
-
-<OrchestrationStep Order="2" Type="ClaimsExchange">
- ...
- <ClaimsExchanges>
- <ClaimsExchange Id="BlokSecExchange" TechnicalProfileReferenceId="BlokSec-OpenIdConnect" />
- </ClaimsExchanges>
-</OrchestrationStep>
-```
-
-### Part 6 - Configure the relying party policy
-
-The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/SocialAndLocalAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C will execute. Find the **DefaultUserJourney** element within the relying party. Update the **ReferenceId** to match the ID of the user journey in which you added the identity provider.
-
-In the following example, for the `CustomSignUpSignIn` user journey, the ReferenceId is set to `CustomSignUpSignIn`.
-```xml
-<RelyingParty>
- <DefaultUserJourney ReferenceId="CustomSignUpSignIn" />
- ...
-</RelyingParty>
-```
-
-### Part 7 - Upload the custom policy
-
-1. Sign in to the [Azure portal](https://portal.azure.com/#home).
-
-2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-
-3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
-
-4. Under Policies, select **Identity Experience Framework**.
-Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
-
-### Part 8 - Test your custom policy
-
-1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-
-2. For **Application**, select a web application that you [previously registered](/azure/active-directory-b2c/tutorial-register-applications). The **Reply URL** should show `https://jwt.ms`.
-
-3. Select the **Run now** button.
-
-4. From the sign-up or sign-in page, select **Google** to sign in with Google account.
-
-If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
-
-## Next steps
-
-For additional information, review the following articles:
-- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
-- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
-
+
+ Title: Tutorial to configure Azure Active Directory B2C with BlokSec
+
+description: Learn how to integrate Azure AD B2C authentication with BlokSec for Passwordless authentication
+
+ Last updated : 7/15/2021
+
+zone_pivot_groups: b2c-policy-type
+
+# Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication
+
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [BlokSec](https://bloksec.com/). BlokSec simplifies the end-user login experience by providing customers passwordless authentication and tokenless multifactor authentication (MFA). BlokSec protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks.
+
+## Scenario description
+
+BlokSec integration includes the following components:
+
+- **Azure AD B2C** – Configured as the authorization server/identity provider for any B2C application.
+
+- **BlokSec Decentralized Identity Router** – Acts as a gateway for services that wish to apply BlokSec's DIaaS™ to route authentication and authorization requests to end users' Personal Identity Provider (PIdP) applications; configured as an OpenID Connect (OIDC) identity provider in Azure AD B2C.
+
+- **BlokSec SDK-based mobile app** – Acts as the users' PIdP in the decentralized authentication scenario. The freely downloadable [BlokSec yuID](https://play.google.com/store/apps/details?id=com.bloksec) application can be used if your organization prefers not to develop your own mobile applications using the BlokSec SDKs.
+
+The following architecture diagram shows the implementation.
+
+![image shows the architecture diagram](./media/partner-bloksec/partner-bloksec-architecture-diagram.png)
+
+|Steps| Description|
+|:|:-|
+|1.| User attempts to log in to an Azure AD B2C application and is forwarded to Azure AD B2C's combined sign-in and sign-up policy.|
+|2.| Azure AD B2C redirects the user to the BlokSec decentralized identity router using the OIDC authorization code flow.|
+|3.| The BlokSec decentralized router sends a push notification to the user's mobile app, including all context details of the authentication and authorization request.|
+|4.| The user reviews the authentication challenge; if accepted, the user is prompted for biometry, such as a fingerprint or facial scan as available on their device, proving the user's identity.|
+|5.| The response is digitally signed with the user's unique digital key. The final authentication response provides proof of possession, presence, and consent. The response is returned to the BlokSec decentralized identity router.|
+|6.| The BlokSec decentralized identity router verifies the digital signature against the user's immutable unique public key that is stored in a distributed ledger, and then replies to Azure AD B2C with the authentication result.|
+|7.| Based on the authentication result, the user is granted or denied access.|
+
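+As a sketch, the redirect in step 2 is a standard OIDC authorization code request from Azure AD B2C to the BlokSec decentralized identity router. The endpoint path, client ID, and tenant name below are illustrative placeholders only; the actual authorization endpoint comes from BlokSec's OIDC discovery document.
+
+```
+GET https://api.bloksec.io/oidc/auth
+    ?client_id=<application-id-from-bloksec>
+    &response_type=code
+    &scope=openid%20profile%20email
+    &redirect_uri=https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp
+    &state=<opaque-state-value>
+```
+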
+## Onboard to BlokSec
+
+Request a demo tenant with BlokSec by filling out [the form](https://bloksec.com/request-a-demo/). In the message field, indicate that you would like to onboard with Azure AD B2C. Download and install the free BlokSec yuID mobile app from the app store. Once your demo tenant has been prepared, you'll receive an email. On your mobile device where the BlokSec application is installed, select the link to register your admin account with your yuID app.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+
+- A BlokSec [trial account](https://bloksec.com/).
+
+- If you haven't already done so, [register](/azure/active-directory-b2c/tutorial-register-applications) a web application, [and enable ID token implicit grant](/azure/active-directory-b2c/tutorial-register-applications#enable-id-token-implicit-grant).
+
+- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy).
+
+### Part 1 - Create an application registration in BlokSec
+
+1. Sign in to the BlokSec admin portal. A link will be included as part of your account registration email received when you onboard to BlokSec.
+
+2. On the main dashboard, select **Add Application > Create Custom**.
+
+3. Complete the application details as follows and submit:
+
+ |Property |Value |
+ |||
+ | Name |Azure AD B2C or your desired application name|
+ |SSO type | OIDC|
 |Logo URI |[https://bloksec.io/assets/AzureB2C.png](https://bloksec.io/assets/AzureB2C.png), or a link to the image of your choice|
+ |Redirect URIs | https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/oauth2/authresp<BR>**For Example**: 'https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp' <BR><BR>If you use a custom domain, enter https://**your-domain-name**/**your-tenant-name**.onmicrosoft.com/oauth2/authresp. <BR> Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant. |
+ |Post log out redirect URIs |https://**your-B2C-tenant-name**.b2clogin.com/**your-B2C-tenant-name**.onmicrosoft.com/**{policy}**/oauth2/v2.0/logout <BR> [Send a sign-out request](/azure/active-directory-b2c/openid-connect#send-a-sign-out-request). |
+
+4. Once saved, select the newly created Azure AD B2C application to open the application configuration, and then select **Generate App Secret**.
+
+>[!NOTE]
+>You'll need the application ID and application secret later to configure the identity provider in Azure AD B2C.
+
+### Part 2 - Add a new Identity provider in Azure AD B2C
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
+
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+4. Navigate to **Dashboard > Azure Active Directory B2C > Identity providers**.
+
+5. Select **New OpenID Connect provider**.
+
+6. Select **Add**.
+
+### Part 3 - Configure an Identity provider
+
+1. Select **Identity provider type > OpenID Connect**.
+
+2. Fill out the form to set up the Identity provider:
+
+|Property |Value |
+|:|:|
+|Name |Enter BlokSec yuID ΓÇô Passwordless or a name of your choice|
+|Metadata URL|https://api.bloksec.io/oidc/.well-known/openid-configuration|
+|Client ID|The application ID from the BlokSec admin UI captured in **Part 1**|
+|Client Secret|The application Secret from the BlokSec admin UI captured in **Part 1**|
+|Scope|OpenID email profile|
+|Response type|Code|
+|Domain hint|yuID|
+
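+The **Metadata URL** points to BlokSec's OIDC discovery document, from which Azure AD B2C reads the endpoints and signing keys it needs. As an illustrative sketch only (the endpoint paths are assumptions, not confirmed BlokSec values; the issuer matches the token issuer prefix used later in this article), a discovery document of roughly this shape is expected:
+
+```json
+{
+  "issuer": "https://api.bloksec.io/oidc",
+  "authorization_endpoint": "https://api.bloksec.io/oidc/auth",
+  "token_endpoint": "https://api.bloksec.io/oidc/token",
+  "jwks_uri": "https://api.bloksec.io/oidc/jwks",
+  "response_types_supported": ["code"],
+  "scopes_supported": ["openid", "profile", "email"]
+}
+```
+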
+3. Select **OK**.
+
+4. Select **Map this identity provider's claims**.
+
+5. Fill out the form to map the Identity provider:
+
+|Property |Value |
+|:|:|
+|User ID|sub|
+|Display name|name|
+|Given name|given_name|
+|Surname|family_name|
+|Email|email|
+
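+For reference, the mapping above corresponds to the standard OIDC claims in the ID token that BlokSec returns. A hypothetical decoded token payload (all values are placeholders) would look like:
+
+```json
+{
+  "iss": "https://api.bloksec.io/oidc",
+  "sub": "7f4bc1a3-285e-48ae-8202-5accb43efb0e",
+  "name": "Babs Jensen",
+  "given_name": "Babs",
+  "family_name": "Jensen",
+  "email": "babs@contoso.com"
+}
+```
+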
+6. Select **Save** to complete the setup for your new OIDC Identity provider.
+
+### Part 4 - User registration
+
+1. Sign in to the BlokSec admin console with the credentials provided earlier.
+
+2. Navigate to the Azure AD B2C application that was created earlier. Select the gear icon at the top-right, and then select **Create Account**.
+
+3. Enter the user's information in the Create Account form, making note of the Account Name, and select **Submit**.
+
+The user will receive an **account registration email** at the provided email address. Have the user follow the registration link on the mobile device where the BlokSec yuID app is installed.
+
+### Part 5 - Create a user flow policy
+
+You should now see BlokSec as a new OIDC Identity provider listed within your B2C identity providers.
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+
+2. Select **New user flow**.
+
+3. Select **Sign up and sign in** > **Version** > **Create**.
+
+4. Enter a **Name** for your policy.
+
+5. In the Identity providers section, select your newly created BlokSec Identity provider.
+
+6. Select **None** for Local Accounts to disable email and password-based authentication.
+
+7. Select **Run user flow**.
+
+8. In the form, enter the Reply URL, for example, `https://jwt.ms`.
+
+9. The browser will be redirected to the BlokSec login page. Enter the account name registered during user registration. The user will receive a push notification on the mobile device where the BlokSec yuID application is installed; upon opening the notification, the user will be presented with an authentication challenge.
+
+10. Once the authentication challenge is accepted, the browser will redirect the user to the reply URL.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+
+>[!NOTE]
+>In Azure Active Directory B2C, [**custom policies**](/azure/active-directory-b2c/user-flow-overview) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](/azure/active-directory-b2c/user-flow-overview).
+
+### Part 2 - Create a policy key
+
+Store the client secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+4. On the Overview page, select **Identity Experience Framework**.
+
+5. Select **Policy Keys** and then select **Add**.
+
+6. For **Options**, choose `Manual`.
+
+7. Enter a **Name** for the policy key. For example, `BlokSecAppSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+
+8. In **Secret**, enter your client secret that you previously recorded.
+
+9. For **Key usage**, select `Signature`.
+
+10. Select **Create**.
+
+### Part 3 - Configure BlokSec as an Identity provider
+
+To enable users to sign in using BlokSec decentralized identity, you need to define BlokSec as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using biometry, such as a fingerprint or facial scan as available on their device, proving the user's identity.
+
+You can define BlokSec as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
+
+1. Open the `TrustFrameworkExtensions.xml`.
+
+2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+
+3. Add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <ClaimsProvider>
+ <Domain>bloksec</Domain>
+ <DisplayName>BlokSec</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="BlokSec-OpenIdConnect">
+ <DisplayName>BlokSec</DisplayName>
+ <Description>Login with your BlokSec decentralized identity</Description>
+ <Protocol Name="OpenIdConnect" />
+ <Metadata>
+ <Item Key="METADATA">https://api.bloksec.io/oidc/.well-known/openid-configuration</Item>
+ <!-- Update the Client ID below to the BlokSec Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid profile email</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="DiscoverMetadataByTokenIssuer">true</Item>
+ <Item Key="ValidTokenIssuerPrefixes">https://api.bloksec.io/oidc</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_BlokSecAppSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+4. Set **client_id** to the application ID from the application registration.
+
+5. Save the file.
+
+### Part 4 - Add a user journey
+
+At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey; otherwise, continue to the next step.
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+2. Find and copy the entire contents of the **UserJourneys** element that includes ID=`SignUpOrSignIn`.
+
+3. Open the `TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one.
+
+4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
+
+5. Rename the ID of the user journey. For example, ID=`CustomSignUpSignIn`.
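+
+After the copy and rename, the skeleton in `TrustFrameworkExtensions.xml` looks like the following sketch (the orchestration steps themselves are elided here; they are copied verbatim from the template):
+
+```xml
+<UserJourneys>
+  <UserJourney Id="CustomSignUpSignIn">
+    <OrchestrationSteps>
+      <!-- Orchestration steps copied from the SignUpOrSignIn template -->
+    </OrchestrationSteps>
+  </UserJourney>
+</UserJourneys>
+```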
+
+### Part 5 - Add the identity provider to a user journey
+
+Now that you have a user journey, add the new identity provider to the user journey. First add a sign-in button, then link the button to an action. The action is the technical profile you created earlier.
+
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name.
+
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+
+The following XML demonstrates the first two orchestration steps of a user journey with the identity provider:
+
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="BlokSecExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="BlokSecExchange" TechnicalProfileReferenceId="BlokSec-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+### Part 6 - Configure the relying party policy
+
+The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/SocialAndLocalAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C will execute. Find the **DefaultUserJourney** element within the relying party. Update the **ReferenceId** to match the ID of the user journey in which you added the identity provider.
+
+In the following example, for the `CustomSignUpSignIn` user journey, the **ReferenceId** is set to `CustomSignUpSignIn`.
+
+```xml
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="CustomSignUpSignIn" />
+ ...
+</RelyingParty>
+```
+
+### Part 7 - Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+
+3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+4. Under **Policies**, select **Identity Experience Framework**.
+
+5. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+
+### Part 8 - Test your custom policy
+
+1. Select your relying party policy, for example `B2C_1A_signup_signin`.
+
+2. For **Application**, select a web application that you [previously registered](/azure/active-directory-b2c/tutorial-register-applications). The **Reply URL** should show `https://jwt.ms`.
+
+3. Select the **Run now** button.
+
+4. From the sign-up or sign-in page, select your BlokSec identity provider (for example, **BlokSec yuID – Passwordless**) to sign in with your BlokSec decentralized identity.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-custom-policy)
+
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Use the flags below in the tenant URL of your application in order to change the
:::image type="content" source="media/application-provisioning-config-problem-scim-compatibility/scim-flags.jpg" alt-text="SCIM flags to later behavior.":::
-* Use the following URL to update PATCH behavior and ensure SCIM compliance (e.g. active as boolean and proper group membership removals). This behavior is currently only available when using the flag, but will become the default behavior over the next few months. Note this preview flag currently does not work with on-demand provisioning.
+Use the following URL to update PATCH behavior and ensure SCIM compliance. The flag will alter the following behaviors:
+- Requests made to disable users
+- Requests to add a single-value string attribute
+- Requests to replace multiple attributes
+- Requests to remove a group member
+
+This behavior is currently only available when using the flag, but will become the default behavior over the next few months. Note that this preview flag currently does not work with on-demand provisioning.
* **URL (SCIM Compliant):** aadOptscim062020
* **SCIM RFC references:**
- * https://tools.ietf.org/html/rfc7644#section-3.5.2
- * **Behavior:**
+ * https://tools.ietf.org/html/rfc7644#section-3.5.2
+
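+For example, if the tenant URL configured for your application is `https://api.contoso.com/scim/` (a placeholder host for illustration), appending the flag as a query string enables the new behavior:
+
+```
+https://api.contoso.com/scim/?aadOptscim062020
+```
+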
+Below are sample requests to help outline what the sync engine currently sends versus the requests that are sent once the feature flag is enabled.
+
+**Requests made to disable users:**
+
+**Without feature flag**
```json
- PATCH https://[...]/Groups/ac56b4e5-e079-46d0-810e-85ddbd223b09
- {
+ {
   "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
   "Operations": [
     {
- "op": "remove",
- "path": "members[value eq \"16b083c0-f1e8-4544-b6ee-27a28dc98761\"]"
+ "op": "Replace",
+ "path": "active",
+ "value": "False"
     }
   ]
- }
+}
+ ```
- PATCH https://[...]/Groups/ac56b4e5-e079-46d0-810e-85ddbd223b09
- {
+**With feature flag**
+ ```json
+ {
   "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
   "Operations": [
     {
- "op": "add",
- "path": "members",
+ "op": "replace",
+ "path": "active",
+ "value": false
+ }
+ ]
+}
+ ```
+
+**Requests made to add a single-value string attribute:**
+
+**Without feature flag**
+ ```json
+{
+ "schemas": [
+ "urn:ietf:params:scim:api:messages:2.0:PatchOp"
+ ],
+ "Operations": [
+ {
+ "op": "Add",
+ "path": "nickName",
       "value": [
         {
- "value": "10263a6910a84ef9a581dd9b8dcc0eae"
+ "value": "Babs"
         }
       ]
     }
   ]
- }
+}
+ ```
- PATCH https://[...]/Users/ac56b4e5-e079-46d0-810e-85ddbd223b09
- {
+**With feature flag**
+ ```json
+ {
   "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
   "Operations": [
     {
       "op": "replace",
+ "path": "active",
+ "value": false
+ }
+ ]
+}
+ ```
+
+**Requests to replace multiple attributes:**
+
+**Without feature flag**
+ ```json
+{
+ "schemas": [
+ "urn:ietf:params:scim:api:messages:2.0:PatchOp"
+ ],
+ "Operations": [
+ {
+ "op": "Replace",
+ "path": "displayName",
+ "value": "Pvlo"
+ },
+ {
+ "op": "Replace",
"path": "emails[type eq \"work\"].value",
- "value": "someone@contoso.com"
+ "value": "TestBcwqnm@test.microsoft.com"
},
+ {
+ "op": "Replace",
+ "path": "name.givenName",
+ "value": "Gtfd"
+ },
+ {
+ "op": "Replace",
+ "path": "name.familyName",
+ "value": "Pkqf"
+ },
+ {
+ "op": "Replace",
+ "path": "externalId",
+ "value": "Eqpj"
+ },
+ {
+ "op": "Replace",
+ "path": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber",
+ "value": "Eqpj"
+ }
+ ]
+}
+ ```
+
+**With feature flag**
+ ```json
+{
+ "schemas": [
+ "urn:ietf:params:scim:api:messages:2.0:PatchOp"
+ ],
+ "Operations": [
     {
       "op": "replace",
- "path": "emails[type eq \"work\"].primary",
- "value": true
+ "path": "emails[type eq \"work\"].value",
+ "value": "TestMhvaes@test.microsoft.com"
     },
     {
       "op": "replace",
       "value": {
- "active": false,
- "userName": "someone"
+ "displayName": "Bjfe",
+ "name.givenName": "Kkom",
+ "name.familyName": "Unua",
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber": "Aklq"
       }
     }
   ]
- }
+}
+ ```
+
+**Requests made to remove a group member:**
- PATCH https://[...]/Users/ac56b4e5-e079-46d0-810e-85ddbd223b09
- {
+**Without feature flag**
+ ```json
+{
   "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
   "Operations": [
     {
- "op": "replace",
- "path": "active",
- "value": false
+ "op": "Remove",
+ "path": "members",
+ "value": [
+ {
+ "value": "u1091"
+ }
+ ]
     }
   ]
- }
+}
+ ```
- PATCH https://[...]/Users/ac56b4e5-e079-46d0-810e-85ddbd223b09
- {
+**With feature flag**
+ ```json
+{
   "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
   "Operations": [
     {
- "op": "add",
- "path": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department",
- "value": "Tech Infrastructure"
+ "op": "remove",
+ "path": "members[value eq \"7f4bc1a3-285e-48ae-8202-5accb43efb0e\"]"
     }
   ]
- }
-
+}
```
+
 * **Downgrade URL:** Once the new SCIM compliant behavior becomes the default on the non-gallery application, you can use the following URL to roll back to the old, non SCIM compliant behavior: AzureAdScimPatch2017
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
TLS 1.2 Cipher Suites minimum bar:
### IP Ranges

The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. Note that you will need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
-Azure AD also supports an agent based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening an inbound ports, on a server in their private network. Learn more [here](/app-provisioning/on-premises-scim-provisioning).
+Azure AD also supports an agent-based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, and so on). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](/azure/active-directory/app-provisioning/on-premises-scim-provisioning).
## Build a SCIM endpoint
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
We recommend reviewing MFA Server logs to ensure no users or applications are us
### Convert your domains to managed authentication
-You should now [convert your federated domains in Azure AD to managed](../hybrid/plan-migrate-adfs-password-hash-sync.md#convert-domains-from-federated-to-managed) and remove the staged rollout configuration.
+You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the staged rollout configuration.
This ensures new users use cloud authentication without being added to the migration groups.

### Revert claims rules on AD FS and remove MFA Server authentication provider
For more information on migrating applications to Azure, see [Resources for migr
- [Migrate from Microsoft MFA Server to Azure multi-factor authentication (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)
- [Migrate applications from Windows Active Directory to Azure Active Directory](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
-- [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md)
+- [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md)
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/custom-rbac-for-developers.md
Role-based access control (RBAC) allows certain users or groups to have specific permissions regarding which resources they have access to, what they can do with those resources, and who manages which resources. This article explains application-specific role-based access control.

> [!NOTE]
-> Application role-based access control differs from [Azure role-based access control](/azure/role-based-access-control/overview) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources.
+> Application role-based access control differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources.
App roles and groups both store information about user assignments in the Azure
Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the apps retrieve the roles. This is typically done using extensibility points defined in the middleware available to the platform that is being used to develop the application. Furthermore, developers are responsible for properly securing the custom data store. > [!NOTE]
-> Using [Azure AD B2C Custom policies](/azure/active-directory-b2c/custom-policy-overview) it is possible to interact with custom data stores and to include custom claims within a token.
+> Using [Azure AD B2C Custom policies](../../active-directory-b2c/custom-policy-overview.md) it is possible to interact with custom data stores and to include custom claims within a token.
## Choosing an approach
Although either app roles or groups can be used for authorization, key differenc
- [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md).
- [Register an application with the Microsoft identity platform](./quickstart-register-app.md).
-- [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices).
+- [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md).
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration-confidential-client.md
For app registrations:
The confidential client scenarios are:
- - [Daemon scenarios](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.
- - [Web API calling downstream web APIs](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
- - [Web app calling web APIs](/azure/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by web apps that sign in users and call a downstream web API.
+ - [Daemon scenarios](?tabs=daemon#migrate-daemon-apps) supported by web apps, web APIs, and daemon console applications.
+ - [Web API calling downstream web APIs](?tabs=obo#migrate-a-web-api-that-calls-downstream-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
+ - [Web app calling web APIs](?tabs=authcode#migrate-a-web-app-that-calls-web-apis) supported by web apps that sign in users and call a downstream web API.
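For orientation, the daemon (client credentials) scenario reduces on the wire to a single form-encoded request against the tenant's token endpoint. The sketch below only builds that request body per the OAuth 2.0 client credentials grant; the tenant, client ID, and secret are placeholders, and a real app should use MSAL rather than hand-rolling the request.

```python
from urllib.parse import urlencode

TENANT = "contoso.onmicrosoft.com"  # placeholder tenant
# The body below would be POSTed to the tenant's v2.0 token endpoint:
TOKEN_ENDPOINT = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

def client_credentials_body(client_id: str, client_secret: str, scope: str) -> str:
    """Form-encoded body for the OAuth 2.0 client credentials grant."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,  # e.g. "https://graph.microsoft.com/.default"
    })

body = client_credentials_body("app-id", "app-secret",
                               "https://graph.microsoft.com/.default")
```

MSAL's `ConfidentialClientApplication` wraps exactly this exchange, adding certificate assertions and token caching on top.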
You might have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the process of migrating from ADAL.NET to MSAL.NET. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
The ADAL code for your app uses daemon scenarios if it contains a call to `Authe
- A resource (app ID URI) as a first parameter - `IClientAssertionCertificate` or `ClientAssertion` as the second parameter
-`AuthenticationContext.AcquireTokenAsync` doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using the [web API calling downstream web APIs ](/azure/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
+`AuthenticationContext.AcquireTokenAsync` doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using the [web API calling downstream web APIs](?tabs=obo#migrate-a-web-api-that-calls-downstream-web-apis) scenario.
#### Update the code of daemon scenarios
You can troubleshoot the exception by using these steps:
## Next steps

Learn more about the [differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md).
-Learn more about [token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
+Learn more about [token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/security-best-practices-for-app-registration.md
Scenarios that require **implicit flow** can now use **Auth code flow** to reduc
## Credential configuration

Credentials are a vital part of an application registration when your application is used as a confidential client. If your app registration is used only as a Public Client App (allows users to sign in using a public endpoint), ensure that you don't have any credentials on your application object. Review the credentials used in your applications for freshness of use and their expiration. An unused credential on an application can result in a security breach.
-While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application. Monitor your production pipelines to ensure credentials of any kind are never committed into code repositories. If using Azure, we strongly recommend using Managed Identity so application credentials are automatically managed. Refer to the [managed identities documentation](../managed-identities-azure-resources/overview.md) for more details. [Credential Scanner](/azure/security/develop/security-code-analysis-overview#credential-scanner) is a static analysis tool that you can use to detect credentials (and other sensitive content) in your source code and build output.
+While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application. Monitor your production pipelines to ensure credentials of any kind are never committed into code repositories. If using Azure, we strongly recommend using Managed Identity so application credentials are automatically managed. Refer to the [managed identities documentation](../managed-identities-azure-resources/overview.md) for more details. [Credential Scanner](../../security/develop/security-code-analysis-overview.md#credential-scanner) is a static analysis tool that you can use to detect credentials (and other sensitive content) in your source code and build output.
![certificates and secrets on Azure portal](media/active-directory-application-registration-best-practices/credentials.png)
active-directory Licensing Directory Independence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-directory-independence.md
Title: Characteristics of multiple tenant interaction - Azure AD | Microsoft Docs
+ Title: Characteristics of multi-tenant interaction - Azure AD | Microsoft Docs
description: Understanding the data independence of your Azure Active Directory organizations documentationcenter: ''
-# Understand how multiple Azure Active Directory organizations interact
+# Understand how multiple Azure Active Directory tenant organizations interact
In Azure Active Directory (Azure AD), each Azure AD organization is fully independent: a peer that is logically independent from the other Azure AD organizations that you manage. This independence between organizations includes resource independence, administrative independence, and synchronization independence. There is no parent-child relationship between organizations.
In Azure Active Directory (Azure AD), each Azure AD organization is fully indepe
If a non-administrative user of organization 'Contoso' creates a test organization 'Test,' then:

* By default, the user who creates an organization is added as an external user in that new organization, and assigned the global administrator role in that organization.
-* The administrators of organization 'Contoso' have no direct administrative privileges to organization 'Test,' unless an administrator of 'Test' specifically grants them these privileges. However, administrators of 'Contoso' can control access to organization 'Test' if they control the user account that created 'Test.'
+* The administrators of organization 'Contoso' have no direct administrative privileges to organization 'Test,' unless an administrator of 'Test' specifically grants them these privileges. However, administrators of 'Contoso' can control access to organization 'Test' if they sign in to the user account that created 'Test.'
* If you add or remove an Azure AD role for a user in one organization, the change does not affect the roles that the user is assigned in any other Azure AD organization.

## Synchronization independence
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/azure-ad-account.md
Previously updated : 06/02/2021 Last updated : 08/09/2021
Azure AD account is an identity provider option for your self-service sign-up us
![Azure AD account in a self-service sign-up user flow](media/azure-ad-account/azure-ad-account-user-flow.png)
+## Verifying the application's publisher domain
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
+
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
+ - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../develop/mark-app-as-publisher-verified.md).
+ - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer).
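The Graph call mentioned in the last step is the `setVerifiedPublisher` action on the application object. A sketch of the request shape follows; the application object ID and MPN ID are placeholder values you would substitute, and the request can be issued from Graph Explorer or any HTTP client with a suitable token.

```python
import json

def set_verified_publisher_request(app_object_id: str, mpn_id: str) -> dict:
    """Request shape for Microsoft Graph's applications/setVerifiedPublisher action."""
    return {
        "method": "POST",
        "url": (f"https://graph.microsoft.com/v1.0/applications/"
                f"{app_object_id}/setVerifiedPublisher"),
        "headers": {"Content-Type": "application/json"},
        # The body carries the MPN ID of the verified publisher.
        "body": json.dumps({"verifiedPublisherId": mpn_id}),
    }

req = set_verified_publisher_request("<app-object-id>", "<mpn-id>")
```

An `Authorization: Bearer <token>` header with an appropriately consented token is also required when actually sending the request.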
+
## Next steps

- [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)
active-directory Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/microsoft-account.md
Previously updated : 07/26/2021 Last updated : 08/09/2021
Microsoft account is an identity provider option for your self-service sign-up u
![Microsoft account in a self-service sign-up user flow](media/microsoft-account/microsoft-account-user-flow.png)
+## Verifying the application's publisher domain
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD tenant](azure-ad-account.md) as the identity provider. To meet these new requirements, do the following:
+
+1. [Verify your company identity using your Microsoft Partner Network (MPN) account](https://docs.microsoft.com/partner-center/verification-responses). This process verifies information about your company and your company's primary contact.
+1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
+ - If the app registration for the Microsoft account identity provider is in an Azure AD tenant, [verify your app in the App Registration portal](../develop/mark-app-as-publisher-verified.md).
+ - If your app registration for the Microsoft account identity provider is in an Azure AD B2C tenant, [mark your app as publisher verified using Microsoft Graph APIs](../develop/troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) (for example, using Graph Explorer).
+
## Next steps

- [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When a guest signs in to access resources in a partner organization for the firs
> [!NOTE]
> The consent experience appears only after the user signs in, and not before. There are some scenarios where the consent experience will not be displayed to the user, for example:
> - The user already accepted the consent experience
-> - The admin [grants tenant-wide admin consent to an application](/azure/active-directory/manage-apps/grant-admin-consent)
+> - The admin [grants tenant-wide admin consent to an application](../manage-apps/grant-admin-consent.md)
In your directory, the guest's **Invitation accepted** value changes to **Yes**. If an MSA was created, the guest's **Source** shows **Microsoft Account**. For more information about guest user account properties, see [Properties of an Azure AD B2B collaboration user](user-properties.md). If you see an error that requires admin consent while accessing an application, see [how to grant admin consent to apps](../develop/v2-admin-consent.md).
If you see an error that requires admin consent while accessing an application,
- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
- [How do information workers add B2B collaboration users to Azure Active Directory?](add-users-information-worker.md)
- [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)
-- [Leave an organization as a guest user](leave-the-organization.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
This phase includes the following capabilities:
### Business requirements review

-- Assess the primary reason to switch off existing systems and [move to Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/overview).
+- Assess the primary reason to switch off existing systems and [move to Azure AD B2C](../../active-directory-b2c/overview.md).
-- For a new application, [plan and design](https://docs.microsoft.com/azure/active-directory-b2c/best-practices#planning-and-design) the Customer Identity Access Management (CIAM) system
+- For a new application, [plan and design](../../active-directory-b2c/best-practices.md#planning-and-design) the Customer Identity Access Management (CIAM) system
-- Identify customer's location and [create a tenant in the corresponding datacenter](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant).
+- Identify customer's location and [create a tenant in the corresponding datacenter](../../active-directory-b2c/tutorial-create-tenant.md).
- Check the type of applications you have
- - Check the platforms that are currently supported - [MSAL](https://docs.microsoft.com/azure/active-directory/develop/msal-overview) or [Open source](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
- - For backend services, use the [client credentials flow](https://docs.microsoft.com/azure/active-directory/develop/msal-authentication-flows#client-credentials).
+ - Check the platforms that are currently supported - [MSAL](../develop/msal-overview.md) or [Open source](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
+ - For backend services, use the [client credentials flow](../develop/msal-authentication-flows.md#client-credentials).
- If you intend to migrate from an existing Identity Provider (IdP)
- - Consider using the [seamless migration approach](https://docs.microsoft.com/azure/active-directory-b2c/user-migration#seamless-migration)
+ - Consider using the [seamless migration approach](../../active-directory-b2c/user-migration.md#seamless-migration)
- Learn [how to migrate the existing applications](https://github.com/azure-ad-b2c/user-migration) - Ensure the coexistence of multiple solutions at once.
This phase includes the following capabilities:
### Stakeholders

When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right
-stakeholders](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-deployment-plans#include-the-right-stakeholders) and that stakeholders understand their roles.
+stakeholders](./active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholders understand their roles.
- Identify the primary architect, project manager, and owner for the application.
This phase includes the following capabilities:
| Capability | Description |
|:-|:--|
-| [Deploy authentication and authorization](#deploy-authentication-and-authorization) | Understand the [authentication and authorization](https://docs.microsoft.com/azure/active-directory/develop/authentication-vs-authorization) scenarios |
+| [Deploy authentication and authorization](#deploy-authentication-and-authorization) | Understand the [authentication and authorization](../develop/authentication-vs-authorization.md) scenarios |
| [Deploy applications and user identities](#deploy-applications-and-user-identities) | Plan to deploy client application and migrate user identities |
| [Client application onboarding and deliverables](#client-application-onboarding-and-deliverables) | Onboard the client application and test the solution |
| [Security](#security) | Enhance the security of your Identity solution |
This phase includes the following capabilities:
### Deploy authentication and authorization

-- Start with [setting up an Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant).
+- Start with [setting up an Azure AD B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md).
- For business driven authorization, use the [Azure AD B2C Identity Experience Framework (IEF) sample user journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements)
Follow this sample checklist for more guidance:
All Azure AD B2C projects start with one or more client applications, which may have different business goals.
-1. [Create or configure client applications](https://docs.microsoft.com/azure/active-directory-b2c/app-registrations-training-guide). Refer to these [code samples](https://docs.microsoft.com/azure/active-directory-b2c/code-samples) for implementation.
+1. [Create or configure client applications](../../active-directory-b2c/app-registrations-training-guide.md). Refer to these [code samples](../../active-directory-b2c/integrate-with-app-code-samples.md) for implementation.
-2. Next, setup your user journey based on built-in or custom user flows. [Learn when to use user flows vs. custom policies](https://docs.microsoft.com/azure/active-directory-b2c/user-flow-overview#comparing-user-flows-and-custom-policies).
+2. Next, set up your user journey based on built-in or custom user flows. [Learn when to use user flows vs. custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies).
-3. Setup IdPs based on your business need. [Learn how to add Azure Active Directory B2C as an IdP](https://docs.microsoft.com/azure/active-directory-b2c/add-identity-provider).
+3. Set up IdPs based on your business need. [Learn how to add Azure Active Directory B2C as an IdP](../../active-directory-b2c/add-identity-provider.md).
-4. Migrate your users. [Learn about user migration approaches](https://docs.microsoft.com/azure/active-directory-b2c/user-migration). Refer to [Azure AD B2C IEF sample user journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios.
+4. Migrate your users. [Learn about user migration approaches](../../active-directory-b2c/user-migration.md). Refer to [Azure AD B2C IEF sample user journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios.
Consider this sample checklist as you **deploy your applications**:
Consider this sample checklist as you **deploy your applications**:
- Check if all the frontend and backend applications are hosted in on-premises, cloud, or hybrid-cloud.

-- Check the platforms/languages used such as, [ASP.NET](https://docs.microsoft.com/azure/active-directory-b2c/quickstart-web-app-dotnet), Java, and Node.js.
+- Check the platforms/languages used, such as [ASP.NET](../../active-directory-b2c/quickstart-web-app-dotnet.md), Java, and Node.js.
- Check where the current user attributes are stored. It could be Lightweight Directory Access Protocol (LDAP) or databases.
Consider this sample checklist as you **deploy user identities**:
- Check the number of users accessing the applications.

-- Check the type of IdPs that are needed. For example, Facebook, local account, and [Active Directory Federation Services (AD FS)](https://docs.microsoft.com/windows-server/identity/active-directory-federation-services).
+- Check the type of IdPs that are needed. For example, Facebook, local account, and [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services).
-- Outline the claim schema that is required from your application, [Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/claimsschema), and your IdPs if applicable.
+- Outline the claim schema that is required from your application, [Azure AD B2C](../../active-directory-b2c/claimsschema.md), and your IdPs if applicable.
-- Outline the information that is required to capture during a [sign-in/sign-up flow](https://docs.microsoft.com/azure/active-directory-b2c/add-sign-up-and-sign-in-policy?pivots=b2c-user-flow).
+- Outline the information that is required to capture during a [sign-in/sign-up flow](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow).
### Client application onboarding and deliverables
Consider this sample checklist while you **onboard an application**:
| Define the target group of the application | Check if this application is an end customer application, business customer application, or a digital service. Check if there is a need for employee login. |
| Identify the business value behind an application | Understand the full business case behind an application to find the best fit of Azure AD B2C solution and integration with further client applications.|
| Check the identity groups you have | Cluster identities in different types of groups with different types of requirements, such as **Business to Customer** (B2C) for end customers and business customers, **Business to Business** (B2B) for partners and suppliers, **Business to Employee** (B2E) for your employees and external employees, **Business to Machine** (B2M) for IoT device logins and service accounts.|
-| Check the IdP you need for your business needs and processes | Azure AD B2C [supports several types of IdPs](https://docs.microsoft.com/azure/active-directory-b2c/add-identity-provider#select-an-identity-provider) and depending on the use case the right IdP should be chosen. For example, for a Customer to Customer mobile application a fast and easy user login is required. In another use case, for a Business to Customer with digital services additional compliance requirements are necessary. The user may need to log in with their business identity such as E-mail login. |
+| Check the IdP you need for your business needs and processes | Azure AD B2C [supports several types of IdPs](../../active-directory-b2c/add-identity-provider.md#select-an-identity-provider) and depending on the use case the right IdP should be chosen. For example, for a Customer to Customer mobile application a fast and easy user login is required. In another use case, for a Business to Customer with digital services additional compliance requirements are necessary. The user may need to log in with their business identity such as E-mail login. |
| Check the regulatory constraints | Check if there is any reason to have remote profiles or specific privacy policies. |
-| Design the sign-in and sign-up flow | Decide whether an email verification or email verification inside sign-ups will be needed. First check-out process such as Shop systems or [Azure AD Multi-Factor Authentication (MFA)](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) is needed or not. Watch [this video](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
+| Design the sign-in and sign-up flow | Decide whether an email verification or email verification inside sign-ups will be needed. First check-out process such as Shop systems or [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) is needed or not. Watch [this video](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
| Check the type of application and authentication protocol used or that will be implemented | Information exchange about the implementation of client application such as Web application, SPA, or Native application. Authentication protocols for client application and Azure AD B2C could be OAuth, OIDC, and SAML. Watch [this video](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9)|
-| Plan user migration | Discuss the possibilities of [user migration with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/user-migration#:~:text=Pre%20Migration%20Flow%20in%20Azure%20AD%20B2C%20In,B2C%20directory%20with%20the%20current%20credentials.%20See%20More.). There are several scenarios possible such as Just In Times (JIT) migration, and bulk import/export. Watch [this video](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2). You can also consider using [Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=3) for user migration.|
+| Plan user migration | Discuss the possibilities of [user migration with Azure AD B2C](../../active-directory-b2c/user-migration.md). There are several scenarios possible such as Just In Times (JIT) migration, and bulk import/export. Watch [this video](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2). You can also consider using [Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=3) for user migration.|
Consider this sample checklist while you **deliver**.

| Capability | Description |
|:--|:-|
|Protocol information| Gather the base path, policies, metadata URL of both variants. Depending on the client application, specify the attributes such as sample login, client application ID, secrets, and redirects.|
-| Application samples | Refer to the provided [sample codes](https://docs.microsoft.com/azure/active-directory-b2c/code-samples). |
-|Pen testing | Before the tests, inform your operations team about the pen tests and then test all user flows including the OAuth implementation. Learn more about [Penetration testing](https://docs.microsoft.com/azure/security/fundamentals/pen-testing) and the [Microsoft Cloud unified penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
-| Unit testing | Perform unit testing and generate tokens [using Resource owner password credential (ROPC) flows](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth-ropc). If you hit the Azure AD B2C token limit, [contact the support team](https://docs.microsoft.com/azure/active-directory-b2c/support-options). Reuse tokens to reduce investigation efforts on your infrastructure. [Setup a ROPC flow](https://docs.microsoft.com/azure/active-directory-b2c/add-ropc-policy?tabs=app-reg-ga&pivots=b2c-user-flow).|
-| Load testing | Expect reaching Azure AD B2C [service limits](https://docs.microsoft.com/azure/active-directory-b2c/service-limits). Evaluate the expected number of authentications per month your service will have. Evaluate the expected number of average user logins per month. Assess the expected high load traffic durations and business reason such as holidays, migrations, and events. Evaluate the expected peak sign-up rate, for example, number of requests per second. Evaluate the expected peak traffic rate with MFA, for example, requests per second. Evaluate the expected traffic geographic distribution and their peak rates.
+| Application samples | Refer to the provided [sample codes](../../active-directory-b2c/integrate-with-app-code-samples.md). |
+|Pen testing | Before the tests, inform your operations team about the pen tests and then test all user flows including the OAuth implementation. Learn more about [Penetration testing](../../security/fundamentals/pen-testing.md) and the [Microsoft Cloud unified penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+| Unit testing | Perform unit testing and generate tokens [using Resource owner password credential (ROPC) flows](../develop/v2-oauth-ropc.md). If you hit the Azure AD B2C token limit, [contact the support team](../../active-directory-b2c/support-options.md). Reuse tokens to reduce investigation efforts on your infrastructure. [Setup a ROPC flow](../../active-directory-b2c/add-ropc-policy.md?pivots=b2c-user-flow&tabs=app-reg-ga).|
+| Load testing | Expect reaching Azure AD B2C [service limits](../../active-directory-b2c/service-limits.md). Evaluate the expected number of authentications per month your service will have. Evaluate the expected number of average user logins per month. Assess the expected high load traffic durations and business reason such as holidays, migrations, and events. Evaluate the expected peak sign-up rate, for example, number of requests per second. Evaluate the expected peak traffic rate with MFA, for example, requests per second. Evaluate the expected traffic geographic distribution and their peak rates.
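As a rough aid for the load-testing estimates above, peak request rates can be approximated from monthly authentication volume and an assumed peak-to-average factor. A sketch follows; the 10x default factor is an arbitrary assumption to replace with your own traffic data.

```python
def peak_requests_per_second(monthly_authentications: int,
                             peak_factor: float = 10.0) -> float:
    """Rough peak RPS estimate from monthly authentication volume.

    Averages traffic over a 30-day month, then multiplies by a
    peak-to-average factor. Both the 30-day month and the default
    10x factor are simplifying assumptions.
    """
    seconds_per_month = 30 * 24 * 3600  # 2,592,000 seconds
    return monthly_authentications / seconds_per_month * peak_factor

# e.g. 5 million sign-ins per month with a 10x peak factor -> ~19 RPS
rps = peak_requests_per_second(5_000_000)
```

Comparing the result against the published Azure AD B2C service limits shows early whether the expected peaks need a support conversation.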
### Security

Consider this sample checklist to enhance the security of your application depending on your business needs:

-- Check if strong authentication method such as [MFA](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) is required. For users who trigger high value transactions or other risk events its suggested to use MFA. For example, for banking and finance applications, online shops - first checkout process.
+- Check if a strong authentication method such as [MFA](../authentication/concept-mfa-howitworks.md) is required. For users who trigger high-value transactions or other risk events, it's suggested to use MFA. For example, for banking and finance applications, online shops - first checkout process.
-- Check if MFA is required, [check the methods available to do MFA](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods#:~:text=How%20each%20authentication%20method%20works%20%20%20,%20%20MFA%20%204%20more%20rows%20) such as SMS/Phone, email, and third-party services.
+- If MFA is required, [check the methods available to do MFA](../authentication/concept-authentication-methods.md), such as SMS/phone, email, and third-party services.
- Check if any anti-bot mechanism is in use today with your applications.

-- Assess the risk of attempts to create fraudulent accounts and log-ins. Use [Microsoft Dynamics 365 Fraud Protection assessment](https://docs.microsoft.com/azure/active-directory-b2c/partner-dynamics-365-fraud-protection) to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
+- Assess the risk of attempts to create fraudulent accounts and log-ins. Use [Microsoft Dynamics 365 Fraud Protection assessment](../../active-directory-b2c/partner-dynamics-365-fraud-protection.md) to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
- Check for any special conditional postures that need to be applied as part of sign-in or sign-up for accounts with your application.

>[!NOTE]
->You can use [Conditional Access rules](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) to adjust the difference between user experience and security based on your business goals.
+>You can use [Conditional Access rules](../conditional-access/overview.md) to adjust the difference between user experience and security based on your business goals.
-For more information, see [Identity Protection and Conditional Access in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/conditional-access-identity-protection-overview).
+For more information, see [Identity Protection and Conditional Access in Azure AD B2C](../../active-directory-b2c/conditional-access-identity-protection-overview.md).
### Compliance
To address basic compliance requirements, consider:
Consider the sample checklist to define the user experience (UX) requirements:
-- Identify the required integrations to [extend CIAM capabilities and build seamless end-user experiences](https://docs.microsoft.com/azure/active-directory-b2c/partner-gallery).
+- Identify the required integrations to [extend CIAM capabilities and build seamless end-user experiences](../../active-directory-b2c/partner-gallery.md).
- Provide screenshots and user stories to show the end-user experience for the existing application. For example, provide screenshots for sign-in, sign-up, combined sign-up sign-in (SUSI), profile edit, and password reset.
This phase includes the following capabilities:
| Capability | Description |
|:|:-|
-| Monitoring |[Monitor Azure AD B2C with Azure Monitor](https://docs.microsoft.com/azure/active-directory-b2c/azure-monitor). Watch [this video](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)|
-| Auditing and Logging | [Access and review audit logs](https://docs.microsoft.com/azure/active-directory-b2c/view-audit-logs)
+| Monitoring |[Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md). Watch [this video](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)|
+| Auditing and Logging | [Access and review audit logs](../../active-directory-b2c/view-audit-logs.md)
## More information
To accelerate Azure AD B2C deployments and monitor the service at scale, see the
## Next steps
-- [Azure AD B2C best practices](https://docs.microsoft.com/azure/active-directory-b2c/best-practices)
+- [Azure AD B2C best practices](../../active-directory-b2c/best-practices.md)
-- [Azure AD B2C service limits](https://docs.microsoft.com/azure/active-directory-b2c/service-limits)
+- [Azure AD B2C service limits](../../active-directory-b2c/service-limits.md)
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-applications.md
After setting up Azure Key Vault, be sure to [enable logging](../../key-vault/ge
| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: <li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
-The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](/azure/security/fundamentals/steps-secure-identity).
+The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
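As a sketch of how to review these grants at scale - assuming your Azure AD audit logs are routed to a Log Analytics workspace via diagnostic settings - a query along these lines surfaces recent consent events for investigation (the `AuditLogs` table and column names follow the standard Azure Monitor schema; adjust the time window to your needs):

```kusto
// New consent grants in the last 24 hours; review apps with suspicious
// names or high-risk permissions, per the indicators in the table above.
AuditLogs
| where TimeGenerated > ago(1d)
| where OperationName == "Consent to application"
| project TimeGenerated, InitiatedBy, TargetResources, AdditionalDetails
```

Filter further on `AdditionalDetails` to isolate end-user consent (`ConsentContext.IsAdminConsent = false`), as the detection row above describes.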
For more information on consent operations, see the following resources:
See these security operations guide articles:
[Security operations for devices](security-operations-devices.md)
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-infrastructure.md
Monitoring and alerting the components of your authentication infrastructure is
We recommend all the components be considered Control Plane / Tier 0 assets, as well as the accounts used to manage them. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant.
-A first step in being able to detect unexpected events and potential attacks is to establish a baseline. For all on-premises components listed in this article, see [Privileged access deployment](https://docs.microsoft.com/security/compass/privileged-access-deployment), which is part of the Securing privileged assets (SPA) guide.
+A first step in being able to detect unexpected events and potential attacks is to establish a baseline. For all on-premises components listed in this article, see [Privileged access deployment](/security/compass/privileged-access-deployment), which is part of the Securing privileged assets (SPA) guide.
## Where to look
See these additional security operations guide articles:
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
You are entirely responsible for all layers of security for your on-premises IT
* For more information on securing access for privileged users, visit [Securing Privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md).
-* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/).
+* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
## Where to look
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
You're entirely responsible for all layers of security for your on-premises IT e
* For more information on securing access for privileged users, see [Securing Privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md).
-* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/).
+* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
Privileged Identity Management (PIM) is an Azure AD service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. You can use PIM to help mitigate the following risks:
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/users-default-permissions.md
Previously updated : 08/17/2020 Last updated : 08/04/2021
Policies | <ul><li>Read all properties of policies<li>Manage all properties of o
## Restrict member users default permissions
-Default permissions for member users can be restricted in the following ways:
+You can restrict users' default permissions. Some organizations need to restrict users' access to the portal; use this feature if you don't want all users in the directory to have access to the Azure AD admin portal or directory.
+
+For example, a university will have many users within their directory, and the admin may not want all of the students in the directory to be able to see the full directory and violate other students' privacy. The use of this feature is optional, and at the discretion of the Azure AD administrator. Default permissions for member users can be restricted in the following ways:
Permission | Setting explanation - |
Ability to create Microsoft 365 groups | Setting this option to No prevents user
Restrict access to Azure AD administration portal | <p>Setting this option to No lets non-administrators use the Azure AD administration portal to read and manage Azure AD resources. Yes restricts all non-administrators from accessing any Azure AD data in the administration portal.</p><p>**Note**: this setting does not restrict access to Azure AD data using PowerShell or other clients such as Visual Studio. When set to Yes, to grant a specific non-admin user the ability to use the Azure AD administration portal, assign any administrative role such as the Directory Readers role.</p><p>**Note**: this setting will block non-admin users who are owners of groups or applications from using the Azure portal to manage their owned resources.</p><p>This role allows reading basic directory information, which member users have by default (guests and service principals do not).</p>
Ability to read other users | This setting is available in PowerShell only. Setting this flag to $false prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online. This setting is meant for special circumstances, and setting this flag to $false is not recommended.
+>[!NOTE]
+>It's assumed the average user would only use the portal to access Azure AD, and not use PowerShell or CLI to access their resources. Currently, restricting access to users' default permissions only occurs when the user tries to access the directory within the Azure portal.
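As a minimal sketch of the PowerShell-only setting described in the table above - assuming the MSOnline module is installed and you have already connected with `Connect-MsolService` - the flag can be set as follows (the article itself recommends against this outside special circumstances):

```powershell
# Prevent all non-admins from reading user information from the directory.
# Not recommended except for special circumstances.
Set-MsolCompanySettings -UsersPermissionToReadOtherUsersEnabled $false
```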
+ ## Restrict guest users default permissions
Default permissions for guest users can be restricted in the following ways:
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
You can now automate creating, updating, and deleting user accounts for these ne
- [Zip](../saas-apps/zip-provisioning-tutorial.md)
- [TimeClock 365](../saas-apps/timeclock-365-provisioning-tutorial.md)
-For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
+For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
The [device code flow](../develop/v2-oauth2-device-code.md) has been updated to
**Service category:** User Management
**Product capability:** User Management
-You can now view your users' last sign-in date and time stamp on the Azure portal. The information is available for each user on the user profile page. This information helps you identify inactive users and effectively manage risky events. [Learn more](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=/azure/active-directory/enterprise-users/context/ugr-context).
+You can now view your users' last sign-in date and time stamp on the Azure portal. The information is available for each user on the user profile page. This information helps you identify inactive users and effectively manage risky events. [Learn more](./active-directory-users-profile-azure-portal.md?context=%2fazure%2factive-directory%2fenterprise-users%2fcontext%2fugr-context).
The refreshed Authentication Methods Activity dashboard gives admins an overview
Configurability of refresh and session token lifetimes in CTL is retired. Azure Active Directory no longer honors refresh and session token configuration in existing policies. [Learn more](../develop/active-directory-configurable-token-lifetimes.md#token-lifetime-policies-for-refresh-tokens-and-session-tokens).
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-overview.md
Azure AD Premium P2 licenses are **not** required for the following tasks:
- No licenses are required for users with the Global Administrator role who set up the initial catalogs, access packages, and policies, and delegate administrative tasks to other users.
- No licenses are required for users who have been delegated administrative tasks, such as catalog creator, catalog owner, and access package manager.
-- No licenses are required for guests who **can** request access packages, but do **not** request an access package.
+- No licenses are required for guests who have the privilege to request access packages but **do not choose** to request them.
For more information about licenses, see [Assign or remove licenses using the Azure Active Directory portal](../fundamentals/license-users-groups.md).
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/four-steps.md
The simplest and recommended method for enabling cloud authentication for on-pre
Whether you choose PHS or PTA, don't forget to enable [Seamless Single Sign-on](./how-to-connect-sso.md) to allow users to access cloud apps without constantly entering their username and password in the app when using Windows 7 and 8 devices on your corporate network. Without single sign-on, users must remember application-specific passwords and sign into each application. Likewise, IT staff needs to create and update user accounts for each application such as Microsoft 365, Box, and Salesforce. Users need to remember their passwords, plus spend the time to sign into each application. Providing a standardized single sign-on mechanism to the entire enterprise is crucial for best user experience, reduction of risk, ability to report, and governance.
-For organizations already using AD FS or another on-premises authentication provider, moving to Azure AD as your identity provider can reduce complexity and improve availability. Unless you have specific use cases for using federation, we recommend migrating from federated authentication to either PHS and Seamless SSO or PTA and Seamless SSO to enjoy the benefits of a reduced on-premises footprint and the flexibility the cloud offers with improved user experiences. For more information, see [Migrate from federation to password hash synchronization for Azure Active Directory](./plan-migrate-adfs-password-hash-sync.md).
+For organizations already using AD FS or another on-premises authentication provider, moving to Azure AD as your identity provider can reduce complexity and improve availability. Unless you have specific use cases for using federation, we recommend migrating from federated authentication to either PHS and Seamless SSO or PTA and Seamless SSO to enjoy the benefits of a reduced on-premises footprint and the flexibility the cloud offers with improved user experiences. For more information, see [Migrate from federation to password hash synchronization for Azure Active Directory](./migrate-from-federation-to-cloud-authentication.md).
### Enable automatic deprovisioning of accounts
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
Check out the following related articles:
* [Using Azure AD Connect Health for Sync](how-to-connect-health-sync.md)
* [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md)
* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
+* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for staged rollout:
- If you have a Windows Hello for Business hybrid certificate trust with certs that are issued via your federation server acting as Registration Authority or smartcard users, the scenario isn't supported on a staged rollout.
>[!NOTE]
- >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md#step-2-change-the-sign-in-method-to-pass-through-authentication-and-enable-seamless-sso).
+ >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](./migrate-from-federation-to-cloud-authentication.md) and [Migrate from federation to pass-through authentication](./migrate-from-federation-to-cloud-authentication.md).
## Get started with staged rollout
A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD
## Next steps
- [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout )
-- [Change the sign-in method to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
-- [Change sign-in method to pass-through authentication](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
-- [Staged rollout interactive guide](https://mslearn.cloudguides.com/en-us/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD)
-
+- [Change the sign-in method to password hash synchronization](./migrate-from-federation-to-cloud-authentication.md)
+- [Change sign-in method to pass-through authentication](./migrate-from-federation-to-cloud-authentication.md)
+- [Staged rollout interactive guide](https://mslearn.cloudguides.com/en-us/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD)
active-directory How To Connect Sync Feature Directory Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md
ms.devlang: na
na Previously updated : 11/12/2019 Last updated : 08/09/2021
You can use directory extensions to extend the schema in Azure Active Directory
At present, no Microsoft 365 workload consumes these attributes.
+>[!IMPORTANT]
+>If you have exported a configuration that contains a custom rule used to synchronize directory extension attributes and you attempt to import this rule into a new or existing installation of Azure AD Connect, the rule will be created during import, but the directory extension attributes will not be mapped. To fix this, re-select the directory extension attributes and re-associate them with the rule, or recreate the rule entirely.
+ ## Customize which attributes to synchronize with Azure AD
You configure which additional attributes you want to synchronize in the custom settings path in the installation wizard.
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Your support team should understand how to troubleshoot any authentication issue
Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD.
-If you plan to keep using AD FS with on-premises & SaaS Applications using SAML / WS-FED or OAuth protocol, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../manage-apps/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
+If you plan to keep using AD FS with on-premises & SaaS Applications using SAML / WS-FED or OAuth protocol, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
You can move SaaS applications that are currently federated with AD FS to Azure AD. Reconfigure to authenticate with Azure AD either via a built-in connector from the [Azure App gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), or by [registering the application in Azure AD](../develop/quickstart-register-app.md).
If you don't use AD FS for other purposes (that is, for other relying party tr
## Next steps
- [Learn about migrating applications](../manage-apps/migration-resources.md)
-- [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
+- [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-adsynctools.md
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Connect-ADSyncToolsSqlDatabase
### SYNOPSIS
Connect to a SQL database for testing purposes
Accept pipeline input: False
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## ConvertFrom-ADSyncToolsAadDistinguishedName
### SYNOPSIS
Convert Azure AD Connector DistinguishedName to ImmutableId
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## ConvertFrom-ADSyncToolsImmutableID
### SYNOPSIS
Convert Base64 ImmutableId (SourceAnchor) to GUID value
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## ConvertTo-ADSyncToolsAadDistinguishedName
### SYNOPSIS
Convert ImmutableId to Azure AD Connector DistinguishedName
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## ConvertTo-ADSyncToolsCloudAnchor
### SYNOPSIS
Convert Base64 Anchor to CloudAnchor
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## ConvertTo-ADSyncToolsImmutableID
### SYNOPSIS
Convert GUID (ObjectGUID / ms-Ds-Consistency-Guid) to a Base64 string
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
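The conversion these two ImmutableID cmdlets perform can be sketched outside PowerShell as well. Assuming the ImmutableId is simply the Base64 encoding of the GUID's 16 bytes in the little-endian field order Windows uses for GUIDs, a minimal Python round trip looks like:

```python
import base64
import uuid

def immutable_id_to_guid(immutable_id: str) -> uuid.UUID:
    """Decode a Base64 ImmutableId (SourceAnchor) back into a GUID."""
    return uuid.UUID(bytes_le=base64.b64decode(immutable_id))

def guid_to_immutable_id(guid: uuid.UUID) -> str:
    """Encode a GUID (e.g. ObjectGUID / ms-Ds-Consistency-Guid) as a Base64 ImmutableId."""
    return base64.b64encode(guid.bytes_le).decode("ascii")
```

Because a GUID is 16 bytes, the resulting ImmutableId is always 24 Base64 characters ending in `==`.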
## Export-ADSyncToolsAadDisconnectors
### SYNOPSIS
Export Azure AD Disconnector objects
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### INPUTS
Use ObjectType argument in case you want to export Disconnectors for a given object type only
### OUTPUTS
Accept pipeline input: False
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### RELATED LINKS
More Information: [Understand Azure AD Connect 1.4.xx.x and device disappearance](/troubleshoot/azure/active-directory/reference-connect-device-disappearance)
Accept pipeline input: False
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Export-ADSyncToolsRunHistory
### SYNOPSIS
Export Azure AD Connect Run History
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Export-ADSyncToolsSourceAnchorReport ### SYNOPSIS Export ms-ds-Consistency-Guid Report
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Get-ADSyncToolsAadObject ### SYNOPSIS Get synced objects for a given SyncObjectType
Accept wildcard characters: False
#### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### OUTPUTS
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Get-ADSyncToolsRunHistory ### SYNOPSIS Get Azure AD Connect Run History
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Get-ADSyncToolsRunHistoryLegacyWmi ### SYNOPSIS Get Azure AD Connect Run History for older versions of Azure AD Connect (WMI)
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Get-ADSyncToolsSqlBrowserInstances ### SYNOPSIS Get SQL Server Instances from SQL Browser service
Accept wildcard characters: False
``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### INPUTS The user's PowerShell Credential object
Get-ADSyncToolsTls12
``` ### PARAMETERS #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### RELATED LINKS More Information: [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md)
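A minimal sketch of how this cmdlet is typically used (the ADSyncTools module ships with Azure AD Connect, and parameter details can differ by version, so confirm with `Get-Help` on your own server):

```powershell
# Inspect the .NET Framework TLS 1.2 registry settings on the
# Azure AD Connect server before enforcing TLS 1.2.
Import-Module ADSyncTools
Get-ADSyncToolsTls12
# To change the settings, review the companion cmdlet's options first:
Get-Help Set-ADSyncToolsTls12 -Full
```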
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Import-ADSyncToolsRunHistory ### SYNOPSIS Import Azure AD Connect Run History
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Import-ADSyncToolsSourceAnchor ### SYNOPSIS Import ImmutableID from Azure AD
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Invoke-ADSyncToolsSqlQuery ### SYNOPSIS Invoke a SQL query against a database for testing purposes
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Remove-ADSyncToolsAadObject ### SYNOPSIS
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### INPUTS InputCsvFilename must point to a CSV file with at least 2 columns: SourceAnchor, SyncObjectType ### OUTPUTS
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Repair-ADSyncToolsAutoUpgradeState ### SYNOPSIS Repair Azure AD Connect AutoUpgrade State
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Search-ADSyncToolsADobject ### SYNOPSIS Search an Active Directory object in Active Directory Forest by its UserPrincipalName, sAMAccountName or DistinguishedName
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Set-ADSyncToolsMsDsConsistencyGuid ### SYNOPSIS Set an Active Directory object ms-ds-ConsistencyGuid
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Set-ADSyncToolsTls12 ### SYNOPSIS Sets Client\Server TLS 1.2 settings for .NET Framework
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
### RELATED LINKS More Information: [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md)
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Trace-ADSyncToolsLdapQuery ### SYNOPSIS Trace LDAP queries
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
## Update-ADSyncToolsSourceAnchor ### SYNOPSIS Updates users with the new ConsistencyGuid (ImmutableId)
Accept pipeline input: False
Accept wildcard characters: False ``` #### CommonParameters
-This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
-
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it a
- This release requires PowerShell version 5.0 or newer to be installed on the Windows Server. Note that this version is part of Windows Server 2016 and newer. - We increased the Group sync membership limits to 250k with the new V2 endpoint. - We have updated the Generic LDAP connector and the Generic SQL Connector to the latest versions. Read more about these connectors here:
- - [Generic LDAP Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
- - [Generic SQL Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
+ - [Generic LDAP Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
+ - [Generic SQL Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
- In the M365 Admin Center, we now report the AADConnect client version whenever there is export activity to Azure AD. This ensures that the M365 Admin Center always has the most up-to-date AADConnect client version, and that it can detect when you're using an outdated version.
### Bug fixes
We fixed a bug in the sync errors compression utility that was not handling surr
>We are investigating an incident where some customers are experiencing an issue with existing Hybrid Azure AD joined devices after upgrading to this version of Azure AD Connect. We advise customers who have deployed Hybrid Azure AD join to postpone upgrading to this version until the root cause of these issues is fully understood and mitigated. More information will be provided as soon as possible. >[!IMPORTANT]
->With this version of Azure AD Connect, some customers may see some or all of their Windows devices disappear from Azure AD. This is not a cause for concern, as these device identities are not used by Azure AD during Conditional Access authorization. For more information, see [Understanding Azure AD Connect 1.4.xx.x device disappearance](reference-connect-device-disappearance.md)
+>With this version of Azure AD Connect, some customers may see some or all of their Windows devices disappear from Azure AD. This is not a cause for concern, as these device identities are not used by Azure AD during Conditional Access authorization. For more information, see [Understanding Azure AD Connect 1.4.xx.x device disappearance](/troubleshoot/azure/active-directory/reference-connect-device-disappearance)
### Release status
We fixed a bug in the sync errors compression utility that was not handling surr
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
SQL Server 2019 requires Windows Server 2016 or newer as a server operating syst
You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
-This [article](https://docs.microsoft.com/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
+This [article](/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
### PowerShell 5.0 This release of Azure AD Connect contains several cmdlets that require PowerShell 5.0, so this requirement is a new prerequisite for Azure AD Connect.
-More details about PowerShell prerequisites can be found [here](https://docs.microsoft.com/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements?view=powershell-7.1#windows-powershell-50).
+More details about PowerShell prerequisites can be found [here](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements?view=powershell-7.1#windows-powershell-50).
>[!NOTE] >PowerShell 5 is already part of Windows Server 2016, so you probably do not have to take action as long as you are on a recent Windows Server version.
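The prerequisite in the note above is easy to verify before running the installer; for example:

```powershell
# Confirm the server meets the PowerShell 5.0 prerequisite for this release.
$PSVersionTable.PSVersion
if ($PSVersionTable.PSVersion.Major -lt 5) {
    Write-Warning 'PowerShell 5.0 or newer is required before installing Azure AD Connect.'
}
```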
This is a known issue. To resolve this, restart your PowerShell session after i
- [Express settings](how-to-connect-install-express.md) - [Customized settings](how-to-connect-install-custom.md)
-This article describes the upgrade from older Windows Server versions to Windows Server 2019.
+This article describes the upgrade from older Windows Server versions to Windows Server 2019.
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
Last updated 05/03/2021
-+
active-directory Concept Identity Protection Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-policies.md
Last updated 05/20/2020
-+
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Last updated 07/16/2021
-+
active-directory Concept Identity Protection Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md
Last updated 07/02/2020
-+
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
Last updated 10/18/2019
-+
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-export-risk-data.md
Last updated 07/30/2021
-+
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
Last updated 06/05/2020
-+
active-directory Howto Identity Protection Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md
Last updated 11/09/2020
-+
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Last updated 05/27/2021
-+
active-directory Howto Identity Protection Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md
Last updated 01/25/2021
-+
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Last updated 06/05/2020
-+
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Last updated 01/25/2021
-+
active-directory Howto Identity Protection Risk Feedback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-risk-feedback.md
Last updated 06/05/2020
-+
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
Last updated 06/05/2020
-+
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/overview-identity-protection.md
Last updated 06/15/2021
-+
active-directory Reference Identity Protection Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/reference-identity-protection-glossary.md
Last updated 10/18/2019
-+
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
Azure Storage does not natively support Azure AD authentication. However, you c
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). >[!NOTE]
-> For more information on the various roles that you can use to grant permissions to storage review [Authorize access to blobs and queues using Azure Active Directory.](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
+> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md#assign-azure-roles-for-access-rights).
## Get an access token using the VM's identity and use it to call Azure Resource Manager
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using an access key. To learn more about Azure Storage access keys see: > [!div class="nextstepaction"]
->[Manage your storage access keys](../../storage/common/storage-account-create.md)
+>[Manage your storage access keys](../../storage/common/storage-account-create.md)
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
Azure Storage natively supports Azure AD authentication, so you can use your VM'
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). >[!NOTE]
-> For more information on the various roles that you can use to grant permissions to storage review [Authorize access to blobs and queues using Azure Active Directory.](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
+> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md#assign-azure-roles-for-access-rights).
## Get an access token using the VM's identity and use it to call Azure Resource Manager
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential. To learn more about Azure Storage SAS see: > [!div class="nextstepaction"]
->[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
+>[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
active-directory Tutorial Linux Vm Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
You can use the VM's managed identity to retrieve the data in the Azure storage
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). >[!NOTE]
-> For more information on the various roles that you can use to grant permissions to storage review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
+> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md#assign-azure-roles-for-access-rights).
## Get an access token and use it to call Azure Storage Azure Storage natively supports Azure AD authentication, so it can directly accept access tokens obtained using a Managed Identity. This is part of Azure Storage's integration with Azure AD, and is different from supplying credentials on the connection string.
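As a sketch of the token flow described above (this only works from inside an Azure VM that has a managed identity assigned; the instance metadata service endpoint and resource URI shown are the standard public values):

```powershell
# Request an access token for Azure Storage from the Azure Instance Metadata
# Service (IMDS). Runs only on an Azure VM with a managed identity.
$uri = 'http://169.254.169.254/metadata/identity/oauth2/token' +
       '?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F'
$token = (Invoke-RestMethod -Headers @{ Metadata = 'true' } -Uri $uri).access_token
# Use the token as a bearer credential on subsequent Storage REST calls, e.g.:
# Invoke-RestMethod -Uri $blobUrl -Headers @{ Authorization = "Bearer $token"; 'x-ms-version' = '2019-12-12' }
```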
To complete the following steps, you need to work from the VM created earlier an
In this tutorial, you learned how to enable a Linux VM system-assigned managed identity to access Azure Storage. To learn more about Azure Storage, see: > [!div class="nextstepaction"]
-> [Azure Storage](../../storage/common/storage-introduction.md)
+> [Azure Storage](../../storage/common/storage-introduction.md)
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
Follow these tasks to prepare PIM to manage Azure resource roles.
Minimize Owner and User Access Administrator assignments attached to each subscription or resource and remove unnecessary assignments.
-As a Global Administrator you can [elevate access to manage all Azure subscriptions](/azure/role-based-access-control/elevate-access-global-admin). You can then find each subscription owner and work with them to remove unnecessary assignments within their subscriptions.
+As a Global Administrator you can [elevate access to manage all Azure subscriptions](../../role-based-access-control/elevate-access-global-admin.md). You can then find each subscription owner and work with them to remove unnecessary assignments within their subscriptions.
Use [access reviews for Azure resources](pim-resource-roles-start-access-review.md) to audit and remove unnecessary role assignments. ### Determine roles to be managed by PIM
-When deciding which role assignments should be managed using PIM for Azure resource, you must first identify the [management groups](/azure/governance/management-groups/overview), subscriptions, resource groups, and resources that are most vital for your organization. Consider using management groups to organize all their resources within their organization.
+When deciding which role assignments should be managed using PIM for Azure resources, you must first identify the [management groups](../../governance/management-groups/overview.md), subscriptions, resource groups, and resources that are most vital for your organization. Consider using management groups to organize all the resources within your organization.
**We recommend** you manage all Subscription Owner and User Access Administrator roles using PIM.
Configure privileged access group members and owners to require approval for act
-
-
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/security-planning.md
Determine if you need to [transfer ownership of an Azure subscription to another
8. Make sure you save backups of relevant logs for potential forensic and legal investigation.
-For more information about how Microsoft Office 365 handles security incidents, see [Security Incident Management in Microsoft Office 365](https://aka.ms/Office365SIM).
+For more information about how Microsoft Office 365 handles security incidents, see [Security Incident Management in Microsoft Office 365](/compliance/assurance/assurance-security-incident-management).
## FAQ: Answers for securing privileged access
For more information about how Microsoft Office 365 handles security incidents,
* [Microsoft Intune Security](https://www.microsoft.com/trustcenter/security/intune-security) – Intune provides mobile device management, mobile application management, and PC management capabilities from the cloud.
-* [Microsoft Dynamics 365 security](https://www.microsoft.com/trustcenter/security/dynamics365-security) – Dynamics 365 is the Microsoft cloud-based solution that unifies customer relationship management (CRM) and enterprise resource planning (ERP) capabilities.
+* [Microsoft Dynamics 365 security](https://www.microsoft.com/trustcenter/security/dynamics365-security) – Dynamics 365 is the Microsoft cloud-based solution that unifies customer relationship management (CRM) and enterprise resource planning (ERP) capabilities.
active-directory Clebex Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clebex-provisioning-tutorial.md
# Tutorial: Configure Clebex for automatic user provisioning
-This tutorial describes the steps you need to perform in both Clebex and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Clebex](https://www.clebex.com/en/index.html) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Clebex and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Clebex](https://www.clebex.com/en/index.html) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
This tutorial describes the steps you need to perform in both Clebex and Azure A
> * Create users in Clebex > * Remove users in Clebex when they do not require access anymore > * Keep user attributes synchronized between Azure AD and Clebex
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/clebex-tutorial) to Clebex (recommended)
+> * [Single sign-on](./clebex-tutorial.md) to Clebex (recommended)
## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A user account in Clebex with create / edit permissions. ## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Clebex](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Clebex](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Clebex to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Clebex from the Azure AD application gallery
-Add Clebex from the Azure AD application gallery to start managing provisioning to Clebex. If you have previously set up Clebex for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Clebex from the Azure AD application gallery to start managing provisioning to Clebex. If you have previously set up Clebex for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Clebex, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Clebex, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
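The manifest edit mentioned above adds entries to the application's `appRoles` collection. As a rough illustration only (the `displayName`, `value`, and `description` below are placeholders, and `id` must be a new unique GUID that you generate yourself), a non-default role entry has this general shape:

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "User" ],
      "description": "Users in this role are in scope for provisioning.",
      "displayName": "Provisioned User",
      "id": "11111111-2222-3333-4444-555555555555",
      "isEnabled": true,
      "value": "ProvisionedUser"
    }
  ]
}
```

Assigning users to a role such as this one, rather than **Default Access**, keeps them in scope for provisioning.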
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Clebex
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Clebex**.
-9. Review the user attributes that are synchronized from Azure AD to Clebex in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Clebex for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Clebex API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Clebex in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Clebex for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Clebex API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
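The matching step described above depends on the target app's SCIM endpoint supporting filtering on the chosen attribute. As a hedged sketch (the base URL is a placeholder, not Clebex's actual API endpoint), the provisioning service's match check corresponds to a SCIM 2.0 filter query like the one built here:

```python
import urllib.parse

def scim_match_url(base_url: str, attribute: str, value: str) -> str:
    """Build a SCIM 2.0 /Users query that filters on a matching attribute.

    SCIM filter syntax (RFC 7644): <attribute> eq "<value>".
    """
    filter_expr = f'{attribute} eq "{value}"'
    return f"{base_url}/Users?filter={urllib.parse.quote(filter_expr)}"

# Hypothetical tenant URL; the provisioning service issues a GET like this
# to check whether a user already exists before creating or updating one.
print(scim_match_url("https://example.com/scim/v2", "userName", "alice@contoso.com"))
```

If the app's API cannot filter on the attribute you pick as the matching property, update operations cannot locate existing accounts, which is why the step above tells you to verify filtering support before changing the default.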
|Attribute|Type|Supported for filtering| ||||
This section guides you through the steps to configure the Azure AD provisioning
|name.familyName|String| |name.formatted|String|
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for Clebex, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
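The provisioning logs referenced in step 1 above can also be retrieved programmatically through Microsoft Graph (`GET https://graph.microsoft.com/v1.0/auditLogs/provisioning`). The sketch below tallies fetched entries by outcome; note that the entry shape used here is a simplified assumption for illustration, not the full Graph schema:

```python
from collections import Counter

# Simplified stand-ins for entries returned by
# GET https://graph.microsoft.com/v1.0/auditLogs/provisioning
# (real entries carry many more fields than shown here).
entries = [
    {"action": "Create", "statusInfo": {"status": "success"}},
    {"action": "Update", "statusInfo": {"status": "success"}},
    {"action": "Create", "statusInfo": {"status": "failure"}},
]

def summarize(entries):
    """Count provisioning events by outcome status."""
    return Counter(e["statusInfo"]["status"] for e in entries)

print(summarize(entries))  # Counter({'success': 2, 'failure': 1})
```

A quick summary like this can flag a rising failure count before the configuration drifts into the quarantine state described in step 3.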
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Exium Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/exium-provisioning-tutorial.md
# Tutorial: Configure Exium for automatic user provisioning
-This tutorial describes the steps you need to perform in both Exium and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Exium](https://exium.net/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Exium and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Exium](https://exium.net/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
This tutorial describes the steps you need to perform in both Exium and Azure Ac
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A user account in Exium with administrator permissions. * A workspace in Exium to generate an Azure AD Secret Token. A new workspace can be created [here](https://service.exium.net/sign-up).
-* [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/exium-tutorial) should be enabled.
+* [Single sign-on](./exium-tutorial.md) should be enabled.
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Exium](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Exium](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Azure AD to support provisioning with Exium 1. Log in to [Exium workspace](https://service.exium.net/sign-in).
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Exium from the Azure AD application gallery
-Add Exium from the Azure AD application gallery to start managing provisioning to Exium. If you have previously set up Exium for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Exium from the Azure AD application gallery to start managing provisioning to Exium. If you have previously set up Exium for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Exium, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add extra roles.
+* When assigning users and groups to Exium, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add extra roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Exium
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Exium**.
-9. Review the user attributes that are synchronized from Azure AD to Exium in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Exium for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Exium API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Exium in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Exium for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Exium API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering| ||||
This section guides you through the steps to configure the Azure AD provisioning
|displayName|String|&check;| |members|Reference|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Exium, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Talentech Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/talentech-provisioning-tutorial.md
# Tutorial: Configure Talentech for automatic user provisioning
-This tutorial describes the steps you need to perform in both Talentech and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Talentech](https://www.talentech.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Talentech and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Talentech](https://www.talentech.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
This tutorial describes the steps you need to perform in both Talentech and Azur
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A user account in Talentech. ## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Talentech](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Talentech](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Talentech to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Talentech from the Azure AD application gallery
-Add Talentech from the Azure AD application gallery to start managing provisioning to Talentech. If you have previously set up Talentech for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Talentech from the Azure AD application gallery to start managing provisioning to Talentech. If you have previously set up Talentech for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Talentech, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add extra roles.
+* When assigning users and groups to Talentech, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add extra roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Talentech
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Talentech**.
-9. Review the user attributes that are synchronized from Azure AD to Talentech in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Talentech for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Talentech API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Talentech in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Talentech for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Talentech API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|externalId|String|
|members|Reference|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Talentech, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Thrive Lxp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md
# Tutorial: Configure Thrive LXP for automatic user provisioning
-This tutorial describes the steps you need to perform in both Thrive LXP and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Thrive LXP](https://thrivelearning.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Thrive LXP and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Thrive LXP](https://thrivelearning.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both Thrive LXP and Azu
> * Remove users in Thrive LXP when they do not require access anymore
> * Keep user attributes synchronized between Azure AD and Thrive LXP
> * Provision groups and group memberships in Thrive LXP
-> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/thrive-lxp-tutorial) to Thrive LXP (recommended)
+> * [Single sign-on](./thrive-lxp-tutorial.md) to Thrive LXP (recommended)
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A **SCIM token** supplied by your contact at Thrive LXP.

## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and Thrive LXP](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Thrive LXP](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Thrive LXP to support provisioning with Azure AD
Reach out to your Thrive LXP contact to generate your **Tenant url** and **Secre
## Step 3. Add Thrive LXP from the Azure AD application gallery
-Add Thrive LXP from the Azure AD application gallery to start managing provisioning to Thrive LXP. If you have previously setup Thrive LXP for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Thrive LXP from the Azure AD application gallery to start managing provisioning to Thrive LXP. If you have previously setup Thrive LXP for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Thrive LXP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Thrive LXP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
## Step 5. Configure automatic user provisioning to Thrive LXP
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Thrive LXP**.
-9. Review the user attributes that are synchronized from Azure AD to Thrive LXP in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Thrive LXP for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Thrive LXP API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to Thrive LXP in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Thrive LXP for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Thrive LXP API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported For Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|externalId|String|
|members|Reference|
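For context, the attributes in the table above travel between Azure AD and the app as SCIM resources. A rough sketch of a group resource carrying these attributes (the values are illustrative placeholders; the resource shape follows the SCIM core schema, RFC 7643):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
  "externalId": "f1a2b3c4-0000-0000-0000-000000000000",
  "displayName": "Sales Team",
  "members": [
    { "value": "2819c223-7f76-453a-919d-413861904646" }
  ]
}
```

The `externalId` value is what the matching attribute compares against during update operations, which is why the target API must support filtering on it.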
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for Thrive LXP, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-verification-solution.md
This content covers the technical aspects of planning for a verifiable credentia
Supporting technologies that are not specific to verification solutions are out of scope. For example, websites are used in a verifiable credential verification solution but planning a website deployment is not covered in detail.
-As you plan your verification solution you must consider what business capability is being added or modified and what IT capabilities can be leveraged or must be added to create the solution. You must also consider what training is needed for the people involved in the business process as well as the people that support the end users and staff of the solution. These topics are not covered in this content. We recommend reviewing the [Microsoft Azure Well-Architected Framework](https://docs.microsoft.com/azure/architecture/framework/) for information covering these topics.
+As you plan your verification solution you must consider what business capability is being added or modified and what IT capabilities can be leveraged or must be added to create the solution. You must also consider what training is needed for the people involved in the business process as well as the people that support the end users and staff of the solution. These topics are not covered in this content. We recommend reviewing the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/) for information covering these topics.
## Components of the solution
Implement Verifiable Credentials
* [Get started with Verifiable Credentials](get-started-verifiable-credentials.md)
-[FAQs](verifiable-credentials-faq.md)
-
+[FAQs](verifiable-credentials-faq.md)
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
In this article we will summarize migration details for:
Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following tasks:

* [Containerize ASP.NET applications and migrate to AKS](../migrate/tutorial-app-containerization-aspnet-kubernetes.md)
-* [Containerize Java web applications and migrate to AKS](/azure/migrate/tutorial-app-containerization-java-kubernetes)
+* [Containerize Java web applications and migrate to AKS](../migrate/tutorial-app-containerization-java-kubernetes.md)
## AKS with Standard Load Balancer and Virtual Machine Scale Sets
In this article, we summarized migration details for:
> * Deployment of your cluster configuration
-[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
+[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/coredns-custom.md
data:
rewrite name substring <domain to be rewritten>.com default.svc.cluster.local
kubernetes cluster.local in-addr.arpa ip6.arpa {
  pods insecure
- upstream
  fallthrough in-addr.arpa ip6.arpa
}
forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns
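For context, the rewrite fragment shown above typically lives inside a `coredns-custom` ConfigMap in the `kube-system` namespace. A minimal sketch (the `test.server` key name and the placeholder domain are illustrative; the data key must end with the `.server` extension):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom # AKS merges this ConfigMap into the CoreDNS configuration
  namespace: kube-system
data:
  test.server: | # any name ending in .server
    <domain to be rewritten>.com:53 {
        rewrite name substring <domain to be rewritten>.com default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
    }
```

After applying a ConfigMap like this, the `kubectl delete pod` command above restarts the CoreDNS pods so they pick up the custom configuration.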
As all built-in plugins are supported this means that the CoreDNS [Hosts][coredns hosts] plugin is available to customize as well:
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: coredns-custom # this is the name of the configmap you can overwrite with your changes
- namespace: kube-system
-data:
- test.override: | # you may select any name here, but it must end with the .override file extension
- hosts example.hosts example.org { # example.hosts must be a file
- 10.0.0.1 example.org
- fallthrough
- }
-```
-
-To specify one or more lines in host table using INLINE:
```yaml
apiVersion: v1
kind: ConfigMap
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/custom-node-configuration.md
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr
[aks-scale-apps]: tutorial-kubernetes-scale.md
[aks-support-policies]: support-policies.md
[aks-upgrade]: upgrade-cluster.md
-[aks-view-master-logs]: ./view-control-plane-logs.md#enable-resource-logs
+[aks-view-master-logs]: ../azure-monitor/containers/container-insights-log-query.md#enable-resource-logs
[autoscaler-profile-properties]: #using-the-autoscaler-profile
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-show]: /cli/azure/aks#az_aks_show
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr
[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool
[autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node
[autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
-[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
+[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/monitor-aks-reference.md
The following table lists the platform metrics collected for AKS. Follow each l
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
|-|--|
-| Managed clusters | [Microsoft.ContainerService/managedClusters](/azure/azure-monitor/essentials/metrics-supported#microsoftcontainerservicemanagedclusters)
-| Connected clusters | [microsoft.kubernetes/connectedClusters](/azure/azure-monitor/essentials/metrics-supported#microsoftkubernetesconnectedclusters)
-| Virtual machines| [Microsoft.Compute/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachines) |
-| Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesets)|
-| Virtual machine scale sets virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
+| Managed clusters | [Microsoft.ContainerService/managedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters)
+| Connected clusters | [microsoft.kubernetes/connectedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftkubernetesconnectedclusters)
+| Virtual machines| [Microsoft.Compute/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
+| Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)|
+| Virtual machine scale sets virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
-For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
## Metric dimensions
-The following table lists [dimensions](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics) for AKS metrics.
+The following table lists [dimensions](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics) for AKS metrics.
<!-- listed here /azure/azure-monitor/essentials/metrics-supported#microsoftcontainerservicemanagedclusters-->
The following table lists [dimensions](/azure/azure-monitor/platform/data-platfo
The following table lists the resource log categories you can collect for AKS. These are the logs for AKS control plane components. See [Configure monitoring](monitor-aks.md#configure-monitoring) for information on creating a diagnostic setting to collect these logs and recommendations on which to enable. See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for query examples.
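Once a diagnostic setting routes these categories to a Log Analytics workspace, they can be explored with a Kusto query along these lines (a sketch assuming logs land in the `AzureDiagnostics` table; substitute whichever category you enabled):

```kusto
AzureDiagnostics
| where Category == "kube-audit-admin"
| project TimeGenerated, log_s
| take 10
```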
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
| Category | Description |
|:---|:---|
The following table lists a few example operations related to AKS that may be cr
| Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action | List clusterAdmin credential |
| Microsoft.ContainerService/managedClusters/agentpools/write | Create or Update Agent Pool |
-For a complete list of possible log entries, see [Microsoft.ContainerService Resource Provider options](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerservice).
+For a complete list of possible log entries, see [Microsoft.ContainerService Resource Provider options](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice).
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## See also
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
The following parameters can be leveraged to configure Private DNS Zone.
* The AKS Preview version 0.5.19 or later
* The API version 2021-05-01 or later
+To use the fqdn-subdomain feature, you must enable the `EnablePrivateClusterFQDNSubdomain` feature flag on your subscription.
+
+Register the `EnablePrivateClusterFQDNSubdomain` feature flag by using the [az feature register][az-feature-register] command as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "EnablePrivateClusterFQDNSubdomain"
+```
+
+You can check on the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnablePrivateClusterFQDNSubdomain')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+ ### Create a private AKS cluster with Private DNS Zone ```azurecli-interactive
As mentioned, virtual network peering is one way to access your private cluster.
<!-- LINKS - internal -->
[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-deploy-cluster.md
Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
To verify the connection to your cluster, run the [kubectl get nodes][kubectl-get] command to return a list of the cluster nodes:
+```azurecli-interactive
+kubectl get nodes
+```
+
+The following example output shows the list of cluster nodes.
+ ``` $ kubectl get nodes
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-event-grid.md
In Event Grid, you subscribe to a *topic* to tell it which events you want to tr
Now that the sample app is up and running and you've subscribed to your API Management instance with Event Grid, you're ready to generate events.
-As an example, [create a product](/azure/api-management/api-management-howto-add-products) in your API Management instance. If your event subscription includes the **Microsoft.APIManagement.ProductCreated** event, creating the product triggers an event that is pushed to your web app endpoint.
+As an example, [create a product](./api-management-howto-add-products.md) in your API Management instance. If your event subscription includes the **Microsoft.APIManagement.ProductCreated** event, creating the product triggers an event that is pushed to your web app endpoint.
Navigate to your Event Grid Viewer web app, and you should see the `ProductCreated` event. Select the button next to the event to show the details.
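The event shown in the viewer follows the Event Grid event schema. A rough sketch of what a product-creation event might look like (all identifiers are placeholders):

```json
{
  "id": "92c08e86-0000-0000-0000-000000000000",
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>",
  "subject": "/products/my-product",
  "eventType": "Microsoft.APIManagement.ProductCreated",
  "eventTime": "2021-08-09T22:00:00Z",
  "data": {
    "resourceUri": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>/products/my-product"
  },
  "dataVersion": "1",
  "metadataVersion": "1"
}
```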
API Management event data includes the `resourceUri`, which identifies the API M
## Next steps

* [Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus](../event-grid/compare-messaging-services.md)
-* Learn more about [subscribing to events](../event-grid/subscribe-through-portal.md).
-
+* Learn more about [subscribing to events](../event-grid/subscribe-through-portal.md).
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Previously updated : 07/08/2021
Last updated : 08/09/2021
Configuring API Management for zone redundancy is currently supported in the fol
* East US
* East US 2
* France Central
+* Germany West Central
* Japan East
+* Korea Central
* North Europe
* South Africa North
* South Central US
app-service Configure Authentication Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-google.md
To complete the procedure in this topic, you must have a Google account that has
These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
-1. (Optional) Click **Next: Scopes** and add any scopes needed by the application. These will be requested at login time for browser-based flows.
1. Click **Add**.
+> [!NOTE]
+> For adding scope: You can define what permissions your application has in the provider's registration portal. The app can request scopes at login time that leverage these permissions.
+
You are now ready to use Google for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.

## <a name="related-content"> </a>Next steps
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
App Service ignores any errors that occur when processing a custom startup comma
For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging) (docs.gunicorn.org). -- **Custom Flask main module**: by default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, yf you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:
+- **Custom Flask main module**: by default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:
```bash gunicorn --bind=0.0.0.0 --timeout 600 hello:myapp
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/nat-gateway-integration.md
NAT gateway is a fully managed, highly resilient service, which can be associated with one or more subnets and ensures that all outbound Internet-facing traffic will be routed through the gateway. With App Service, there are two important scenarios that you can use NAT gateway for.
-The NAT gateway gives you a static predictable public IP for outbound Internet-facing traffic. It also significantly increases the available [SNAT ports](/azure/app-service/troubleshoot-intermittent-outbound-connection-errors) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
+The NAT gateway gives you a static predictable public IP for outbound Internet-facing traffic. It also significantly increases the available [SNAT ports](../troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
-For more information and pricing. Go to the [NAT gateway overview](/azure/virtual-network/nat-gateway/nat-overview).
+For more information and pricing, see the [NAT gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-overview.png" alt-text="Diagram shows Internet traffic flowing to a NAT gateway in an Azure Virtual Network.":::
For more information and pricing. Go to the [NAT gateway overview](/azure/virtua
To configure NAT gateway integration with App Service, you need to complete the following steps:
-* Configure regional VNet Integration with your app as described in [Integrate your app with an Azure virtual network](/azure/app-service/web-sites-integrate-with-vnet)
-* Ensure [Route All](/azure/app-service/web-sites-integrate-with-vnet#routes) is enabled for your VNet Integration so the Internet bound traffic will be affected by routes in your VNet.
+* Configure regional VNet Integration with your app as described in [Integrate your app with an Azure virtual network](../web-sites-integrate-with-vnet.md)
+* Ensure [Route All](../web-sites-integrate-with-vnet.md#routes) is enabled for your VNet Integration so the Internet bound traffic will be affected by routes in your VNet.
* Provision a NAT gateway with a public IP and associate it with the VNet Integration subnet. Set up NAT gateway through the portal:
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [my
The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](/azure/virtual-network/nat-gateway/nat-gateway-resource#scaling) of NAT gateway.
+NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scaling) of NAT gateway.
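The port arithmetic in the paragraph above can be checked directly (a quick sketch, not Azure code):

```python
# NAT gateway SNAT capacity: each public IP address contributes 64,000 SNAT
# ports, and a single gateway supports up to 16 IP addresses (individual
# addresses or addresses drawn from prefixes).
ports_per_ip = 64_000
max_ips = 16

total_ports = ports_per_ip * max_ips
print(f"{total_ports:,} SNAT ports")  # 1,024,000 -> the "up to 1M" figure
```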
## Next steps
-For more information on the NAT gateway, see [NAT gateway documentation](/azure/virtual-network/nat-gateway/nat-overview).
+For more information on the NAT gateway, see [NAT gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
-For more information on VNet Integration, see [VNet Integration documentation](/azure/app-service/web-sites-integrate-with-vnet).
+For more information on VNet Integration, see [VNet Integration documentation](../web-sites-integrate-with-vnet.md).
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-inbound-outbound-ips.md
az webapp show --resource-group <group_name> --name <app_name> --query possibleO
``` ## Get a static outbound IP
-You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](/azure/app-service/web-sites-integration-with-vnet) is available on **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](/azure/app-service/networking/nat-gateway-integration).
+You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](./web-sites-integrate-with-vnet.md) is available on **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md).
## Next steps Learn how to restrict inbound traffic by source IP addresses. > [!div class="nextstepaction"]
-> [Static IP restrictions](app-service-ip-restrictions.md)
+> [Static IP restrictions](app-service-ip-restrictions.md)
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
Avoiding the SNAT port problem means avoiding the creation of new connections re
If your destination is an Azure service that supports service endpoints, you can avoid SNAT port exhaustion issues by using [regional VNet Integration](./web-sites-integrate-with-vnet.md) and service endpoints or private endpoints. When you use regional VNet Integration and place service endpoints on the integration subnet, your app outbound traffic to those services will not have outbound SNAT port restrictions. Likewise, if you use regional VNet Integration and private endpoints, you will not have any outbound SNAT port issues to that destination.
-If your destination is an external endpoint outside of Azure, [using a NAT gateway](/azure/app-service/networking/nat-gateway-integration) gives you 64k outbound SNAT ports. It also gives you a dedicated outbound address that you don't share with anybody.
+If your destination is an external endpoint outside of Azure, [using a NAT gateway](./networking/nat-gateway-integration.md) gives you 64k outbound SNAT ports. It also gives you a dedicated outbound address that you don't share with anybody.
If possible, improve your code to use connection pools and avoid the entire situation. It isn't always possible to change code fast enough to mitigate this situation. For the cases where you can't change your code in time, take advantage of the other solutions. The best solution to the problem is to combine all of the solutions as best you can. Try to use service endpoints and private endpoints to Azure services and the NAT gateway for the rest.
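The connection-pool advice above can be illustrated with a minimal generic pool (a Python sketch with hypothetical names, not App Service or library code): reusing a bounded set of connections keeps the number of distinct outbound sockets, and therefore consumed SNAT ports, flat no matter how many requests you make.

```python
from queue import Queue, Empty

class ConnectionPool:
    """Minimal pool: hand out existing connections instead of opening new ones."""
    def __init__(self, factory, max_size=10):
        self._factory = factory          # callable that opens a new connection
        self._idle = Queue(maxsize=max_size)
        self.created = 0                 # how many real connections were opened

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except Empty:
            self.created += 1
            return self._factory()           # open one only when none are idle

    def release(self, conn):
        self._idle.put(conn)                 # return it for reuse, don't close

# 100 sequential requests through the pool open only one real connection,
# so only one SNAT port is consumed instead of 100.
pool = ConnectionPool(factory=lambda: object())
for _ in range(100):
    conn = pool.acquire()
    pool.release(conn)
print(pool.created)  # 1
```

A real HTTP client's keep-alive pool (for example, a shared session object) applies the same idea; the point is to hold connections open and reuse them rather than open-and-close per request.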
You cannot change any Azure settings to release the used SNAT ports sooner, as a
## Additional information * [SNAT with App Service](https://4lowtherabbit.github.io/blogs/2019/10/SNAT/)
-* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
+* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-nodejs-mongodb-app.md
To stop Node.js at any time, press `Ctrl+C` in the terminal.
In this step, you create a MongoDB database in Azure. When your app is deployed to Azure, it uses this cloud database.
-For MongoDB, this tutorial uses [Azure Cosmos DB](/azure/cosmos-db/). Cosmos DB supports MongoDB client connections.
+For MongoDB, this tutorial uses [Azure Cosmos DB](../cosmos-db/index.yml). Cosmos DB supports MongoDB client connections.
### Create a resource group
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
Using a wildcard character in the host name, you can match multiple host names i
>[!NOTE] > This feature is in preview and is available only for Standard_v2 and WAF_v2 SKU of Application Gateway. To learn more about previews, see [terms of use here](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->[!NOTE]
->This feature is currently available only through [Azure PowerShell](tutorial-multiple-sites-powershell.md) and [Azure CLI](tutorial-multiple-sites-cli.md). Portal support is coming soon.
-> Please note that since portal support is not fully available, if you are using only the HostNames parameter, the listener will appear as a Basic listener in the portal and the Host name column of the listener list view will not show the host names that are configured. For any changes to a wildcard listener, make sure you use Azure PowerShell or CLI until it's supported in the portal.
- In [Azure PowerShell](tutorial-multiple-sites-powershell.md), you must use `-HostNames` instead of `-HostName`. With HostNames, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `-HostNames "*.contoso.com,*.fabrikam.com"` In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` instead of `--host-name`. With host-names, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `--host-names "*.contoso.com,*.fabrikam.com"`
+In the Azure portal, under the multi-site listener, you must choose the **Multiple/Wildcard** host type to specify up to five host names with allowed wildcard characters.
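As an illustration of what a wildcard listener matches, here is a Python sketch using `fnmatch` as an approximation (Application Gateway's own matching rules are authoritative; the host names below are hypothetical):

```python
from fnmatch import fnmatch

# Up to 5 host names, wildcards allowed, as in -HostNames / --host-names.
listener_host_names = ["*.contoso.com", "*.fabrikam.com"]

def listener_matches(host: str) -> bool:
    """True if the incoming Host header matches any configured pattern."""
    return any(fnmatch(host, pattern) for pattern in listener_host_names)

print(listener_matches("app.contoso.com"))    # True
print(listener_matches("contoso.com"))        # False - no subdomain label
print(listener_matches("shop.fabrikam.com"))  # True
```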
++ ### Allowed characters in the host names field * `(A-Z,a-z,0-9)` - alphanumeric characters
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/metrics-advisor/whats-new.md
Welcome! This page covers what's new in the Metrics Advisor docs. Check back eve
If you want to learn about the latest updates to Metrics Advisor client SDKs see: * [.NET SDK change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/CHANGELOG.md)
-* [Java SDK change log ](https://github.com/Azure/azure-sdk-for-jav)
+* [Java SDK change log](https://github.com/Azure/azure-sdk-for-jav)
* [Python SDK change log](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/metricsadvisor/azure-ai-metricsadvisor/CHANGELOG.md) * [JavaScript SDK change log](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/metricsadvisor/ai-metrics-advisor/CHANGELOG.md)
If you want to learn about the latest updates to Metrics Advisor client SDKs see
### Updated articles
-* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](/azure/applied-ai-services/metrics-advisor/faq#how-does-metric-advisor-build-a-diagnostic-tree-for-multi-dimensional-metrics)
+* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](/azure/applied-ai-services/metrics-advisor/faq#how-does-metric-advisor-build-a-diagnostic-tree-for-multi-dimensional-metrics)
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
You can send an email from a runbook with [SendGrid](https://sendgrid.com/soluti
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [A SendGrid account](../sendgrid-dotnet-how-to-send-email.md#create-a-sendgrid-account).
+* [A SendGrid account](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#create-a-sendgrid-account).
* Sender Verification has been configured in Send Grid. Either [Domain or Single Sender](https://sendgrid.com/docs/for-developers/sending-email/sender-identity/) * [Automation account](./index.yml) with **Az** modules. * [Run As account](./automation-security-overview.md#run-as-accounts) to store and execute the runbook.
You can send an email from a runbook with [SendGrid](https://sendgrid.com/soluti
You can create an Azure Key Vault using the following PowerShell script. Replace the variable values with values specific to your environment. Use the embedded Azure Cloud Shell via the **Try It** button, located in the top-right corner of the code block. You can also copy and run the code locally if you have the [Az modules](/powershell/azure/install-az-ps) installed on your local machine. This script also creates a [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) that allows the Run As account to get and set key vault secrets in the specified key vault. > [!NOTE]
-> To retrieve your API key, use the steps in [Find your SendGrid API key](../sendgrid-dotnet-how-to-send-email.md#to-find-your-sendgrid-api-key).
+> To retrieve your API key, use the steps in [Find your SendGrid API key](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#to-find-your-sendgrid-api-key).
```azurepowershell-interactive $SubscriptionId = "<subscription ID>"
Remove-AzKeyVault -VaultName $VaultName -ResourceGroupName $ResourceGroupName
* To send runbook job data to your Log Analytics workspace, see [Forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md). * To monitor base-level metrics and logs, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
-* To correct issues arising during runbook operations, see [Troubleshoot runbook issues](./troubleshoot/runbooks.md).
+* To correct issues arising during runbook operations, see [Troubleshoot runbook issues](./troubleshoot/runbooks.md).
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
To better understand regions and Availability Zones in Azure, it helps to unders
## Regions
-A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. Azure gives you the flexibility to deploy applications where you need to, including across multiple regions to deliver cross-region resiliency. For more information, see [Overview of the resiliency pillar](/azure/architecture/framework/resiliency/overview).
+A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. Azure gives you the flexibility to deploy applications where you need to, including across multiple regions to deliver cross-region resiliency. For more information, see [Overview of the resiliency pillar](/azure/architecture/framework/resiliency/principles).
## Availability Zones
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
The App Configuration Java Spring client library supports updating configuration
- Poll Model: This is the default behavior that uses polling to detect changes in configuration. Once the cached value of a setting expires, the next call to `AppConfigurationRefresh`'s `refreshConfigurations` sends a request to the server to check if the configuration has changed, and pulls the updated configuration if needed. -- Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key value change events with Event Grid, with a [Web Hook](/azure/event-grid/handler-event-hubs), the application can use these events to optimize the total number of requests needed to keep the configuration updated.
+- Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key value change events with Event Grid, with a [Web Hook](../event-grid/handler-event-hubs.md), the application can use these events to optimize the total number of requests needed to keep the configuration updated.
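The poll model described above can be sketched generically (hypothetical names; the actual Spring client library handles this for you): a cached value is served until its TTL lapses, and only the first read after expiry triggers a round trip to the server.

```python
import time

class PolledSetting:
    """Serve a cached value; re-fetch from the server only after the TTL lapses."""
    def __init__(self, fetch, ttl_seconds=30):
        self._fetch = fetch              # callable that hits the config server
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0           # cached value starts out expired
        self.server_calls = 0

    def get(self):
        if time.monotonic() >= self._expires_at:
            self._value = self._fetch()  # cache expired: one server request
            self.server_calls += 1
            self._expires_at = time.monotonic() + self._ttl
        return self._value               # otherwise serve the cached value

setting = PolledSetting(fetch=lambda: "dark-theme", ttl_seconds=30)
for _ in range(1000):
    setting.get()
print(setting.server_calls)  # 1 - all reads within the TTL hit the cache
```

The push model removes even that periodic re-fetch: an Event Grid event marks the cached value dirty, so the server is contacted only when the configuration actually changed.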
This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in the quickstarts. Before you continue, finish [Create a Java Spring app with App Configuration](./quickstart-java-spring-app.md) first.
In this tutorial, you learn how to:
</dependency> ```
-1. Setup [Maven App Service Deployment](/azure/app-service/quickstart-java?tabs=javase) so the application can be deployed to Azure App Service via Maven.
+1. Setup [Maven App Service Deployment](../app-service/quickstart-java.md?tabs=javase) so the application can be deployed to Azure App Service via Maven.
```console mvn com.microsoft.azure:azure-webapp-maven-plugin:1.12.0:config
A random delay is added before the cached value is marked as dirty to reduce pot
## Build and run the app locally
-Event Grid Web Hooks require validation on creation. You can validate by following this [guide](/azure/event-grid/webhook-event-delivery) or by starting your application with Azure App Configuration Spring Web Library already configured, which will register your application for you. To use an event subscription, follow the steps in the next two sections.
+Event Grid Web Hooks require validation on creation. You can validate by following this [guide](../event-grid/webhook-event-delivery.md) or by starting your application with Azure App Configuration Spring Web Library already configured, which will register your application for you. To use an event subscription, follow the steps in the next two sections.
1. Set the environment variable to your App Configuration instance's connection string:
Event Grid Web Hooks require validation on creation. You can validate by followi
:::image type="content" source="./media/event-subscription-view-webhook.png" alt-text="Web Hook shows up in a table on the bottom of the page." ::: > [!NOTE]
-> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](/azure/event-grid/event-filtering) or [Service Bus subscription filters](/azure/service-bus-messaging/topic-filters). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](../event-grid/event-filtering.md) or [Service Bus subscription filters](../service-bus-messaging/topic-filters.md). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
## Verify and test application
Event Grid Web Hooks require validation on creation. You can validate by followi
In this tutorial, you enabled your Java app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-arc Manage Postgresql Hyperscale Server Group With Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/manage-postgresql-hyperscale-server-group-with-azure-data-studio.md
This article describes how to:
[!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)] -- Create the [Azure Arc Data Controller](create-data-controller-using-azdata.md)
+- Create the [Azure Arc Data Controller](./create-data-controller-indirect-cli.md)
- Launch Azure Data Studio ## Connect to the Azure Arc Data Controller
Once connected, several experiences are available:
- **...** ## Next step
-[Monitor your server group](monitor-grafana-kibana.md)
+[Monitor your server group](monitor-grafana-kibana.md)
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 08/05/2021 Last updated : 08/09/2021
Azure Arc-enabled servers enables you to deploy and remove Azure VM extensions t
Azure Arc-enabled servers VM extension support provides the following key benefits: -- Collect log data for analysis with [Logs in Azure Monitor](../../azure-monitor/logs/data-platform-logs.md) by enabling the Log Analytics agent VM extension. This is useful for doing complex analysis across data from different kinds of sources.
+- Collect log data for analysis with [Logs in Azure Monitor](../../azure-monitor/logs/data-platform-logs.md) by enabling the Log Analytics agent VM extension. Log Analytics enables complex analysis across log data from different kinds of sources.
- With [VM insights](../../azure-monitor/vm/vminsights-overview.md), it analyzes the performance of your Windows and Linux VMs, and monitors their processes and dependencies on other resources and external processes. This is achieved through enabling both the Log Analytics agent and Dependency agent VM extensions.
To learn about the Azure Connected Machine agent package and details about the E
> [!NOTE] > Recently support for the DSC VM extension was removed for Arc-enabled servers. Alternatively, we recommend using the Custom Script Extension to manage the post-deployment configuration of your server or machine.
+Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
+ ### Windows extensions |Extension |Publisher |Type |Additional information |
azure-australia Azure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/azure-key-vault.md
There are three aspects to storage and keys stored in Key Vault:
- Azure Storage Service Encryption (SSE) for data at rest - Managed disks and Azure Disk Encryption
-Key Vault's Azure Storage account key management is an extension to Key Vault's key service that supports synchronization and regeneration (rotation) of storage account keys. [Azure Storage integration with Azure Active Directory](../storage/common/storage-auth-aad.md) (preview) is recommended when released as it provides superior security and ease of use.
+Key Vault's Azure Storage account key management is an extension to Key Vault's key service that supports synchronization and regeneration (rotation) of storage account keys. [Azure Storage integration with Azure Active Directory](../storage/blobs/authorize-access-azure-active-directory.md) (preview) is recommended when released as it provides superior security and ease of use.
SSE uses two keys to manage encryption of data at rest: - Key Encryption Keys (KEK)
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
You can monitor the following metrics to help determine if you need to scale.
- High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. - Client connections - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier, or scaling out to enable clustering and increase shard count. Your choice depends on the Redis server load and memory usage.
- - For more information on connection limits by cache size, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+ - For more information on connection limits by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
- Network Bandwidth - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth.
- - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+ - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs.
-For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
## Scale a cache
In the Azure portal, you can see the scaling operation in progress. When scaling
[redis-cache-pricing-tier-blade]: ./media/cache-how-to-scale/redis-cache-pricing-tier-blade.png
-[redis-cache-scaling]: ./media/cache-how-to-scale/redis-cache-scaling.png
+[redis-cache-scaling]: ./media/cache-how-to-scale/redis-cache-scaling.png
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-version.md
To create a cache, follow these steps:
> At this time, the Redis version can't be changed once a cache is created. >
+## FAQ
+
+### What features aren't supported with Redis 6?
+
+Currently, Redis 6 does not support clustering, zone redundancy, ACL, PowerShell, Azure CLI, Terraform, and geo-replication between a Redis 4.0 and 6.0 cache.
+
+### Can I change the version of my cache after it's created?
+
+Currently, you cannot change the version of your cache once it's created.
+ ## Next Steps Learn more about Azure Cache for Redis features.
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
There are several possible changes you can make to help keep memory usage health
- Break up your large cached objects into smaller related objects. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like used memory to be notified early about potential impacts. - [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity.-- [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+- [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
## High CPU usage or server load
There are several changes you can make to mitigate high server load:
- Investigate what is causing CPU spikes such as [long-running commands](#long-running-commands) noted below or page faulting because of high memory pressure. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like CPU or server load to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
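Scaling out distributes load because keys are assigned to shards by hashing, so each Redis process serves only a fraction of the traffic. A rough sketch of the idea (Redis Cluster actually uses CRC16 over 16,384 hash slots, and Azure's clustering handles the mapping for you):

```python
from zlib import crc32

def shard_for(key: str, shard_count: int) -> int:
    # Hash the key and map it onto one of the shards; each shard is a
    # separate Redis process with its own CPU, so load spreads out.
    return crc32(key.encode()) % shard_count

keys = [f"session:{i}" for i in range(10_000)]
counts = [0] * 4
for key in keys:
    counts[shard_for(key, 4)] += 1

# With a well-mixed hash, each of the 4 shards holds roughly a quarter
# of the keys, and the same key always maps to the same shard.
print(counts)
```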
## Long-running commands
To mitigate situations where network bandwidth usage is close to maximum capacit
- Change client call behavior to reduce network demand. - [Create alerts](cache-how-to-monitor.md#alerts) on metrics like cache read or cache write to be notified early about potential impacts.-- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity. For more information, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
+- [Scale](cache-how-to-scale.md) to a larger cache size with more network bandwidth capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
## Additional information
To mitigate situations where network bandwidth usage is close to maximum capacit
- [Choosing the right tier](cache-overview.md#choosing-the-right-tier) - [How can I benchmark and test the performance of my cache?](cache-management-faq.yml#how-can-i-benchmark-and-test-the-performance-of-my-cache-) - [How to monitor Azure Cache for Redis](cache-how-to-monitor.md)-- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
+- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/event-driven-scaling.md
You may wish to restrict the maximum number of instances an app uses to scale out
az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT>
```
+```azurepowershell
+$resource = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName <RESOURCE_GROUP> -Name <FUNCTION_APP-NAME>/config/web
+$resource.Properties.functionAppScaleLimit = <SCALE_LIMIT>
+$resource | Set-AzResource -Force
+```
+
## Best practices and patterns for scalable apps

There are many aspects of a function app that impact how it scales, including host configuration, runtime footprint, and resource efficiency. For more information, see the [scalability section of the performance considerations article](functions-best-practices.md#scalability-best-practices). You should also be aware of how connections behave as your function app scales. For more information, see [How to manage connections in Azure Functions](manage-connections.md).
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
This feature is supported for all Windows virtual network-supported SKUs in the
You can use Azure Key Vault references to use secrets from Azure Key Vault in your Azure Functions application without requiring any code changes. Azure Key Vault is a service that provides centralized secrets management, with full control over access policies and audit history.
-Currently, [Key Vault references](../app-service/app-service-key-vault-references.md) won't work if your key vault is secured with service endpoints. To connect to a key vault by using virtual network integration, you need to call Key Vault in your application code.
+If virtual network integration is configured for the app, [Key Vault references](../app-service/app-service-key-vault-references.md) may be used to retrieve secrets from a network-restricted vault.
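A Key Vault reference is an app setting whose value uses the documented `@Microsoft.KeyVault(...)` syntax. As a sketch, it can be set with the Azure CLI; the app, resource group, vault, secret, and setting names below are placeholders:

```azurecli
# Placeholder names; substitute your function app, resource group, vault, and secret.
az functionapp config appsettings set \
  --name <FUNCTION_APP-NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings "MySecret=@Microsoft.KeyVault(SecretUri=https://<VAULT-NAME>.vault.azure.net/secrets/<SECRET-NAME>/)"
```

At runtime the platform resolves the reference, so application code reads `MySecret` like any other app setting, with no Key Vault SDK calls required.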
## Virtual network triggers (non-HTTP)
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure Services in FedRAMP and DoD SRG Audit Scope
-description: This article contains tables for Azure Public and Azure Government that illustrate what FedRAMP (Moderate vs. High) and DoD SRG (Impact level 2, 4, 5 or 6) audit scope a given service has reached.
-- Previously updated : 08/04/2021
+ Title: Azure and other Microsoft cloud services compliance scope
+description: This article tracks FedRAMP, DoD, and ICD 503 compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.
-+ Last updated : 08/09/2021
-# Azure services by FedRAMP and DoD CC SRG audit scope
+# Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
-Microsoft's government cloud services meet the demanding requirements of the US Federal Risk & Authorization Management Program (FedRAMP) and of the US Department of Defense, from information impact levels 2 through 6. By deploying protected services including Azure Government, Office 365 U.S. Government, and Dynamics 365 Government, federal and defense agencies can leverage a rich array of compliant services.
+Microsoft Azure cloud environments meet demanding US government compliance requirements that produce formal authorizations, including:
-This article provides a detailed list of in-scope cloud services across Azure Public and Azure Government for FedRAMP and DoD CC SRG compliance offerings.
+- [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP)
+- Department of Defense (DoD) Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level (IL) 2, 4, 5, and 6
+- [Intelligence Community Directive (ICD) 503](http://www.dni.gov/files/documents/ICD/ICD_503.pdf)
-#### Terminology/symbols used
+**Azure** (also known as Azure Commercial, Azure Public, or Azure Global) maintains the following authorizations:
-* DoD CC SRG = Department of Defense Cloud Computing Security Requirements Guide
-* IL = Impact Level
-* FedRAMP = Federal Risk and Authorization Management Program
-* 3PAO = Third Party Assessment Organization
-* JAB = Joint Authorization Board
-* :heavy_check_mark: = indicates the service has achieved this audit scope.
-* Planned 2021 = indicates the service will be reviewed by 3PAO and JAB in 2021. Once the service is authorized, status will be updated
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) Provisional Authorization to Operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB)
+- [DoD IL2](/azure/compliance/offerings/offering-dod-il2) Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA)
+
+**Azure Government** maintains the following authorizations that pertain to Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia:
+
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) P-ATO issued by the JAB
+- [DoD IL2](/azure/compliance/offerings/offering-dod-il2) PA issued by DISA
+- [DoD IL4](/azure/compliance/offerings/offering-dod-il4) PA issued by DISA
+- [DoD IL5](/azure/compliance/offerings/offering-dod-il5) PA issued by DISA
+
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+
+> [!NOTE]
+>
+> - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+
+**Azure Government Secret** maintains:
+
+- [DoD IL6](/azure/compliance/offerings/offering-dod-il6) PA issued by DISA
+- [ICD 503](/azure/compliance/offerings/offering-icd-503) with facilities at ICD 705 (for authorization details, contact your Microsoft account representative)
+
+This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for the above authorizations across Azure, Azure Government, and Azure Government Secret cloud environments.
## Azure public services by audit scope
-| _Last Updated: August 2021_ |
+*Last Updated: August 2021*
+
+### Terminology used
-| Azure Service| DoD CC SRG IL 2 | FedRAMP Moderate | FedRAMP High | Planned 2021 |
-| |::|:-:|::|::|
-| [AI Builder](/ai-builder/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [API Management](https://azure.microsoft.com/services/api-management/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Application Change Analysis](../../azure-monitor/app/change-analysis.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Automation](https://azure.microsoft.com/services/automation/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Active Directory (Free and Basic)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Active Directory (Premium P1 + P2)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/user-provisioning.md)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Defender for Identity](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure App Configuration](https://azure.microsoft.com/services/app-configuration/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Arc enabled Servers](../../azure-arc/servers/overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Bot Service](/azure/bot-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Stack Edge (Data Box Edge)](https://azure.microsoft.com/services/databox/edge/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Data Box](https://azure.microsoft.com/services/databox/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:**&ast;** | |
-| [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Databricks](https://azure.microsoft.com/services/databricks/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:**&ast;&ast;** | |
-| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure DNS](https://azure.microsoft.com/services/dns/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure for Education](https://azure.microsoft.com/developer/students/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure File Sync](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Firewall Manager](https://azure.microsoft.com/services/firewall-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Machine Learning Services](https://azure.microsoft.com/services/machine-learning-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Marketplace Portal](https://azuremarketplace.microsoft.com/en-us) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Monitor](https://azure.microsoft.com/services/monitor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Private Link](https://azure.microsoft.com/services/private-link/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure RedHat OpenShift](https://azure.microsoft.com/services/openshift/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Resource Graph](../../governance/resource-graph/overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | | :heavy_check_mark: |
-| [Backup](https://azure.microsoft.com/services/backup/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Batch](https://azure.microsoft.com/services/batch/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Cloud Services](https://azure.microsoft.com/services/cloud-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Cognitive Services Containers](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Cognitive Services Personalizer](https://azure.microsoft.com/services/cognitive-services/personalizer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Container Instances](https://azure.microsoft.com/services/container-instances/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Container Registry](https://azure.microsoft.com/services/container-registry/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Data Factory](https://azure.microsoft.com/services/data-factory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Data Integrator](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Customer Service](https://dynamics.microsoft.com/customer-service/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Finance](https://dynamics.microsoft.com/finance/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Guides](/dynamics365/mixed-reality/guides/get-started)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Sales](https://docs.microsoft.com/dynamics365/sales-enterprise/overview) | | | | :heavy_check_mark: |
-| [Dynamics 365 Sales Professional](https://docs.microsoft.com/dynamics365/sales-professional/sales-professional-overview) | | | | :heavy_check_mark: |
-| [Dynamics 365 Supply Chain](https://dynamics.microsoft.com/supply-chain-management/overview/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dynamics 365 Chat (Dynamics 365 Omni-Channel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Dataverse (Common Data Service)](/powerapps/maker/common-data-service/data-platform-intro) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Event Grid](https://azure.microsoft.com/services/event-grid/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Power Automate](https://powerplatform.microsoft.com/power-automate/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Functions](https://azure.microsoft.com/services/functions/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [HDInsight](https://azure.microsoft.com/services/hdinsight/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Import / Export](https://azure.microsoft.com/services/storage/import-export/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [IoT Central](https://azure.microsoft.com/services/iot-central/) | | | | :heavy_check_mark: |
-| [IoT Hub](https://azure.microsoft.com/services/iot-hub/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Key Vault](https://azure.microsoft.com/services/key-vault/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Logic Apps](https://azure.microsoft.com/services/logic-apps/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Media Services](https://azure.microsoft.com/services/media-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft 365 Defender](https://docs.microsoft.com/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Azure Attestation](https://azure.microsoft.com/services/azure-attestation/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Azure Peering Service](../../peering-service/about.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Graph](https://developer.microsoft.com/en-us/graph) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Health Bot](/healthbot/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Power Apps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Threat Experts](/windows/security/threat-protection/microsoft-defender-atp/microsoft-threat-experts) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Redis Cache](https://azure.microsoft.com/services/cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Scheduler](../../scheduler/scheduler-intro.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Security Center](https://azure.microsoft.com/services/security-center/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Service Bus](https://azure.microsoft.com/services/service-bus/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Service Fabric](https://azure.microsoft.com/services/service-fabric/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Synapse Analytics](https://azure.microsoft.com/services/sql-data-warehouse/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [SQL Database](https://azure.microsoft.com/services/sql-database/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Storage: Blobs (Incl. Azure Data Lake Storage Gen2](https://azure.microsoft.com/services/storage/blobs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [StorSimple](https://azure.microsoft.com/services/storsimple/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [UEBA for Sentinel](https://docs.microsoft.com/azure/sentinel/identify-threats-with-entity-behavior-analytics#what-is-user-and-entity-behavior-analytics-ueba) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Visual Studio Codespaces](https://azure.microsoft.com/services/visual-studio-online/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Web Application Firewall)](https://azure.microsoft.com/services/web-application-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Windows 10 IoT Core Services](https://azure.microsoft.com/services/windows-10-iot-core/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+- FedRAMP High = FedRAMP High P-ATO in Azure
+- DoD IL2 = DoD SRG Impact Level 2 PA in Azure
+- &#x2705; = service is included in audit scope and has been authorized
+- Planned 2021 = service will undergo a FedRAMP High assessment in 2021; once the service is authorized, status will be updated
-**&ast;** FedRAMP high certification covers Datacenter Infrastructure Services & Databox Pod and Disk Service which are the online software components supporting Data Box hardware appliance.
+| Service | DoD IL2 | FedRAMP High | Planned 2021 |
+| - |:--:|:--:|:--:|
+| [AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | |
+| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | |
+| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | |
+| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | |
+| [Automation](https://azure.microsoft.com/services/automation/) | &#x2705; | &#x2705; | |
+| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | |
+| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | |
+| [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) | &#x2705; | &#x2705; | |
+| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | |
+| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/user-provisioning.md)| &#x2705; | &#x2705; | |
+| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | |
+| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | |
+| [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | |
+| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | |
+| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | |
+| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | |
+| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | |
+| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | |
+| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | |
+| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | |
+| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | |
+| [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | |
+| [Azure Data Box](https://azure.microsoft.com/services/databox/) **&ast;** | &#x2705; | &#x2705; | |
+| [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) | &#x2705; | &#x2705; | |
+| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | &#x2705; | &#x2705; | |
+| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | |
+| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | |
+| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | |
+| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; | |
+| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | |
+| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | |
+| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | |
+| [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | |
+| [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | |
+| [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | |
+| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | |
+| [Azure Firewall Manager](https://azure.microsoft.com/services/firewall-manager/) | &#x2705; | &#x2705; | |
+| [Azure for Education](https://azure.microsoft.com/developer/students/) | &#x2705; | &#x2705; | |
+| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | |
+| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | |
+| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | |
+| [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | |
+| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | |
+| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | |
+| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | |
+| [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | &#x2705; | &#x2705; | |
+| [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) | | | &#x2705; |
+| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | |
+| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | &#x2705; | &#x2705; | |
+| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | |
+| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | |
+| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/) | &#x2705; | &#x2705; | |
+| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | |
+| [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | |
+| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | |
+| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | |
+| [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | |
+| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | |
+| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | |
+| [Azure Monitor Application Change Analysis](../../azure-monitor/app/change-analysis.md) | &#x2705; | &#x2705; | |
+| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | |
+| [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; | |
+| [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | |
+| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | |
+| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | |
+| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | &#x2705; | |
+| [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) | &#x2705; | &#x2705; | |
+| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | |
+| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | |
+| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | |
+| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | |
+| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | |
+| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | |
+| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | |
+| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | |
+| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | |
+| [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; | |
+| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) (incl. [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/)) | &#x2705; | &#x2705; | |
+| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | |
+| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | |
+| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | |
+| [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | &#x2705; | &#x2705; | |
+| [Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | |
+| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | &#x2705; |
+| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | |
+| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | |
+| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | |
+| Cognitive Services *(additional entries truncated in source)* | | | |
+| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | |
+| Cognitive Services *(additional entries truncated in source)* | | | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| Cognitive Services *(additional entries truncated in source)* | | | |
+| [Container Instances](https://azure.microsoft.com/services/container-instances/) | &#x2705; | &#x2705; | |
+| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | |
+| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | |
+| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | |
+| [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | |
+| [Data Integrator](/power-platform/admin/data-integrator) | &#x2705; | &#x2705; | |
+| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | |
+| [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| &#x2705; | &#x2705; | |
+| [Dynamics 365 Customer Service](https://dynamics.microsoft.com/customer-service/overview/)| &#x2705; | &#x2705; | |
+| [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| &#x2705; | &#x2705; | |
+| [Dynamics 365 Finance](https://dynamics.microsoft.com/finance/overview/)| &#x2705; | &#x2705; | |
+| [Dynamics 365 Guides](https://dynamics.microsoft.com/mixed-reality/guides/)| &#x2705; | &#x2705; | |
+| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | | | &#x2705; |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Dynamics 365 Sales Professional](https://dynamics.microsoft.com/sales/professional/) | | | &#x2705; |
+| [Dynamics 365 Supply Chain Management](https://dynamics.microsoft.com/supply-chain-management/overview/)| &#x2705; | &#x2705; | |
+| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | |
+| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | |
+| [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | |
+| [GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/) (formerly Visual Studio Codespaces) | &#x2705; | &#x2705; | |
+| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | |
+| [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | |
+| [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | |
+| [Microsoft Azure Attestation](https://azure.microsoft.com/services/azure-attestation/)| &#x2705; | &#x2705; | |
+| [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; | |
+| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; | |
+| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security) | &#x2705; | &#x2705; | |
+| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Microsoft Defender for Identity](/defender-for-identity/what-is) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | |
+| [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | |
+| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | |
+| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | |
+| [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | |
+| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | |
+| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | |
+| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | |
+| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | |
+| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | |
+| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | |
+| [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | |
+| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | |
+| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | &#x2705; | |
+| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | |
+| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | |
+| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | |
+| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | |
+| [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | |
+| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; | |
+| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | &#x2705; | &#x2705; | |
+| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | |
+| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | |
+| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | |
+| [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | |
+| [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | &#x2705; | &#x2705; | |
+| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | &#x2705; | &#x2705; | |
+| [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | |
+| [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; | |
+| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | &#x2705; | &#x2705; | |
+| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | &#x2705; | |
+| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | &#x2705; | &#x2705; | |
+| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | &#x2705; | &#x2705; | |
+| [Windows 10 IoT Core Services](https://azure.microsoft.com/services/windows-10-iot-core/) | &#x2705; | &#x2705; | |
-**&ast;&ast;** FedRAMP High certification for Azure Databricks is applicable for limited regions in Azure Commercial. To configure Azure Databricks for FedRAMP High use, please reach out to your Microsoft or Databricks Representative.
+**&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and the Data Box pod and disk service, which are the online software components supporting the Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
+
+**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative.
## Azure Government services by audit scope
-| _Last Updated: July 2021_ |
+*Last Updated: August 2021*
+
+### Terminology used
-| Azure Service | DoD CC SRG IL 2 | DoD CC SRG IL 4 | DoD CC SRG IL 5 (Azure Gov)**&ast;** | DoD CC SRG IL 5 (Azure DoD) **&ast;&ast;** | FedRAMP High | DoD CC SRG IL 6
-| - |:-:|:-:|:-:|:-:|:-:|:-:|
-| [API Management](https://azure.microsoft.com/services/api-management/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Automation](https://azure.microsoft.com/services/automation/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Active Directory (Free and Basic)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Active Directory (Premium P1 + P2)](https://azure.microsoft.com/services/active-directory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Microsoft Defender for Identity](https://azure.microsoft.com/features/azure-advanced-threat-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure App Configuration](https://azure.microsoft.com/services/app-configuration/) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Bot Service](/azure/bot-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Container Registry](https://azure.microsoft.com/services/container-registry/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark:
-| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Stack Edge (Data Box Edge)](https://azure.microsoft.com/services/databox/edge/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Data Box](https://azure.microsoft.com/services/databox/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Data Factory](https://azure.microsoft.com/services/data-factory/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure DB for MySQL](https://azure.microsoft.com/services/mysql/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure DB for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure DB for MariaDB](https://azure.microsoft.com/services/mariadb/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure DNS](https://azure.microsoft.com/services/dns/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure File Sync](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Microsoft Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Maps](https://azure.microsoft.com/services/azure-maps/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Monitor](https://azure.microsoft.com/services/monitor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Private Link](https://azure.microsoft.com/services/private-link/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Resource Graph](../../governance/resource-graph/overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Stack Hub](/azure-stack/operator/azure-stack-overview)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Backup](https://azure.microsoft.com/services/backup/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Batch](https://azure.microsoft.com/services/batch/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Cloud Services](https://azure.microsoft.com/services/cloud-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Cognitive Services *(additional entries truncated in source)* | | | | | |
-| [Cognitive Services Personalizer](https://azure.microsoft.com/services/cognitive-services/personalizer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| Cognitive Services *(additional entries truncated in source)* | | | | | |
-| [Container Instances](https://azure.microsoft.com/services/container-instances/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Data Integrator](/power-platform/admin/data-integrator) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Dynamics 365 Chat (Dynamics 365 Service Omni-Channel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Dynamics 365 Customer Voice](/forms-pro/get-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Dynamics 365 Customer Insights](/dynamics365/ai/customer-insights/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Dataverse (Common Data Service)](/dynamics365/customerengagement/on-premises/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Project Service Automation](/dynamics365/project-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Power Automate](/flow/getting-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Functions](https://azure.microsoft.com/services/functions/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [HDInsight](https://azure.microsoft.com/services/hdinsight/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Import / Export](https://azure.microsoft.com/services/storage/import-export/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [IoT Hub](https://azure.microsoft.com/services/iot-hub/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Key Vault](https://azure.microsoft.com/services/key-vault/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Logic Apps](https://azure.microsoft.com/services/logic-apps/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Machine Learning Services](https://azure.microsoft.com/services/machine-learning-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Media Services](https://azure.microsoft.com/services/media-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft Azure Peering Service](../../peering-service/about.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Microsoft Graph](/graph/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Power Apps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Network Watcher(Traffic Analytics)](../../network-watcher/traffic-analytics.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Planned Maintenance](https://docs.microsoft.com/azure/virtual-machines/maintenance-control-portal) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Power BI](https://powerbi.microsoft.com/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Redis Cache](https://azure.microsoft.com/services/cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Scheduler](../../scheduler/scheduler-intro.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Service Bus](https://azure.microsoft.com/services/service-bus/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Service Fabric](https://azure.microsoft.com/services/service-fabric/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Synapse Analytics](https://azure.microsoft.com/services/sql-data-warehouse/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [SQL Database](https://azure.microsoft.com/services/sql-database/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Storage: Blobs (Incl. Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [StorSimple](https://azure.microsoft.com/services/storsimple/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | :heavy_check_mark: | | | | :heavy_check_mark: | :heavy_check_mark:
-| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Web Application Firewall)](https://azure.microsoft.com/services/web-application-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+- FR High = FedRAMP High P-ATO in Azure Government
+- DoD IL2 = DoD SRG Impact Level 2 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL4 = DoD SRG Impact Level 4 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL5 = DoD SRG Impact Level 5 PA in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
+- DoD IL6 = DoD SRG Impact Level 6 PA in Azure Government Secret
+- ICD 503 = Intelligence Community Directive 503 PA in Azure Government Secret
+- &#x2705; = service is included in audit scope and has been authorized
+> [!NOTE]
+>
+> - Some services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
-**&ast;** DoD CC SRG IL5 (Azure Gov) column shows DoD CC SRG IL5 certification status of services in Azure Government. For details, please refer to [Azure Government Isolation Guidelines for Impact Level 5](../documentation-government-impact-level-5.md)
+| Service | FR High / DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | ICD 503 |
+| - |:--:|:-:|:-:|:-:|:-:|
+| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | | | | |
+| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Automation](https://azure.microsoft.com/services/automation/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Data Box](https://azure.microsoft.com/services/databox/) **&ast;** | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | | | | |
+| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | | | | |
+| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | | | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | | |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Data Integrator](/power-platform/admin/data-integrator) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Customer Insights](/dynamics365/customer-insights/audience-insights/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | | | | |
+| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | | | | |
+| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
+| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
+| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)| &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Defender for Identity](/defender-for-identity/what-is) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | | | | |
+| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power Query Online](https://powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | | | | |
+| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Disks (incl. Managed Disks)](https://azure.microsoft.com/services/storage/disks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503** |
+| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | | | &#x2705; | &#x2705; |
+| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-**&ast;&ast;** DoD CC SRG IL5 (Azure DoD) column shows DoD CC SRG IL5 certification status for services in Azure Government DoD regions.
+**&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-developer-guide.md
For more information on Azure Government Compliance, refer to the [compliance do
### Azure Blueprints
-[Azure Blueprints](../governance/blueprints/overview.md) is a service that helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies. Resources provisioned through Azure Blueprints adhere to an organization's standards, patterns, and compliance requirements. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. To help you deploy a core set of policies for any Azure-based architecture that requires compliance with certain US government compliance requirements, see [Azure Blueprint samples](/azure/governance/blueprints/samples/).
+[Azure Blueprints](../governance/blueprints/overview.md) is a service that helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies. Resources provisioned through Azure Blueprints adhere to an organization's standards, patterns, and compliance requirements. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. To help you deploy a core set of policies for any Azure-based architecture that requires compliance with certain US government compliance requirements, see [Azure Blueprint samples](../governance/blueprints/samples/index.md).
## Endpoint mapping
For more information about Azure Government, see the following resources:
- [Ask questions via the azure-gov tag in StackOverflow](https://stackoverflow.com/tags/azure-gov)
- [Azure Government Overview](./documentation-government-welcome.md)
- [Azure Government Blog](https://blogs.msdn.microsoft.com/azuregov/)
-- [Azure Compliance](../compliance/index.yml)
+- [Azure Compliance](../compliance/index.yml)
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-dod.md
Last updated 08/04/2021
## Overview
-Azure Government is used by the US Department of Defense (DoD) entities to deploy a broad range of workloads and solutions, including workloads subject to the DoD Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level 4 (IL4) and Impact Level 5 (IL5) restrictions. Azure Government was the first hyperscale cloud services platform to be awarded a DoD IL5 Provisional Authorization (PA) by the Defense Information Systems Agency (DISA). For more information about DISA and DoD IL5, see [Department of Defense (DoD) Impact Level 5](/azure/compliance/offerings/offering-dod-il5) compliance documentation.
+Azure Government is used by the US Department of Defense (DoD) entities to deploy a broad range of workloads and solutions. Some of these workloads can be subject to the DoD Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level 4 (IL4) and Impact Level 5 (IL5) restrictions. Azure Government was the first hyperscale cloud services platform to be awarded a DoD IL5 Provisional Authorization (PA) by the Defense Information Systems Agency (DISA). For more information about DISA and DoD IL5, see [Department of Defense (DoD) Impact Level 5](/azure/compliance/offerings/offering-dod-il5) compliance documentation.
Azure Government offers the following regions to DoD mission owners and their partners:
Azure Government offers the following regions to DoD mission owners and their pa
|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|138| |US DoD Central </br> US DoD East|DoD IL5|64|
-**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East).
+**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East).
The primary differences between DoD IL5 PAs that are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East) are:
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
- [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) - [Azure Backup](https://azure.microsoft.com/services/backup/) - [Azure Cache for Redis](https://azure.microsoft.com/services/cache/)
+- [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/)
- [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) - [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) - [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) - [Azure DNS](https://azure.microsoft.com/services/dns/)
+- [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/)
- [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) - [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) - [Azure Functions](https://azure.microsoft.com/services/functions/)
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
- [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) - [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) - [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/)-- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/) (incl. [Azure SQL MI](https://azure.microsoft.com/products/azure-sql/managed-instance/))-- [Azure Synapse Analytics (formerly SQL Data Warehouse)](https://azure.microsoft.com/services/synapse-analytics/)
+- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/) (incl. [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/))
+- [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/)
- [Batch](https://azure.microsoft.com/services/batch/)-- [Cloud Services](https://azure.microsoft.com/services/cloud-services/)
+- [Dataverse](/powerapps/maker/data-platform/data-platform-intro) (formerly Common Data Service)
- [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) - [Dynamics 365 Field Service](/dynamics365/field-service/overview) - [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) - [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) - [Event Grid](https://azure.microsoft.com/services/event-grid/) - [Event Hubs](https://azure.microsoft.com/services/event-hubs/)-- [ExpressRoute](https://azure.microsoft.com/services/expressroute/) - [Import/Export](https://azure.microsoft.com/services/storage/import-export/) - [Key Vault](https://azure.microsoft.com/services/key-vault/) - [Load Balancer](https://azure.microsoft.com/services/load-balancer/) - [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)-- [Microsoft Dataverse (formerly Common Data Service)](/powerapps/maker/data-platform/data-platform-intro)-- [Microsoft Defender for Endpoint (formerly Microsoft Defender Advanced Threat Protection)](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint)
+- [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) (formerly Microsoft Defender Advanced Threat Protection)
- [Microsoft Graph](/graph/overview) - [Microsoft Stream](/stream/overview) - [Network Watcher](https://azure.microsoft.com/services/network-watcher/) - [Network Watcher Traffic Analytics](../network-watcher/traffic-analytics.md) - [Power Apps](/powerapps/powerapps-overview) - [Power Apps portal](https://powerapps.microsoft.com/portals/)-- [Power Automate (formerly Microsoft Flow)](/power-automate/getting-started)
+- [Power Automate](/power-automate/) (formerly Microsoft Flow)
- [Power BI](https://powerbi.microsoft.com/) - [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) - [Service Bus](https://azure.microsoft.com/services/service-bus/)
Azure Government is a US government community cloud providing services for feder
Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Moreover, some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md). ### What is IL5 data? 
-IL5 accommodates controlled unclassified information (CUI) that requires a higher level of protection than that afforded by IL4 as deemed necessary by the information owner, public law, or other government regulations. IL5 also supports unclassified National Security Systems (NSS). This impact level accommodates NSS and CUI categorizations based on CNSSI 1253 up to moderate confidentiality and moderate integrity (M-M-x). For more information on IL5 data, see [DoD IL5 overview](/azure/compliance/offerings/offering-dod-il5#dod-il5-overview).
+IL5 accommodates controlled unclassified information (CUI) that requires a higher level of protection than is afforded by IL4 as deemed necessary by the information owner, public law, or other government regulations. IL5 also supports unclassified National Security Systems (NSS). This impact level accommodates NSS and CUI categorizations based on CNSSI 1253 up to moderate confidentiality and moderate integrity (M-M-x). For more information on IL5 data, see [DoD IL5 overview](/azure/compliance/offerings/offering-dod-il5#dod-il5-overview).
### What is the difference between IL4 and IL5 data?   IL4 data is controlled unclassified information (CUI) that may include data subject to export control, protected health information, and other data requiring explicit CUI designation (for example, For Official Use Only, Law Enforcement Sensitive, and Sensitive Security Information).
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMoni
## Next steps - [Install Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-get-started.md
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense
> [!NOTE] > The `AllowPrerelease` switch in the `Install-Module` cmdlet allows installation of the beta release. >
-> For additional information, see [Install-Module](https://docs.microsoft.com/powershell/module/powershellget/install-module?view=powershell-7.1#parameters).
+> For additional information, see [Install-Module](/powershell/module/powershellget/install-module?view=powershell-7.1#parameters).
> ### Enable monitoring
Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=xxxxx
Do more with Application Insights Agent: - Review the [detailed instructions](status-monitor-v2-detailed-instructions.md) for an explanation of the commands found here.-- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
+- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-update-metrics.md
To support these new capabilities, a new containerized agent is included in the
Either process assigns the **Monitoring Metrics Publisher** role to the cluster's service principal or User assigned MSI for the monitoring add-on so that the data collected by the agent can be published to your cluster's resource. Monitoring Metrics Publisher has permission only to push metrics to the resource; it cannot alter any state, update the resource, or read any data. For more information about the role, see [Monitoring Metrics Publisher role](../../role-based-access-control/built-in-roles.md#monitoring-metrics-publisher). The Monitoring Metrics Publisher role requirement is not applicable to Azure Arc enabled Kubernetes clusters. > [!IMPORTANT]
-> The upgrade is not required for Azure Arc enabled Kubernetes clusters since they will already have the minimum required agent version.
+> The upgrade is not required for Azure Arc enabled Kubernetes clusters since they will already have the minimum required agent version.
+> The assignment of the **Monitoring Metrics Publisher** role to the cluster's service principal or User assigned MSI for the monitoring add-on is done automatically when you use the Azure portal, Azure PowerShell, or Azure CLI.
## Prerequisites
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/diagnostic-settings.md
Any destinations for the diagnostic setting must be created before creating the
|:|:| | Log Analytics workspace | The workspace does not need to be in the same region as the resource being monitored.| | Event hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. |
-| Azure storage account | You should not use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you are archiving the Activity log and resource logs together though, you may choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Blob storage](../../storage/blobs/storage-blob-immutability-policies-manage.md). You must follow all steps in this article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional. |
+| Azure storage account | You should not use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you are archiving the Activity log and resource logs together though, you may choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Blob storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional. |
> [!NOTE] > Azure Data Lake Storage Gen2 accounts are not currently supported as a destination for diagnostic settings even though they may be listed as a valid option in the Azure portal.
Diagnostic settings do not support resourceIDs with non-ASCII characters (for ex
## Next steps -- [Read more about Azure platform Logs](./platform-logs-overview.md)
+- [Read more about Azure platform Logs](./platform-logs-overview.md)
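The resource-ID limitation called out above (no non-ASCII characters in `resourceId`) can be pre-checked client-side before creating a diagnostic setting. A minimal sketch; the function name and paths are illustrative, not part of any Azure SDK:

```python
def is_supported_resource_id(resource_id: str) -> bool:
    """Diagnostic settings reject resource IDs that contain non-ASCII characters."""
    return resource_id.isascii()

# A resource group name such as "Präprod" makes the resource ID unsupported.
print(is_supported_resource_id("/subscriptions/000/resourceGroups/Präprod"))  # False
```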
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
A combination of the resource type (available in the `resourceId` property) and
## Costs
-[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/en-us/product-categories/storage/), [Event hub](https://azure.microsoft.com/en-us/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor ([for example Datadog](/azure/partner-solutions/datadog/overview)) have costs associated with ingesting data and storing data. Check the previous links to pricing pages for these services to understand those costs. Resource logs are just one type of data you can send to these locations.
+[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/en-us/product-categories/storage/), [Event hub](https://azure.microsoft.com/en-us/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor ([for example Datadog](../../partner-solutions/datadog/overview.md)) have costs associated with ingesting data and storing data. Check the previous links to pricing pages for these services to understand those costs. Resource logs are just one type of data you can send to these locations.
In addition, there may be costs to export some categories of resource logs to those locations. Those logs with possible export costs are listed in the table below. For more information on export pricing, see the *Platform Logs* section in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
If you think something is missing, you can open a GitHub comment at the
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](../../mysql/concepts-server-logs.md#diagnostic-logs) | | Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](../../postgresql/concepts-server-logs.md#resource-logs) | | Azure Databricks | [Diagnostic logging in Azure Databricks](/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs) |
-| Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](/azure/machine-learning/monitor-resource-reference) |
+| Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](../../machine-learning/monitor-resource-reference.md) |
| DDoS Protection | [Logging for Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) | | Azure Digital Twins | [Set up Azure Digital Twins Diagnostics](../../digital-twins/troubleshoot-diagnostics.md#log-schemas) | Event Hubs |[Azure Event Hubs logs](../../event-hubs/event-hubs-diagnostic-logs.md) |
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-security.md
The Azure Monitor service ensures that incoming data is from a trusted source by
The retention period of collected data stored in the database depends on the selected pricing plan. For the *Free* tier, collected data is available for seven days. For the *Paid* tier, collected data is available for 31 days by default, but can be extended to 730 days. Data is stored encrypted at rest in Azure storage, to ensure data confidentiality, and the data is replicated within the local region using locally redundant storage (LRS). The last two weeks of data are also stored in SSD-based cache and this cache is encrypted.
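The retention tiers above can be summarized in a small helper. This is an illustrative sketch of the documented behavior (Free: 7 days; Paid: 31 days by default, extendable to 730), not an Azure API:

```python
def effective_retention_days(tier, requested=None):
    """Return the retention implied by the pricing tier described above."""
    if tier == "Free":
        return 7  # Free tier: collected data is available for seven days
    if requested is None:
        return 31  # Paid tier default
    return max(31, min(requested, 730))  # Paid tier can be extended up to 730 days
```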
-Data in database storage cannot be altered once ingested but can be deleted via [*purge* API path](personal-data-mgmt.md#delete). Although data cannot be altered, some certifications require that data is kept immutable and cannot be changed or deleted in storage. Data immutability can be achieved using [data export](logs-data-export.md) to a storage account that is configured as [immutable storage](../../storage/blobs/storage-blob-immutability-policies-manage.md).
+Data in database storage cannot be altered once ingested but can be deleted via [*purge* API path](personal-data-mgmt.md#delete). Although data cannot be altered, some certifications require that data is kept immutable and cannot be changed or deleted in storage. Data immutability can be achieved using [data export](logs-data-export.md) to a storage account that is configured as [immutable storage](../../storage/blobs/immutable-policy-configure-version-scope.md).
## 4. Use Azure Monitor to access the data To access your Log Analytics workspace, you sign into the Azure portal using the organizational account or Microsoft account that you set up previously. All traffic between the portal and Azure Monitor service is sent over a secure HTTPS channel. When using the portal, a session ID is generated on the user client (web browser) and data is stored in a local cache until the session is terminated. When terminated, the cache is deleted. Client-side cookies, which do not contain personally identifiable information, are not automatically removed. Session cookies are marked HTTPOnly and are secured. After a pre-determined idle period, the Azure portal session is terminated.
You can use these additional security features to further secure your Azure Moni
## Next steps * [See the different kinds of data that you can collect in Azure Monitor](../monitor-reference.md).-
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
The storage account data format is [JSON lines](../essentials/resource-logs-blob
[![Storage sample data](media/logs-data-export/storage-data.png)](media/logs-data-export/storage-data.png#lightbox)
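Because the storage export format is JSON lines (one JSON object per line), a downloaded blob can be parsed record by record. A short sketch; the field names here are illustrative, not the exact export schema:

```python
import json

# Two sample records as they might appear in an exported append blob.
blob_text = (
    '{"TimeGenerated": "2021-08-10T03:12:22Z", "Type": "SecurityEvent"}\n'
    '{"TimeGenerated": "2021-08-10T03:12:25Z", "Type": "Syslog"}\n'
)

# Parse each non-empty line as one JSON record.
records = [json.loads(line) for line in blob_text.splitlines() if line.strip()]
print(len(records))  # 2
```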
-Log Analytics data export can write append blobs to immutable storage accounts when time-based retention policies have the *allowProtectedAppendWrites* setting enabled. This allows writing new blocks to an append blob, while maintaining immutability protection and compliance. See [Allow protected append blobs writes](../../storage/blobs/storage-blob-immutable-storage.md#allow-protected-append-blobs-writes).
+Log Analytics data export can write append blobs to immutable storage accounts when time-based retention policies have the *allowProtectedAppendWrites* setting enabled. This allows writing new blocks to an append blob, while maintaining immutability protection and compliance. See [Allow protected append blobs writes](../../storage/blobs/immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
### Event hub Data is sent to your event hub in near-real-time as it reaches Azure Monitor. An event hub is created for each data type that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an event hub named *am-SecurityEvent*. If you want the exported data to reach a specific event hub, or if you have a table with a name that exceeds the 47-character limit, you can provide your own event hub name and export all data for defined tables to it.
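The default naming rule above can be sketched as follows. The helper name is illustrative; per the text, tables whose names exceed the 47-character limit need a user-supplied event hub name:

```python
def default_event_hub_name(table):
    """Return the default export event hub name ("am-" + table name),
    or None when the table name exceeds the 47-character limit and a
    custom event hub name must be provided instead."""
    if len(table) > 47:
        return None
    return f"am-{table}"

print(default_event_hub_name("SecurityEvent"))  # am-SecurityEvent
```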
Supported tables are currently limited to those specified below. All data from t
| NWConnectionMonitorPathResult | | | NWConnectionMonitorTestResult | | | OfficeActivity | Partial support in government clouds – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. |
-| Operation | Partial support – some of the data is ingested through internal services that isn't supported for export. This portion is missing in export currently. |
+| Operation | Partial support – some of the data is ingested through internal services that aren't supported for export. This portion is missing in export currently. |
| Perf | Partial support – only Windows perf data is currently supported. The Linux perf data is missing in export currently. | | PowerBIDatasetsWorkspace | | | PurviewScanStatusLogs | |
Supported tables are currently limited to those specified below. All data from t
| SecurityBaselineSummary | | | SecurityCef | | | SecurityDetection | |
-| SecurityEvent | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported in export.2 |
+| SecurityEvent | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected through storage while this path isn't supported in export.2 |
| SecurityIncident | | | SecurityIoTRawEvent | | | SecurityNestedRecommendation | |
Supported tables are currently limited to those specified below. All data from t
| SynapseSqlPoolRequestSteps | | | SynapseSqlPoolSqlRequests | | | SynapseSqlPoolWaits | |
-| Syslog | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported in export.2 |
+| Syslog | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected through storage while this path isn't supported in export.2 |
| ThreatIntelligenceIndicator | |
-| Update | Partial support – some of the data is ingested through internal services that isn't supported for export. This portion is missing in export currently. |
+| Update | Partial support – some of the data is ingested through internal services that aren't supported for export. This portion is missing in export currently. |
| UpdateRunProgress | | | UpdateSummary | | | Usage | |
Supported tables are currently limited to those specified below. All data from t
| Watchlist | | | WindowsEvent | | | WindowsFirewall | |
-| WireData | Partial support – some of the data is ingested through internal services that isn't supported for export. This portion is missing in export currently. |
+| WireData | Partial support – some of the data is ingested through internal services that aren't supported for export. This portion is missing in export currently. |
| WorkloadDiagnosticLogs | | | WVDAgentHealthStatus | | | WVDCheckpoints | |
Supported tables are currently limited to those specified below. All data from t
## Next steps -- [Query the exported data from Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
+- [Query the exported data from Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-storage.md
To replace a storage account used for ingestion,
When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences. #### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (for more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../insights/storage-insights-overview.md).
+Storage accounts can handle a certain load of read and write requests before they start throttling requests (for more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
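Spreading ingestion across several registered storage accounts, as suggested above, amounts to simple rotation. A hypothetical sketch (the account names and helper are made up for illustration):

```python
from itertools import cycle

def assign_targets(batches, accounts):
    """Round-robin each pending log batch onto the next storage account,
    so no single account absorbs the whole write load."""
    rr = cycle(accounts)
    return [(batch, next(rr)) for batch in batches]

# Hypothetical registered accounts used to share the ingestion load.
accounts = ["logstorageprimary", "logstoragesecondary"]
```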
### Related charges Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
Insights provide a customized monitoring experience for particular applications
| [Cosmos DB insights](insights/cosmosdb-insights-overview.md) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Networks insights (preview)](insights/network-insights-overview.md) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. | [Resource Group insights (preview)](insights/resource-group-insights.md) | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
-| [Storage insights](insights/storage-insights-overview.md) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
+| [Storage insights](../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
| [VM insights](vm/vminsights-overview.md) | Monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and monitors their processes and dependencies on other resources and external processes. | | [Key Vault insights (preview)](./insights/key-vault-insights-overview.md) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. | | [Azure Cache for Redis insights (preview)](insights/redis-cache-insights-overview.md) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
The following table lists Azure services and the data they collect into Azure Mo
|SQL Database | Yes | Yes | No | | |SQL Server Stretch Database | Yes | Yes | No | | |Stack | No | No | No | |
-|Storage | Yes | No | [Yes](insights/storage-insights-overview.md) | |
+|Storage | Yes | No | [Yes](../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) | |
|Storage Cache | No | No | No | | |Storage Sync Services | No | No | No | | |Stream Analytics | Yes | Yes | No | |
Azure Monitor can collect data from resources outside of Azure using the methods
- Read more about the [Azure Monitor data platform which stores the logs and metrics collected by insights and solutions](data-platform.md). - Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md). - Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics-explorer.md).-
+- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics-explorer.md).
azure-monitor Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualizations.md
Here is a video walkthrough on creating dashboards.
- Cost to support additional Grafana infrastructure or additional cost for Grafana Cloud. ## Azure Monitor partners
-Some [Azure Monitor partners](/azure/azure-monitor/partners) may provide visualization functionality. The previous link lists partners evaluated by Microsoft.
+Some [Azure Monitor partners](./partners.md) may provide visualization functionality. The previous link lists partners evaluated by Microsoft.
### Advantages - May provide out of the box visualizations saving time
You can access data in log and metric data in Azure Monitor through their API us
- Learn about [Workbooks](./visualize/workbooks-overview.md). - Learn about [import log data into Power BI](./visualize/powerbi.md). - Learn about the [Grafana Azure Monitor data source plugin](./visualize/grafana-plugin.md).-- Learn about [Views in Azure Monitor](visualize/view-designer.md).-
+- Learn about [Views in Azure Monitor](visualize/view-designer.md).
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-automate.md
For a technical reason, this mechanism cannot be used to create workbook instanc
## Next steps
-Explore how workbooks are being used to power the new [Storage insights experience](../insights/storage-insights-overview.md).
+Explore how workbooks are being used to power the new [Storage insights experience](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
azure-percept How To Get Hardware Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-get-hardware-support.md
If you would like to contact ASUS about purchasing dev kits, you can submit an i
## Next steps If you think you need more support, you can also try these options from Microsoft.-- [Microsoft Q&A](https://docs.microsoft.com/answers/products/)-- [Azure Support](https://azure.microsoft.com/support/plans/)-
+- [Microsoft Q&A](/answers/products/)
+- [Azure Support](https://azure.microsoft.com/support/plans/)
azure-percept How To Set Up Advanced Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-set-up-advanced-network-settings.md
The Azure Percept DK allows you to control various networking components on the
IPv4 and IPv6 are both supported on the Azure Percept DK for local connectivity. > [!NOTE]
-> Azure IoTHub [does not supports IPv6](https://docs.microsoft.com/azure/iot-hub/iot-hub-understand-ip-address#support-for-ipv6). IPv4 must be used to communicate with IoTHub.
+> Azure IoT Hub [does not support IPv6](../iot-hub/iot-hub-understand-ip-address.md#support-for-ipv6). IPv4 must be used to communicate with IoT Hub.
1. Select the IPv4 radio button and then select an item under Network Settings to change its IPv4 settings 1. Select the IPv6 radio button and then select an item under Network Settings to change its IPv6 settings 1. The **Network setting** options may change depending on your selection
Passphrase requirements:
1. Select **Back** to return to the main **Advanced networking settings** page ## Next steps
-After you have finished making changes in **Advanced network settings**, select the **Back** button to [continue through the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
-
+After you have finished making changes in **Advanced network settings**, select the **Back** button to [continue through the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
azure-percept Speech Module Interface Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/speech-module-interface-workflow.md
This article describes how the Azure Percept speech module interacts with IoT Hu
- IoT Hub can send control requests to speech module via the Module method. - IoT Hub can get speech module status via the Module method.
-For more details, please refer to [Understand and use module twins in IoT Hub](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-module-twins).
+For more details, please refer to [Understand and use module twins in IoT Hub](../iot-hub/iot-hub-devguide-module-twins.md).
## Speech module states
Here's an example using the module method GetModuleState:
- Payload: "DeviceRemoved" ## Next steps
-Try to apply these concepts when [configuring a voice assistant application using Azure IoT Hub](./how-to-configure-voice-assistant.md).
+Try to apply these concepts when [configuring a voice assistant application using Azure IoT Hub](./how-to-configure-voice-assistant.md).
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/vision-solution-troubleshooting.md
View your device's RTSP video stream in [Azure Percept Studio](./how-to-view-vid
To open the RTSP stream in VLC media player, go to **Media** > **Open network stream** > **rtsp://[device IP address]:8554/result**.
+If your RTSP stream is partially blocked by a gray box, you may be trying to view it over a poor network connection. Check that your connection has sufficient bandwidth for video streams.
+ ## Next steps For more information on troubleshooting your Azure Percept DK instance, see the [General troubleshooting guide](./troubleshoot-dev-kit.md).
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Discover Bicep on Microsoft Learn description: Provides an overview of the units that are available on Microsoft Learn for Bicep. Previously updated : 07/30/2021 Last updated : 08/08/2021 # Bicep on Microsoft Learn
In addition to the preceding path, the following modules contain Bicep content.
| [Manage changes to your Bicep code by using Git](/learn/modules/manage-changes-bicep-code-git/) | Learn how to use Git to support your Bicep development workflow by keeping track of the changes you make as you work. You'll find out how to commit files, view the history of the files you've changed, and how to use branches to develop multiple versions of your code at the same time. You'll also learn how to use GitHub or Azure Repos to publish a repository so that you can collaborate with team members. | | [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs/) | Template specs enable you to reuse and share your ARM templates across your organization. Learn how to create and publish template specs, and how to deploy them. You'll also learn how to manage template specs, including how to control access and how to safely update them by using versions. | | [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) | This module teaches you how to preview your changes with the what-if operation. By using what-if, you can make sure your Bicep file only makes changes that you expect. |
+| [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/) | Build Bicep files that support collaborative development and follow best practices. Plan your parameters to make your templates easy to deploy. Use a consistent style, clear structure, and comments to make your Bicep code easy to understand, use, and modify. |
| [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. | | [Build your first Bicep deployment pipeline by using Azure Pipelines](/learn/modules/build-first-bicep-deployment-pipeline-using-azure-pipelines/) | Build a basic deployment pipeline for Bicep code. Use a service connection to securely identify your pipeline to Azure. Configure when the pipeline runs by using triggers. |
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/modules.md
output storageEndpoint object = stgModule.outputs.storageEndpoint
``` - The **_params_** property contains any parameters to pass to the module file. These parameters match the parameters defined in the Bicep file.
+Like resources, modules are deployed in parallel unless they depend on other modules or resource deployments. To learn more about dependencies, see [Set resource dependencies](resource-declaration.md#set-resource-dependencies).
+ To get an output value from a module, retrieve the property value with syntax like: `stgModule.outputs.storageEndpoint` where `stgModule` is the identifier of the module. You can conditionally deploy a module. Use the same **if** syntax as you would use when [conditionally deploying a resource](conditional-resource-deployment.md).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
This approach means you can safely share templates that meet your organization's
* To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/). * For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md).
-* For answers to common questions, see [Frequently asked questions about ARM templates](frequently-asked-questions.yml).
+* For answers to common questions, see [Frequently asked questions about ARM templates](/azure/purview/frequently-asked-questions.yml).
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/syntax.md
You can break a string into multiple lines. For example, see the `location` prop
* For details about the functions you can use from within a template, see [ARM template functions](template-functions.md). * To combine several templates during deployment, see [Using linked and nested templates when deploying Azure resources](linked-templates.md). * For recommendations about creating templates, see [ARM template best practices](./best-practices.md).
-* For answers to common questions, see [Frequently asked questions about ARM templates](frequently-asked-questions.yml).
+* For answers to common questions, see [Frequently asked questions about ARM templates](/azure/purview/frequently-asked-questions.yml).
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 05/13/2021 Last updated : 08/09/2021
The next example shows a list function that takes a parameter. In this case, the
`pickZones(providerNamespace, resourceType, location, [numberOfZones], [offset])`
-Determines whether a resource type supports zones for a region.
+Determines whether a resource type supports availability zones for the specified location or region. This function supports only zonal resources; zone-redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). To use the pickZones function with zone-redundant services, see the examples below.
### Parameters
When the `numberOfZones` parameter is set to 3, it returns:
] ```
-When the resource type or region doesn't support zones, an empty array is returned.
+When the resource type or region doesn't support zones, an empty array is returned. An empty array is also returned for zone-redundant services.
```json [ ] ```
+### Remarks
+
+There are two categories of Azure Availability Zone support: zonal and zone-redundant. The pickZones function can be used to return availability zone numbers for a zonal resource. For zone-redundant services (ZRS), the function returns an empty array. Zonal resources can typically be identified by a `zones` property on the resource header. Zone-redundant services identify and use availability zones differently per resource; consult the documentation for a specific service to determine its category of availability zone support. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md).
+
+To determine whether a given Azure region or location supports availability zones, call the pickZones function with a zonal resource type, for example `Microsoft.Storage/storageAccounts`. If the response is non-empty, the region supports availability zones.
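The selection behavior described above can be illustrated with a short sketch (a hypothetical Python model of the pickZones logic, not the actual Azure Resource Manager implementation; the zone table below is invented for illustration): given the zones a zonal resource type supports in a region, the function returns `numberOfZones` zone numbers starting at `offset`, and an empty list when the region or resource type has no zone support.

```python
# Hypothetical model of the pickZones selection logic (illustration only;
# the real function is evaluated by Azure Resource Manager at deployment time).

# Example zone-support table; real data comes from the resource provider.
ZONE_SUPPORT = {
    ("Microsoft.Storage/storageAccounts", "westus2"): ["1", "2", "3"],
    ("Microsoft.Storage/storageAccounts", "northcentralus"): [],  # no zones
}

def pick_zones(resource_type, location, number_of_zones=1, offset=0):
    zones = ZONE_SUPPORT.get((resource_type, location), [])
    if not zones:
        return []  # zone-redundant services and unsupported regions yield []
    return zones[offset:offset + number_of_zones]

print(pick_zones("Microsoft.Storage/storageAccounts", "westus2"))         # ['1']
print(pick_zones("Microsoft.Storage/storageAccounts", "westus2", 3))      # ['1', '2', '3']
print(pick_zones("Microsoft.Storage/storageAccounts", "northcentralus"))  # []
```

A non-empty result means the region supports zones for that resource type, which mirrors the `if(empty(pickZones(...)))` pattern used for zone-redundant services in the templates below.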
+ ### pickZones example The following template shows three results for using the pickZones function.
You can use the response from pickZones to determine whether to provide null for
}, ```
+The following example shows how to use the pickZones function to enable zone redundancy for Cosmos DB.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2021-04-15",
+ "name": "[variables('accountName_var')]",
+ "location": "[parameters('location')]",
+ "kind": "GlobalDocumentDB",
+ "properties": {
+ "consistencyPolicy": "[variables('consistencyPolicy')[parameters('defaultConsistencyLevel')]]",
+ "locations": [
+ {
+ "locationName": "[parameters('primaryRegion')]",
+ "failoverPriority": 0,
+          "isZoneRedundant": "[if(empty(pickZones('Microsoft.Storage', 'storageAccounts', parameters('primaryRegion'))), bool('false'), bool('true'))]"
+ },
+ {
+ "locationName": "[parameters('secondaryRegion')]",
+ "failoverPriority": 1,
+          "isZoneRedundant": "[if(empty(pickZones('Microsoft.Storage', 'storageAccounts', parameters('secondaryRegion'))), bool('false'), bool('true'))]"
+ }
+ ],
+ "databaseAccountOfferType": "Standard",
+ "enableAutomaticFailover": "[parameters('automaticFailover')]"
+ }
+ }
+]
+```
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
An auditing policy can be defined for a specific database or as a default [serve
- Audit logs are written to **Append Blobs** in an Azure Blob storage on your Azure subscription - Audit logs are in .xel format and can be opened by using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).-- To configure an immutable log store for the server or database-level audit events, follow the [instructions provided by Azure Storage](../../storage/blobs/storage-blob-immutability-policies-manage.md#enabling-allow-protected-append-blobs-writes). Make sure you have selected **Allow additional appends** when you configure the immutable blob storage.
+- To configure an immutable log store for the server or database-level audit events, follow the [instructions provided by Azure Storage](../../storage/blobs/immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes). Make sure you have selected **Allow additional appends** when you configure the immutable blob storage.
- You can write audit logs to an Azure Storage account behind a VNet or firewall. For specific instructions, see [Write audit to a storage account behind VNet and firewall](audit-write-storage-account-behind-vnet-firewall.md). - For details about the log format, hierarchy of the storage folder and naming conventions, see the [Blob Audit Log Format Reference](./audit-log-format.md). - Auditing on [Read-Only Replicas](read-scale-out.md) is automatically enabled. For further details about the hierarchy of the storage folders, naming conventions, and log format, see the [SQL Database Audit Log Format](audit-log-format.md).
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9. - [Auditing for SQL Managed Instance](../managed-instance/auditing-configure.md)-- [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
+- [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
This table provides a quick comparison for the change in terminology:
### SQL Managed Instance H1 2021 updates

- [Public Preview for Support 16 TB for SQL Managed Instance General Purpose](https://techcommunity.microsoft.com/t5/azure-sql/increased-storage-limit-to-16-tb-for-sql-managed-instance/ba-p/2421443) - support for allocation of up to 16 TB of space for SQL Managed Instance General Purpose (Public Preview).
- [Parallel backup for better performance in SQL Managed Instance General Purpose](https://techcommunity.microsoft.com/t5/azure-sql/parallel-backup-for-better-performance-in-sql-managed-instance/ba-p/2421762) - support for faster backups for SQL Managed Instance General Purpose.
- [Azure Active Directory only authentication for Azure SQL](https://techcommunity.microsoft.com/t5/azure-sql/azure-active-directory-only-authentication-for-azure-sql/ba-p/2417673) - Public Preview for Azure Active Directory only authentication on Azure SQL Managed Instance.
- [Use Resource Health to monitor health status of your Azure SQL Managed Instance](resource-health-to-troubleshoot-connectivity.md) - support for Resource Health monitoring on Azure SQL Managed Instance.
- [Service-aided subnet configuration for Azure SQL Managed Instance now makes use of service tags for user-defined routes](../managed-instance/connectivity-architecture-overview.md) - support for user-defined route (UDR) tables.
- [Migrate to Managed Instance with Log Replay Service](../managed-instance/log-replay-service-migrate.md) - allows migrating databases from SQL Server to SQL Managed Instance by using Log Replay Service (Public Preview).
- [Maintenance window](./maintenance-window.md) - the maintenance window feature allows you to configure a maintenance schedule; see the [Maintenance window announcement](https://techcommunity.microsoft.com/t5/azure-sql/maintenance-window-for-azure-sql-database-and-managed-instance/ba-p/2174835) (Public Preview).
- [Machine Learning Services on Azure SQL Managed Instance now generally available](https://azure.microsoft.com/en-gb/updates/machine-learning-services-on-azure-sql-managed-instance-now-generally-available/) - General availability for Machine Learning Services on Azure SQL Managed Instance.
- [Service Broker cross-instance message exchange for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/service-broker-message-exchange-for-azure-sql-managed-instance-in-public-preview/) - support for cross-instance message exchange.
- [Long-term backup retention for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/longterm-backup-retention-ltr-for-azure-sql-managed-instance-in-public-preview/) - support for long-term backup retention up to 10 years on Azure SQL Managed Instance.
- [Dynamic data masking granular permissions for Azure SQL Managed Instance](dynamic-data-masking-overview.md) - general availability for dynamic data masking granular permissions for Azure SQL Managed Instance.
- [Azure SQL Managed Instance auditing of Microsoft operations](https://azure.microsoft.com/en-gb/updates/azure-sql-auditing-of-microsoft-operations-is-now-generally-available/) - general availability for Azure SQL Managed Instance auditing of Microsoft operations.
- [Azure Monitor SQL insights for Azure SQL Managed Instance](https://azure.microsoft.com/en-gb/updates/azure-monitor-sql-insights-for-azure-sql-in-public-preview/) - Azure Monitor SQL insights for Azure SQL Managed Instance in public preview.
+### SQL Managed Instance H2 2020 updates
+
+- [Public preview: Auditing of Microsoft support operations in Azure SQL DB and Azure SQL MI](https://azure.microsoft.com/en-us/updates/auditing-of-microsoft-support-operations-in-azure-sql-db-and-azure-sql-mi/) - The auditing of Microsoft support operations capability enables you to audit Microsoft support operations on your servers and/or databases during a support request and write them to your audit logs destination (Public Preview).
+- [Distributed database transactions spanning multiple Azure SQL Managed Instances](https://azure.microsoft.com/en-us/updates/distributed-database-transactions-spanning-multiple-azure-sql-managed-instances/) - Distributed database transactions spanning multiple Azure SQL Managed Instances have been added to enable frictionless migration of existing applications, as well as development of modern multi-tenant applications relying on vertically or horizontally partitioned database architecture (Public Preview).
+- [Configurable Backup Storage Redundancy option for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/configurable-backup-storage-redundancy-option-for-azure-sql-managed-instance-2/) - Locally redundant storage (LRS) and zone-redundant storage (ZRS) options have been added to backup storage redundancy, providing more flexibility and choice.
+- [Backup storage cost savings for Azure SQL Database and Managed Instance](https://azure.microsoft.com/en-us/updates/backup-storage-cost-savings-for-azure-sql-database-and-managed-instance/) - Users can set the PITR backup retention period, and automated compression of backups for databases with transparent data encryption (TDE) is now up to 30 percent more efficient in backup storage space consumption.
+- [Azure AD authentication features for Azure SQL MI](https://azure.microsoft.com/en-us/updates/azure-ad-authentication-features-for-azure-sql-db-azure-synapse-analytics-and-azure-sql-managed-instance/) - These features help automate user creation using Azure AD applications and allow individual Azure AD guest users to be created in SQL Managed Instance (Public Preview).
+- [Global virtual network peering support for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/)
+- [Hosting catalog databases for all supported versions of SSRS in Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/hosting-catalog-databases-for-all-supported-versions-of-ssrs-in-azure-sql-managed-instance/) - Azure SQL Managed Instance can host catalog databases for all supported versions of SQL Server Reporting Services (SSRS).
+- [Major performance improvements for Azure SQL Database Managed Instances](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256)
+- [Enhanced management experience for Azure SQL Managed Instance](https://azure.microsoft.com/en-us/updates/enhanced-management-experience-for-azure-sql-managed-instance/)
+- [Machine Learning on Azure SQL Managed Instance in preview](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) - Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Public Preview).
+- [User-initiated failover for application fault resiliency in Azure SQL Managed Instance is now generally available](https://azure.microsoft.com/en-us/updates/userinitiated-failover-for-application-fault-resiliency-in-azure-sql-managed-instance-is-now-generally-available/) - User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls.
+
### SQL Managed Instance H2 2019 updates

- [Service-aided subnet configuration](https://azure.microsoft.com/updates/service-aided-subnet-configuration-for-managed-instance-in-azure-sql-database-available/) is a secure and convenient way to manage subnet configuration where you control data traffic while SQL Managed Instance ensures the uninterrupted flow of management traffic.
azure-sql Ledger Digest Management And Database Verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/ledger-digest-management-and-database-verification.md
The verification process and the integrity of the database depend on the integri
### Automatic generation and storage of database digests
-Azure SQL Database ledger integrates with the [immutable storage feature of Azure Blob Storage](../../storage/blobs/storage-blob-immutable-storage.md) and [Azure Confidential Ledger](../../confidential-ledger/index.yml). This integration provides secure storage services in Azure to help protect the database digests from potential tampering. This integration provides a simple and cost-effective way for users to automate digest management without having to worry about their availability and geographic replication.
+Azure SQL Database ledger integrates with the [immutable storage feature of Azure Blob Storage](../../storage/blobs/immutable-storage-overview.md) and [Azure Confidential Ledger](../../confidential-ledger/index.yml). This integration provides secure storage services in Azure to help protect the database digests from potential tampering. This integration provides a simple and cost-effective way for users to automate digest management without having to worry about their availability and geographic replication.
You can configure automatic generation and storage of database digests through the Azure portal, PowerShell, or the Azure CLI. When you configure automatic generation and storage, database digests are generated on a predefined interval of 30 seconds and uploaded to the selected storage service. If no transactions occur in the system in the 30-second interval, a database digest won't be generated and uploaded. This mechanism ensures that database digests are generated only when data has been updated in your database. :::image type="content" source="media/ledger/automatic-digest-management.png" alt-text="Screenshot that shows the selections for enabling digest storage."::: > [!IMPORTANT]
-> Configure an [immutability policy](../../storage/blobs/storage-blob-immutability-policies-manage.md) on your container after provisioning to ensure that database digests are protected from tampering.
+> Configure an [immutability policy](../../storage/blobs/immutable-policy-configure-version-scope.md) on your container after provisioning to ensure that database digests are protected from tampering.
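The interval behavior described above (a digest is produced only when data changed) can be sketched as follows. This is a hypothetical Python model for illustration; the actual mechanism runs inside Azure SQL Database and the class and method names here are invented:

```python
# Illustrative model: on each 30-second tick, emit a digest marker only if
# new transactions were committed since the previous tick.
class DigestScheduler:
    def __init__(self):
        self.last_tx_count = 0

    def tick(self, current_tx_count):
        """Called once per interval; returns None for idle intervals."""
        if current_tx_count == self.last_tx_count:
            return None  # no transactions in the interval -> no digest upload
        self.last_tx_count = current_tx_count
        return f"digest@tx{current_tx_count}"

s = DigestScheduler()
print(s.tick(5))  # digest@tx5
print(s.tick(5))  # None  (idle interval, nothing uploaded)
print(s.tick(8))  # digest@tx8
```

This mirrors the documented behavior: skipping idle intervals ensures digests are generated only when data has been updated in the database.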
### Manual generation and storage of database digests
Return codes for `sp_verify_database_ledger` and `sp_verify_database_ledger_from
- [Azure SQL Database ledger overview](ledger-overview.md) - [Updatable ledger tables](ledger-updatable-ledger-tables.md) - [Append-only ledger tables](ledger-append-only-ledger-tables.md) -- [Database ledger](ledger-database-ledger.md)
+- [Database ledger](ledger-database-ledger.md)
azure-sql Ledger Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/ledger-limits.md
This article provides an overview of the limitations of ledger tables used with
| Sparse column sets | Sparse column sets aren't supported. | | Ledger truncation | Deleting older data in [append-only ledger tables](ledger-append-only-ledger-tables.md) or the history table of [updatable ledger tables](ledger-updatable-ledger-tables.md) isn't supported. | | Converting existing tables to ledger tables | Existing tables in a database that aren't ledger-enabled can't be converted to ledger tables. |
-|Locally redundant storage (LRS) support for [automated digest management](ledger-digest-management-and-database-verification.md) | Automated digest management with ledger tables by using [Azure Storage immutable blobs](../../storage/blobs/storage-blob-immutable-storage.md) doesn't offer the ability for users to use [LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) accounts.|
+|Locally redundant storage (LRS) support for [automated digest management](ledger-digest-management-and-database-verification.md) | Automated digest management with ledger tables by using [Azure Storage immutable blobs](../../storage/blobs/immutable-storage-overview.md) doesn't offer the ability for users to use [LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) accounts.|
## Remarks
This article provides an overview of the limitations of ledger tables used with
- [Updatable ledger tables](ledger-updatable-ledger-tables.md) - [Append-only ledger tables](ledger-append-only-ledger-tables.md) - [Database ledger](ledger-database-ledger.md)-- [Digest management and database verification](ledger-digest-management-and-database-verification.md)
+- [Digest management and database verification](ledger-digest-management-and-database-verification.md)
azure-sql Ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/ledger-overview.md
Typical patterns for solving this problem involve replicating data from the bloc
Each transaction that the database receives is cryptographically hashed (SHA-256). The hash function uses the value of the transaction, along with the hash of the previous transaction, as input to the hash function. (The value includes hashes of the rows contained in the transaction.) The function cryptographically links all transactions together, like a blockchain.
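The chaining described above can be sketched in a few lines. This is a simplified illustration of the concept, not the actual SQL Database ledger format: each transaction hash is SHA-256 over the previous hash plus the hashes of the transaction's rows, so altering any earlier transaction changes every later hash and the final digest.

```python
import hashlib

def hash_transaction(prev_hash, rows):
    """Chain one transaction: SHA-256 over the previous hash and row hashes."""
    h = hashlib.sha256()
    h.update(prev_hash)
    for row in rows:
        h.update(hashlib.sha256(row).digest())
    return h.digest()

def chain(transactions):
    """Fold a list of transactions (each a list of row bytes) into one digest."""
    current = b"\x00" * 32  # illustrative genesis value
    for rows in transactions:
        current = hash_transaction(current, rows)
    return current

ledger = [[b"INSERT row1"], [b"UPDATE row1 -> row1'"]]
digest = chain(ledger)

# Tampering with an earlier transaction changes the final digest, which is
# how comparison against an externally stored digest detects modification.
tampered = [[b"INSERT rowX"], [b"UPDATE row1 -> row1'"]]
assert chain(tampered) != digest
```

Verification then amounts to recomputing the chain from the database and comparing it with the digest stored in tamper-proof storage, which is the role the externally stored database digests play.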
-Cryptographically hashed ([database digests](#database-digests)) represent the state of the database. They're periodically generated and stored outside Azure SQL Database in a tamper-proof storage location. An example of a storage location is the [immutable storage feature of Azure Blob Storage](../../storage/blobs/storage-blob-immutable-storage.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml). Database digests are later used to verify the integrity of the database by comparing the value of the hash in the digest against the calculated hashes in database.
+Cryptographically hashed ([database digests](#database-digests)) represent the state of the database. They're periodically generated and stored outside Azure SQL Database in a tamper-proof storage location. An example of a storage location is the [immutable storage feature of Azure Blob Storage](../../storage/blobs/immutable-storage-overview.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml). Database digests are later used to verify the integrity of the database by comparing the value of the hash in the digest against the calculated hashes in database.
Ledger functionality is introduced to tables in Azure SQL Database in two forms:
When a block is formed, its associated database digest is published and stored o
2. Generate the hashes that represent the database with those changes. 3. Modify the digests to represent the updated hash of the transactions in the block.
-Ledger provides the ability to automatically generate and store the database digests in [immutable storage](../../storage/blobs/storage-blob-immutable-storage.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml), to prevent tampering. Alternatively, users can manually generate database digests and store them in the location of their choice. Database digests are used for later verifying that the data stored in ledger tables has not been tampered with.
+Ledger provides the ability to automatically generate and store the database digests in [immutable storage](../../storage/blobs/immutable-storage-overview.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml), to prevent tampering. Alternatively, users can manually generate database digests and store them in the location of their choice. Database digests are used for later verifying that the data stored in ledger tables has not been tampered with.
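The hash chain and digest check described above can be sketched in a few lines. This is a minimal illustration using Python's `hashlib`; the actual ledger implementation inside Azure SQL Database is internal to the engine and differs in detail:

```python
import hashlib

def chain_hash(previous_hash: str, transaction_value: str) -> str:
    """Hash the transaction value together with the previous hash,
    cryptographically linking transactions like a blockchain."""
    return hashlib.sha256((previous_hash + transaction_value).encode()).hexdigest()

# Build a small chain of transactions (values are illustrative).
transactions = ["INSERT row 1", "UPDATE row 1", "INSERT row 2"]
h = "0" * 64  # genesis value for the first transaction
for tx in transactions:
    h = chain_hash(h, tx)

# The final hash plays the role of a database digest, stored in
# tamper-proof storage such as immutable blob storage.
digest = h

# Later verification: recompute the chain and compare with the stored digest.
h2 = "0" * 64
for tx in transactions:
    h2 = chain_hash(h2, tx)
assert h2 == digest  # unchanged data verifies successfully

# Tampering with any transaction changes every subsequent hash.
tampered = ["INSERT row 1", "UPDATE row 1 (tampered)", "INSERT row 2"]
h3 = "0" * 64
for tx in tampered:
    h3 = chain_hash(h3, tx)
assert h3 != digest  # tampering is detected
```

Because each hash depends on the previous one, a single altered transaction invalidates the stored digest, which is why the digest must live outside the database in tamper-proof storage.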
### Ledger verification
Ideally, users should run ledger verification only when the organization that's
- [Quickstart: Create a SQL database with ledger enabled](ledger-create-a-single-database-with-ledger-enabled.md) - [Access the digests stored in Azure Confidential Ledger](ledger-how-to-access-acl-digest.md)-- [Verify a ledger table to detect tampering](ledger-verify-database.md)
+- [Verify a ledger table to detect tampering](ledger-verify-database.md)
azure-sql Move Resources Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/move-resources-across-regions.md
This article provides a general workflow for moving resources to a different reg
> This article applies to migrations within the Azure public cloud or within the same sovereign cloud. > [!NOTE]
-> To move Azure SQL databases and elastic pools to a different Azure region, you can also use Azure Resource Mover (in preview). Refer [this tutorial](https://docs.microsoft.com/azure/resource-mover/tutorial-move-region-sql) for detailed steps to do the same.
+> To move Azure SQL databases and elastic pools to a different Azure region, you can also use Azure Resource Mover (in preview). Refer to [this tutorial](../../resource-mover/tutorial-move-region-sql.md) for detailed steps.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
This article provides a general workflow for moving resources to a different reg
> A server or managed instance in one region can now be connected to a key vault in any other region. - As a best practice to ensure the target server has access to older encryption keys (required for restoring database backups), run the [Get-AzSqlServerKeyVaultKey](/powershell/module/az.sql/get-azsqlserverkeyvaultkey) cmdlet on the source server or [Get-AzSqlInstanceKeyVaultKey](/powershell/module/az.sql/get-azsqlinstancekeyvaultkey) cmdlet on the source managed instance to return the list of available keys and add those keys to the target server. - For more information and best practices on configuring customer-managed TDE on the target server, see [Azure SQL transparent data encryption with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
- - To move the key vault to the new region, see [Move an Azure key vault across regions](https://docs.microsoft.com/azure/key-vault/general/move-region)
+ - To move the key vault to the new region, see [Move an Azure key vault across regions](../../key-vault/general/move-region.md)
1. If database-level audit is enabled, disable it and enable server-level auditing instead. After failover, database-level auditing will require the cross-region traffic, which isn't desired or possible after the move. 1. For server-level audits, ensure that: - The storage container, Log Analytics, or event hub with the existing audit logs is moved to the target region.
Once the move finishes, remove the resources in the source region to avoid unnec
## Next steps
-[Manage](manage-data-after-migrating-to-database.md) your database after it has been migrated.
+[Manage](manage-data-after-migrating-to-database.md) your database after it has been migrated.
azure-sql Resource Health To Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-health-to-troubleshoot-connectivity.md
Reconfigurations are considered transient conditions and are expected from time
- [Troubleshoot, diagnose, and prevent SQL connection errors](troubleshoot-common-connectivity-issues.md). - Learn more about [configuring Resource Health alerts](../../service-health/resource-health-alert-arm-template-guide.md). - Get an overview of [Resource Health](../../service-health/resource-health-overview.md).-- Review [Resource Health FAQ](../../service-health/resource-health-faq.md).
+- Review [Resource Health FAQ](../../service-health/resource-health-faq.yml).
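Because reconfigurations surface as transient connection errors, client applications should retry with backoff rather than fail immediately. The sketch below is a generic retry helper; the `connect` stub is a hypothetical stand-in for a real driver call, and error 40613 is one of the transient error numbers Azure SQL Database can return:

```python
import time

def retry_transient(operation, is_transient, max_attempts=5, base_delay=0.1):
    """Retry an operation with exponential backoff when a transient
    error (such as a reconfiguration-related connection drop) occurs."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts or not is_transient(exc):
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated connection that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("error 40613: database is not currently available")
    return "connected"

result = retry_transient(connect, lambda e: isinstance(e, ConnectionError))
print(result)
```

In production code, the `is_transient` predicate would inspect the driver's error number against the documented list of transient errors instead of the exception type used here.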
azure-sql Security Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-best-practice.md
Restrict access to the storage account to support Separation of Duties and to se
**How to implement**: - When saving Audit logs to Azure Storage, make sure that access to the Storage Account is restricted to the minimal security principles. Control who has access to the storage account.-- For more information, see [Authorizing access to Azure Storage](../../storage/common/storage-auth.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+- For more information, see [Authorizing access to Azure Storage](../../storage/common/authorize-data-access.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
**Best practices**:
Most security standards address data availability in terms of operational contin
## Next steps -- See [An overview of Azure SQL Database security capabilities](security-overview.md)
+- See [An overview of Azure SQL Database security capabilities](security-overview.md)
azure-sql Auditing Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/auditing-configure.md
The following section describes the configuration of auditing on your managed in
![Create blob container configuration](./media/auditing-configure/3_create_container_config.png) > [!IMPORTANT]
- > Customers wishing to configure an immutable log store for their server- or database-level audit events should follow the [instructions provided by Azure Storage](../../storage/blobs/storage-blob-immutability-policies-manage.md#enabling-allow-protected-append-blobs-writes). (Please ensure you have selected **Allow additional appends** when you configure the immutable blob storage.)
+ > Customers wishing to configure an immutable log store for their server- or database-level audit events should follow the [instructions provided by Azure Storage](../../storage/blobs/immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes). (Please ensure you have selected **Allow additional appends** when you configure the immutable blob storage.)
3. After you create the container for the audit logs, there are two ways to configure it as the target for the audit logs: [using T-SQL](#blobtsql) or [using the SQL Server Management Studio (SSMS) UI](#blobssms):
The key differences in the `CREATE AUDIT` syntax for auditing to Azure Blob stor
- For a full list of audit log consumption methods, refer to [Get started with Azure SQL Database auditing](../../azure-sql/database/auditing-overview.md). - For more information about Azure programs that support standards compliance, see the [Azure Trust Center](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942), where you can find the most current list of compliance certifications.
-<!--Image references-->
+<!--Image references-->
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Last updated 08/25/2020
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for Oracle (SSMA for Oracle).
+This guide teaches you [to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/sql-server/) Assistant for Oracle (SSMA for Oracle).
For other migration guides, see [Azure Database Migration Guides](/data-migration).
Before you begin migrating your Oracle schema to SQL Database:
- Download [SSMA for Oracle](https://www.microsoft.com/download/details.aspx?id=54258). - Have a target [SQL Database](../../database/single-database-create-quickstart.md) instance. - Obtain the [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
-
+ ## Pre-migration

After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration). This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
To create an assessment:
![Screenshot that shows selecting Oracle schema.](./media/oracle-to-sql-database-guide/select-schema.png)
-1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Alternatively, you can select a database and then select the **Create Report** tab.
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Or, you can select a database and then select the **Create Report** tab.
![Screenshot that shows Create Report.](./media/oracle-to-sql-database-guide/create-report.png)
To convert the schema:
![Screenshot that shows Connect to Azure SQL Database.](./media/oracle-to-sql-database-guide/connect-to-sql-database.png)
-1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Alternatively, you can select your schema and then select the **Convert Schema** tab.
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Or, you can select your schema and then select the **Convert Schema** tab.
![Screenshot that shows Convert Schema.](./media/oracle-to-sql-database-guide/convert-schema.png)
To publish your schema and migrate your data:
![Screenshot that shows Synchronize with the Database review.](./media/oracle-to-sql-database-guide/synchronize-with-database-review.png)
-1. Migrate the data by right-clicking the database or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Alternatively, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check boxes next to the tables. To omit data from individual tables, clear the check boxes.
+1. Migrate the data by right-clicking the database or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Or, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the checkboxes next to the tables. To omit data from individual tables, clear the checkboxes.
![Screenshot that shows Migrate Data.](./media/oracle-to-sql-database-guide/migrate-data.png)
To publish your schema and migrate your data:
![Screenshot that shows validation in SQL Server Management Studio.](./media/oracle-to-sql-database-guide/validate-data.png)
-Alternatively, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
+Or, you can use SQL Server Integration Services to perform the migration. To learn more, see:
- [Getting started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services) - [SQL Server Integration Services for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx)
The test approach to database migration consists of the following activities:
1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results. 1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
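A first-pass validation test can compare row counts and an order-independent checksum between source and target. This is a hedged sketch that uses SQLite in-memory databases as stand-ins for the real Oracle source and SQL Server target; the `region` table is illustrative, and production validation would use dedicated data-comparison tooling:

```python
import sqlite3

def table_fingerprint(conn, table):
    """Return (row_count, order-independent checksum) for a table,
    a cheap first-pass validation between source and target."""
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()  # illustrative only
    checksum = 0
    for row in rows:
        checksum ^= hash(tuple(row)) & 0xFFFFFFFF  # XOR is order-independent
    return len(rows), checksum

# Stand-ins for the migrated source and target databases.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE region (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO region VALUES (?, ?)",
                   [(1, "Europe"), (2, "Americas"), (3, "Asia")])

# Identical data on both sides yields matching fingerprints.
assert table_fingerprint(source, "region") == table_fingerprint(target, "region")
print("validation passed")
```

A matching fingerprint doesn't prove the data is identical (checksums can collide), but a mismatch reliably flags a table that needs a full row-by-row comparison.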
+### Validate migrated objects
+
+Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database objects. The SSMA Tester verifies that the converted objects behave in the same way as the source objects.
+
+#### Create test case
+
+1. Open SSMA for Oracle, select **Tester**, and then select **New Test Case**.
+   ![Screenshot that shows creating a new test case.](./media/oracle-to-sql-database-guide/ssma-tester-new.png)
+
+1. Provide the following information for the new test case:
+
+ **Name:** Enter the name to identify the test case.
+
+   **Creation date:** Today's date, set automatically.
+
+   **Last Modified date:** Filled in automatically; shouldn't be changed.
+
+ **Description:** Enter any additional information to identify the purpose of the test case.
+
+   ![Screenshot that shows steps to initialize a test case.](./media/oracle-to-sql-database-guide/tester-init-test-case.png)
+
+1. Select the objects that are part of the test case from the Oracle object tree on the left side.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-select-configure-objects.png" alt-text="Screenshot that shows step to select and configure object.":::
+
+   In this example, stored procedure `ADD_REGION` and table `REGION` are selected.
+
+   To learn more, see [Selecting and configuring objects to test](/sql/ssma/oracle/selecting-and-configuring-objects-to-test-oracletosql).
+
+1. Next, select the tables, foreign keys, and other dependent objects from the Oracle object tree in the left window.
+
+   :::image type="content" source="./media/oracle-to-sql-database-guide/tester-select-configure-affected.png" alt-text="Screenshot that shows step to select and configure affected object.":::
+
+   To learn more, see [Selecting and configuring affected objects](/sql/ssma/oracle/selecting-and-configuring-affected-objects-oracletosql).
+
+1. Review the evaluation sequence of objects. Change the order by clicking the buttons in the grid.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/test-call-ordering.png" alt-text="Screenshot that shows step to sequence test object execution.":::
+
+1. Finalize the test case by reviewing the information provided in the previous steps. Configure the test execution options based on the test scenario.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-finalize-case.png" alt-text="Screenshot that shows step to finalize object.":::
+
+   For more information on test case settings, see [Finishing test case preparation](/sql/ssma/oracle/finishing-test-case-preparation-oracletosql).
+
+1. Select **Finish** to create the test case.
+
+   :::image type="content" source="./media/oracle-to-sql-database-guide/tester-test-repo.png" alt-text="Screenshot that shows the test case repository.":::
+
+#### Run test case
+
+When SSMA Tester runs a test case, the test engine executes the objects selected for testing and generates a verification report.
+
+1. Select the test case from the test repository, and then select **Run**.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-repo-run.png" alt-text="Screenshot that shows to review test repo.":::
+
+1. Review the launch test case settings, and then select **Run**.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-run-test-case.png" alt-text="Screenshot that shows step to run test case":::
+
+1. Next, provide the Oracle source credentials and select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-oracle-connect.png" alt-text="Screenshot that shows step to connect to oracle source":::
+
+1. Provide the target SQL Server credentials and select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-sql-connect.png" alt-text="Screenshot that shows step to connect to sql target.":::
+
+   On success, the test case moves to the initialization stage.
+
+1. A real-time progress bar shows the execution status of the test run.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-run-status.png" alt-text="Screenshot that shows tester test progress.":::
+
+1. Review the report after the test is completed. The report provides statistics, any errors encountered during the test run, and a detailed report.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-test-result.png" alt-text="Screenshot that shows a sample tester test report":::
+
+1. Select **Details** to get more information.
+
+ Example of positive data validation.
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-test-success.png" alt-text="Screenshot that shows a sample tester success report.":::
+
+ Example of failed data validation.
+
+ :::image type="content" source="./media/oracle-to-sql-database-guide/tester-test-failed.png" alt-text="Screenshot that shows tester failure report.":::
+ ### Optimize

The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
Last updated 11/06/2020
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
-In this guide, you learn how to migrate your Oracle schemas to Azure SQL Managed Instance by using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
+This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance by using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
For other migration guides, see [Azure Database Migration Guides](/data-migration).
To create an assessment:
![Screenshot that shows selecting Oracle schema.](./media/oracle-to-managed-instance-guide/select-schema.png)
-1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Alternatively, you can select a database and then select the **Create Report** tab.
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Or, you can select a database and then select the **Create Report** tab.
![Screenshot that shows Create Report.](./media/oracle-to-managed-instance-guide/create-report.png)
To convert the schema:
![Screenshot that shows Connect to Azure SQL Managed Instance.](./media/oracle-to-managed-instance-guide/connect-to-sql-managed-instance.png)
-1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Alternatively, you can select your schema and then select the **Convert Schema** tab.
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Or, you can select your schema and then select the **Convert Schema** tab.
![Screenshot that shows Convert Schema.](./media/oracle-to-managed-instance-guide/convert-schema.png)
To convert the schema:
After you've completed assessing your databases and addressing any discrepancies, the next step is to run the migration process. Migration involves two steps: publishing the schema and migrating the data. To publish your schema and migrate your data: 1. Publish the schema by right-clicking the database from the **Databases** node in **Azure SQL Managed Instance Metadata Explorer** and selecting **Synchronize with Database**. ![Screenshot that shows Synchronize with Database.](./media/oracle-to-managed-instance-guide/synchronize-with-database.png)
To publish your schema and migrate your data:
![Screenshot that shows Synchronize with the Database review.](./media/oracle-to-managed-instance-guide/synchronize-with-database-review.png)
-1. Migrate the data by right-clicking the schema or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Alternatively, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check boxes next to the tables. To omit data from individual tables, clear the check boxes.
+1. Migrate the data by right-clicking the schema or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Or, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the checkboxes next to the tables. To omit data from individual tables, clear the checkboxes.
![Screenshot that shows Migrate Data.](./media/oracle-to-managed-instance-guide/migrate-data.png)
To publish your schema and migrate your data:
![Screenshot that shows validation in SSMA for Oracle.](./media/oracle-to-managed-instance-guide/validate-data.png)
-Alternatively, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
+Or, you can use SQL Server Integration Services to perform the migration. To learn more, see:
- [Getting started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services) - [SQL Server Integration Services for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx)
The test approach to database migration consists of the following activities:
3. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results. 4. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
+### Validate migrated objects
+
+Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database objects. The SSMA Tester verifies that the converted objects behave in the same way as the source objects.
+
+#### Create test case
+
+1. Open SSMA for Oracle, select **Tester**, and then select **New Test Case**.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/ssma-tester-new.png" alt-text="Screenshot that shows new test case.":::
+
+1. In the Test Case wizard, provide the following information:
+
+ **Name:** Enter the name to identify the test case.
+
+   **Creation date:** Today's date, set automatically.
+
+   **Last Modified date:** Filled in automatically; shouldn't be changed.
+
+ **Description:** Enter any additional information to identify the purpose of the test case.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-init-test-case.png" alt-text="Screenshot that shows step to initialize a test case.":::
+
+1. Select the objects that are part of the test case from the Oracle object tree on the left side.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-select-configure-objects.png" alt-text="Screenshot that shows step to select and configure object.":::
+
+   In this example, stored procedure `ADD_REGION` and table `REGION` are selected.
+
+   To learn more, see [Selecting and configuring objects to test](/sql/ssma/oracle/selecting-and-configuring-objects-to-test-oracletosql).
+
+1. Next, select the tables, foreign keys and other dependent objects from the Oracle object tree in the left window.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-select-configure-affected.png" alt-text="Screenshot that shows step to select and configure affected object.":::
+
+   To learn more, see [Selecting and configuring affected objects](/sql/ssma/oracle/selecting-and-configuring-affected-objects-oracletosql).
+
+1. Review the evaluation sequence of objects. Change the order by clicking the buttons in the grid.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/test-call-ordering.png" alt-text="Screenshot that shows step to sequence test object execution.":::
+
+1. Finalize the test case by reviewing the information provided in the previous steps. Configure the test execution options based on the test scenario.
+
+   :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-finalize-case.png" alt-text="Screenshot that shows step to finalize object.":::
+
+   For more information on test case settings, see [Finishing test case preparation](/sql/ssma/oracle/finishing-test-case-preparation-oracletosql).
+
+1. Select **Finish** to create the test case.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-test-repo.png" alt-text="Screenshot that shows step to test repo.":::
+
+#### Run test case
+
+When SSMA Tester runs a test case, the test engine executes the objects selected for testing and generates a verification report.
+
+1. Select the test case from the test repository, and then select **Run**.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-repo-run.png" alt-text="Screenshot that shows to review test repo.":::
+
+1. Review the launch test case settings, and then select **Run**.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-run-test-case.png" alt-text="Screenshot that shows step to launch test case.":::
+
+1. Next, provide the Oracle source credentials and select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-oracle-connect.png" alt-text="Screenshot that shows step to connect to oracle source.":::
+
+1. Provide the target SQL Server credentials and select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-sqlmi-connect.png" alt-text="Screenshot that shows step to connect to sql target.":::
+
+   On success, the test case moves to the initialization stage.
+
+1. A real-time progress bar shows the execution status of the test run.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-run-status.png" alt-text="Screenshot that shows tester test progress.":::
+
+1. Review the report after the test is completed. The report provides statistics, any errors encountered during the test run, and a detailed report.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-test-result.png" alt-text="Screenshot that shows a sample tester test report":::
+
+1. Select **Details** to get more information.
+
+ Example of positive data validation.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-test-success.png" alt-text="Screenshot that shows a sample tester success report.":::
+
+ Example of failed data validation.
+
+ :::image type="content" source="./media/oracle-to-managed-instance-guide/tester-test-failed.png" alt-text="Screenshot that shows tester failure report.":::
+ ### Optimize

The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
Last updated 11/06/2020
# Migration guide: Oracle to SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your Oracle schemas to SQL Server on Azure Virtual Machines by using SQL Server Migration Assistant for Oracle.
+This guide teaches you to migrate your Oracle schemas to SQL Server on Azure Virtual Machines by using SQL Server Migration Assistant for Oracle.
-For other migration guides, see [Database Migration](/data-migration).
+For other migration guides, see [Database Migration](/data-migration).
-## Prerequisites
+## Prerequisites
To migrate your Oracle schema to SQL Server on Azure Virtual Machines, you need:
To migrate your Oracle schema to SQL Server on Azure Virtual Machines, you need:
- [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/download/details.aspx?id=54258). - A target [SQL Server VM](../../virtual-machines/windows/sql-vm-create-portal-quickstart.md). - The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and the [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).-- Connectivity and sufficient permissions to access the source and the target.
+- Connectivity and sufficient permissions to access the source and the target.
## Pre-migration To prepare to migrate to the cloud, verify that your source environment is supported and that you've addressed any prerequisites. Doing so will help to ensure an efficient and successful migration.
-This part of the process involves:
+This part of the process involves:
- Conducting an inventory of the databases that you need to migrate.-- Assessing those databases for potential migration problems or blockers. -- Resolving any problems that you uncover.
+- Assessing those databases for potential migration problems or blockers.
+- Resolving any problems that you uncover.
### Discover Use [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883) to identify existing data sources and details about the features your business is using. Doing so will give you a better understanding of the migration and help you plan for it. This process involves scanning the network to identify your organization's Oracle instances and the versions and features you're using.
-To use MAP Toolkit to do an inventory scan, follow these steps:
+To use MAP Toolkit to do an inventory scan, follow these steps:
1. Open [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883).
To use MAP Toolkit to do an inventory scan, follow these steps:
![Screenshot that shows the Create/Select database option.](./media/oracle-to-sql-on-azure-vm-guide/select-database.png)
-1. Select **Create an inventory database**. Enter a name for the new inventory database you're creating, provide a brief description, and then select **OK**:
+1. Select **Create an inventory database**. Enter a name for the new inventory database and a brief description, and then select **OK**:
:::image type="content" source="media/oracle-to-sql-on-azure-vm-guide/create-inventory-database.png" alt-text="Screenshot that shows the interface for creating an inventory database.":::
To use MAP Toolkit to do an inventory scan, follow these steps:
![Screenshot that shows the Inventory Scenarios page of the Inventory and Assessment Wizard.](./media/oracle-to-sql-on-azure-vm-guide/choose-oracle.png)
-1. Select the computer search option that best suits your business needs and environment, and then select **Next**:
+1. Select the computer search option that best suits your business needs and environment, and then select **Next**:
![Screenshot that shows the Discovery Methods page of the Inventory and Assessment Wizard.](./media/oracle-to-sql-on-azure-vm-guide/choose-search-option.png)
To use MAP Toolkit to do an inventory scan, follow these steps:
![Screenshot that shows the Summary page of the Inventory and Assessment Wizard.](./media/oracle-to-sql-on-azure-vm-guide/review-summary.png)
-1. After the scan finishes, view the **Data Collection** summary. The scan might take a few minutes, depending on the number of databases. Select **Close** when you're done:
+1. After the scan finishes, view the **Data Collection** summary. The scan might take a few minutes, depending on the number of databases. Select **Close** when you're done:
![Screenshot that shows the Data Collection summary.](./media/oracle-to-sql-on-azure-vm-guide/collection-summary-report.png)
After you identify the data sources, use [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/download/details.aspx?id=54258) to assess the Oracle instances migrating to the SQL Server VM. The assistant will help you understand the gaps between the source and destination databases. You can review database objects and data, assess databases for migration, migrate database objects to SQL Server, and then migrate data to SQL Server.
-To create an assessment, follow these steps:
+To create an assessment, follow these steps:
-1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
-1. On the **File** menu, select **New Project**.
-1. Provide a project name and a location for your project, and then select a SQL Server migration target from the list. Select **OK**:
+1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
+1. On the **File** menu, select **New Project**.
+1. Provide a project name and a location for your project, and then select a SQL Server migration target from the list. Select **OK**:
![Screenshot that shows the New Project dialog box.](./media/oracle-to-sql-on-azure-vm-guide/new-project.png)
![Screenshot that shows the Connect to Oracle dialog box.](./media/oracle-to-sql-on-azure-vm-guide/connect-to-oracle.png)
- Select the Oracle schemas that you want to migrate:
+ Select the Oracle schemas that you want to migrate:
![Screenshot that shows the list of Oracle schemas that can be migrated.](./media/oracle-to-sql-on-azure-vm-guide/select-schema.png)
-1. In **Oracle Metadata Explorer**, right-click the Oracle schema that you want to migrate, and then select **Create Report**. Doing so will generate an HTML report. Alternatively, you can select the database and then select **Create report** in the top menu.
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema that you want to migrate, and then select **Create Report**. Doing so will generate an HTML report. Or, you can select the database and then select **Create report** in the top menu.
   ![Screenshot that shows how to create a report.](./media/oracle-to-sql-on-azure-vm-guide/create-report.png)

1. Review the HTML report for conversion statistics, errors, and warnings. Analyze it to understand conversion problems and resolutions.
- You can also open the report in Excel to get an inventory of Oracle objects and the effort required to complete schema conversions. The default location for the report is the report folder in SSMAProjects.
+ You can also open the report in Excel to get an inventory of Oracle objects and the effort required to complete schema conversions. The default location for the report is the report folder in SSMAProjects.
For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2016_11_12T02_47_55\`
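Because SSMA names each report folder with a sortable timestamp, the newest report can be located with a short script. A minimal sketch (the helper and the demo folder names are invented for illustration; only the `SSMAProjects\...\report` layout comes from this guide):

```python
import os
import tempfile
from pathlib import Path

def newest_report(project_dir):
    """Return the most recent report_* folder; SSMA's timestamped
    folder names sort lexicographically in chronological order."""
    reports = sorted(Path(project_dir, "report").glob("report_*"))
    return reports[-1] if reports else None

# Demo against a throwaway directory that mimics the SSMAProjects layout.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "report", "report_2016_11_12T02_47_55"))
os.makedirs(os.path.join(demo, "report", "report_2017_01_02T10_15_00"))
latest = newest_report(demo)
print(latest.name)
```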
### Validate data types
-Validate the default data type mappings and change them based on requirements, if necessary. To do so, follow these steps:
+Validate the default data type mappings and change them based on requirements, if necessary. To do so, follow these steps:
-1. On the **Tools** menu, select **Project Settings**.
-1. Select the **Type Mappings** tab.
+1. On the **Tools** menu, select **Project Settings**.
+1. Select the **Type Mappings** tab.
![Screenshot that shows the Type Mappings tab.](./media/oracle-to-sql-on-azure-vm-guide/type-mappings.png)
-1. You can change the type mapping for each table by selecting the table in **Oracle Metadata Explorer**.
+1. You can change the type mapping for each table by selecting the table in **Oracle Metadata Explorer**.
### Convert the schema
-To convert the schema, follow these steps:
+To convert the schema, follow these steps:
1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and select **Add statement**.
-1. Select **Connect to SQL Server** in the top menu.
- 1. Enter connection details for your SQL Server on Azure VM.
- 1. Select your target database from the list, or provide a new name. If you provide a new name, a database will be created on the target server.
- 1. Provide authentication details.
- 1. Select **Connect**.
+1. Select **Connect to SQL Server** in the top menu.
+ 1. Enter connection details for your SQL Server on Azure VM.
+ 1. Select your target database from the list, or provide a new name. If you provide a new name, a database will be created on the target server.
+ 1. Provide authentication details.
+ 1. Select **Connect**.
![Screenshot that shows how to connect to SQL Server.](./media/oracle-to-sql-on-azure-vm-guide/connect-to-sql-vm.png)
-1. Right-click the Oracle schema in **Oracle Metadata Explorer** and select **Convert Schema**. Alternatively, you can select **Convert schema** in the top menu:
+1. Right-click the Oracle schema in **Oracle Metadata Explorer** and select **Convert Schema**. Or, you can select **Convert schema** in the top menu:
![Screenshot that shows how to convert the schema.](./media/oracle-to-sql-on-azure-vm-guide/convert-schema.png)
![Screenshot that shows a comparison of two schemas.](./media/oracle-to-sql-on-azure-vm-guide/table-mapping.png)
- Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
+ Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
   ![Screenshot that shows Transact-SQL, stored procedures, and a warning.](./media/oracle-to-sql-on-azure-vm-guide/procedure-comparison.png)

   You can save the project locally for an offline schema remediation exercise. To do so, select **Save Project** on the **File** menu. Saving the project locally lets you evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Server.
-1. Select **Review results** in the **Output** pane, and then review errors in the **Error list** pane.
+1. Select **Review results** in the **Output** pane, and then review errors in the **Error list** pane.
1. Save the project locally for an offline schema remediation exercise. Select **Save Project** on the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Server on Azure Virtual Machines.

## Migrate
-After you have the necessary prerequisites in place and have completed the tasks associated with the pre-migration stage, you're ready to start the schema and data migration. Migration involves two steps: publishing the schema and migrating the data.
+After you have the necessary prerequisites in place and have completed the tasks associated with the pre-migration stage, you're ready to start the schema and data migration. Migration involves two steps: publishing the schema and migrating the data.
-To publish your schema and migrate the data, follow these steps:
+To publish your schema and migrate the data, follow these steps:
-1. Publish the schema: right-click the database in **SQL Server Metadata Explorer** and select **Synchronize with Database**. Doing so publishes the Oracle schema to SQL Server on Azure Virtual Machines.
+1. Publish the schema: right-click the database in **SQL Server Metadata Explorer** and select **Synchronize with Database**. Doing so publishes the Oracle schema to SQL Server on Azure Virtual Machines.
![Screenshot that shows the Synchronize with Database command.](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database.png)
-1. Migrate the data: right-click the database or object that you want to migrate in **Oracle Metadata Explorer** and select **Migrate Data**. Alternatively, you can select **Migrate Data** in the top menu.
-
- To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear appropriate the check boxes.
+1. Migrate the data: right-click the database or object that you want to migrate in **Oracle Metadata Explorer** and select **Migrate Data**. Or, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the checkboxes next to the tables. To omit data from individual tables, clear the checkboxes.
![Screenshot that shows the Migrate Data command.](./media/oracle-to-sql-on-azure-vm-guide/migrate-data.png)
![Screenshot that shows a SQL Server instance in SSMA.](./media/oracle-to-sql-on-azure-vm-guide/validate-in-ssms.png)
-Instead of using SSMA, you could use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see:
+Instead of using SSMA, you could use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see:
- The article [SQL Server Integration Services](/sql/integration-services/sql-server-integration-services).
- The white paper [SSIS for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx).
-## Post-migration
+## Post-migration
After you complete the migration stage, you need to complete a series of post-migration tasks to ensure that everything is running as smoothly and efficiently as possible.
After the data is migrated to the target environment, all the applications that previously consumed the source need to start consuming the target. Making those changes might require changes to the applications.
-[Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code. It allows you to analyze your Java source code and detect data access API calls and queries. The toolkit provides a single-pane view of what needs to be addressed to support the new database back end. To learn more, see [Migrate your Java application from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727).
+[Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code. It allows you to analyze your Java source code and detect data access API calls and queries. The toolkit provides a single-pane view of what needs to be addressed to support the new database back end. To learn more, see [Migrate your Java application from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727).
### Perform tests
To test your database migration, complete these activities:
4. **Run performance tests**. Run performance tests against the source and the target, and then analyze and compare the results.
+### Validate migrated objects
+
+Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database objects. SSMA Tester verifies that the converted objects behave the same way as the source objects.
+
+#### Create test case
+
+1. Open SSMA for Oracle, and then select **Tester** > **New Test Case**.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/ssma-tester-new.png" alt-text="Screenshot that shows new test case.":::
+
+1. On the Test Case wizard, provide the following information:
+
+ **Name**: Enter the name to identify the test case.
+
+   **Creation date**: Today's date, filled in automatically.
+
+   **Last Modified date**: Filled in automatically; don't change it.
+
+ **Description**: Enter any additional information to identify the purpose of the test case.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-init-test-case.png" alt-text="Screenshot that shows step to initialize a test case.":::
+
+1. Select the objects that are part of the test case from the Oracle object tree located on the left side.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-select-configure-objects.png" alt-text="Screenshot that shows step to select and configure object.":::
+
+ In this example, stored procedure `ADD_REGION` and table `REGION` are selected.
+
+   To learn more, see [Selecting and configuring objects to test](/sql/ssma/oracle/selecting-and-configuring-objects-to-test-oracletosql).
+
+1. Next, select the tables, foreign keys, and other dependent objects from the Oracle object tree in the left window.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-select-configure-affected.png" alt-text="Screenshot that shows step to select and configure affected object.":::
+
+   To learn more, see [Selecting and configuring affected objects](/sql/ssma/oracle/selecting-and-configuring-affected-objects-oracletosql).
+
+1. Review the evaluation sequence of objects. Change the order by selecting the buttons in the grid.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/test-call-ordering.png" alt-text="Screenshot that shows step to sequence test object execution.":::
+
+1. Finalize the test case by reviewing the information provided in the previous steps. Configure the test execution options based on the test scenario.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-finalize-case.png" alt-text="Screenshot that shows step to finalize object.":::
+
+   For more information on test case settings, see [Finishing test case preparation](/sql/ssma/oracle/finishing-test-case-preparation-oracletosql).
+
+1. Select **Finish** to create the test case.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-test-repo.png" alt-text="Screenshot that shows step to test repo.":::
+
+#### Run test case
+
+When SSMA Tester runs a test case, the test engine executes the objects selected for testing and generates a verification report.
+
+1. Select the test case from the test repository, and then select **Run**.
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-repo-run.png" alt-text="Screenshot that shows to review test repo.":::
+
+1. Review the test case at launch, and then select **Run**.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-run-test-case.png" alt-text="Screenshot that shows step to launch test case.":::
+
+1. Next, provide the Oracle source credentials, and then select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-oracle-connect.png" alt-text="Screenshot that shows step to connect to oracle source.":::
+
+1. Provide the target SQL Server credentials, and then select **Connect**.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-sqlservervm-connect.png" alt-text="Screenshot that shows step to connect to sql target.":::
+
+   On success, the test case moves to the initialization stage.
+
+1. A real-time progress bar shows the execution status of the test run.
+
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-run-status.png" alt-text="Screenshot that shows tester test progress.":::
+
+1. Review the report after the test is completed. The report provides statistics, any errors that occurred during the test run, and a detailed report.
+
+   :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-test-result.png" alt-text="Screenshot that shows a sample tester test report.":::
+
+1. Select **Details** to get more information.
+
+ Example of positive data validation.
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-test-success.png" alt-text="Screenshot that shows a sample tester success report.":::
+
+ Example of failed data validation.
+ :::image type="content" source="./media/oracle-to-sql-on-azure-vm-guide/tester-test-failed.png" alt-text="Screenshot that shows tester failure report.":::
+
### Optimize

The post-migration phase is crucial for reconciling any data accuracy problems and verifying completeness. It's also critical for addressing performance issues with the workload.
> For more information about these problems and specific steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
-## Migration resources
+## Migration resources
For more help with completing this migration scenario, see the following resources, which were developed to support a real-world migration project.
azure-sql Automated Backup Sql 2014 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-backup-sql-2014.md
First, you can poll the status by calling [msdb.smart_admin.sp_get_backup_diagno
Another option is to take advantage of the built-in Database Mail feature for notifications.

1. Call the [msdb.smart_admin.sp_set_parameter](/sql/relational-databases/system-stored-procedures/managed-backup-sp-set-parameter-transact-sql) stored procedure to assign an email address to the **SSMBackup2WANotificationEmailIds** parameter.
-1. Enable [SendGrid](../../../sendgrid-dotnet-how-to-send-email.md) to send the emails from the Azure VM.
+1. Enable [SendGrid](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#create-a-twilio-sendgrid-accountcreate-a-twilio-sendgrid-account) to send the emails from the Azure VM.
1. Use the SMTP server and user name to configure Database Mail. You can configure Database Mail in SQL Server Management Studio or with Transact-SQL commands. For more information, see [Database Mail](/sql/relational-databases/database-mail/database-mail).
1. [Configure SQL Server Agent to use Database Mail](/sql/relational-databases/database-mail/configure-sql-server-agent-mail-to-use-database-mail).
1. Verify that the SMTP port is allowed both through the local VM firewall and the network security group for the VM.
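The `sp_set_parameter` call from step 1 can also be issued from client code. A minimal sketch, assuming pyodbc (or any DB-API connection) and a reachable instance; only the procedure and parameter names come from this article:

```python
# Hypothetical helper: set the managed-backup notification email address.
# The ODBC CALL syntax is standard; pass in an open pyodbc connection.
SET_PARAM_SQL = "{CALL msdb.smart_admin.sp_set_parameter (?, ?)}"

def set_notification_email(conn, email):
    with conn.cursor() as cur:
        cur.execute(SET_PARAM_SQL, "SSMBackup2WANotificationEmailIds", email)
    conn.commit()

print(SET_PARAM_SQL)
```

The same sketch applies to the `msdb.managed_backup` variant of the procedure on later SQL Server versions.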
You can find additional backup and restore guidance for SQL Server on Azure VMs
For information about other available automation tasks, see [SQL Server IaaS Agent Extension](sql-server-iaas-agent-extension-automate-management.md).
-For more information about running SQL Server on Azure VMs, see [SQL Server on Azure virtual machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
+For more information about running SQL Server on Azure VMs, see [SQL Server on Azure virtual machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
azure-sql Automated Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-backup.md
First, you can poll the status by calling [msdb.managed_backup.sp_get_backup_dia
Another option is to take advantage of the built-in Database Mail feature for notifications.

1. Call the [msdb.managed_backup.sp_set_parameter](/sql/relational-databases/system-stored-procedures/managed-backup-sp-set-parameter-transact-sql) stored procedure to assign an email address to the **SSMBackup2WANotificationEmailIds** parameter.
-1. Enable [SendGrid](../../../sendgrid-dotnet-how-to-send-email.md) to send the emails from the Azure VM.
+1. Enable [SendGrid](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021#create-a-twilio-sendgrid-accountcreate-a-twilio-sendgrid-account) to send the emails from the Azure VM.
1. Use the SMTP server and user name to configure Database Mail. You can configure Database Mail in SQL Server Management Studio or with Transact-SQL commands. For more information, see [Database Mail](/sql/relational-databases/database-mail/database-mail).
1. [Configure SQL Server Agent to use Database Mail](/sql/relational-databases/database-mail/configure-sql-server-agent-mail-to-use-database-mail).
1. Verify that the SMTP port is allowed both through the local VM firewall and the network security group for the VM.
You can find additional backup and restore guidance for SQL Server on Azure VMs
For information about other available automation tasks, see [SQL Server IaaS Agent Extension](sql-server-iaas-agent-extension-automate-management.md).
-For more information about running SQL Server on Azure VMs, see [SQL Server on Azure virtual machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
+For more information about running SQL Server on Azure VMs, see [SQL Server on Azure virtual machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
Azure allows you to deploy a virtual machine (VM) with an image of SQL Server bu
| Changes | Details |
| --- | --- |
-| **Security enhancements in the Azure portal** | Once you've enabled [Azure Defender for SQL](/azure/security-center/defender-for-sql-usage), you can view Security Center recommendations in the [SQL virtual machines resource in the Azure portal](manage-sql-vm-portal.md#security-center). |
+| **Security enhancements in the Azure portal** | Once you've enabled [Azure Defender for SQL](../../../security-center/defender-for-sql-usage.md), you can view Security Center recommendations in the [SQL virtual machines resource in the Azure portal](manage-sql-vm-portal.md#security-center). |
## May 2021
* [Overview of SQL Server on a Linux VM](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md)
* [Provision SQL Server on a Linux virtual machine](../linux/sql-vm-create-portal-quickstart.md)
* [FAQ (Linux)](../linux/frequently-asked-questions-faq.yml)
-* [SQL Server on Linux documentation](/sql/linux/sql-server-linux-overview)
+* [SQL Server on Linux documentation](/sql/linux/sql-server-linux-overview)
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The following table details these benefits:
| **View disk utilization in portal** | Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. <br/> Management mode: Full |
| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full|
| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
-| **Security Center Portal integration** | If you've enabled [Azure Defender for SQL](/azure/security-center/defender-for-sql-usage), then you can view Security Center recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
+| **Security Center Portal integration** | If you've enabled [Azure Defender for SQL](../../../security-center/defender-for-sql-usage.md), then you can view Security Center recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
## Management modes
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the art
For more information about running SQL Server on Azure Virtual Machines, see the [What is SQL Server on Azure Virtual Machines?](sql-server-on-azure-vm-iaas-what-is-overview.md).
-To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
+To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
azure-video-analyzer Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/direct-methods.md
Last updated 06/01/2021
Azure Video Analyzer IoT edge module `avaedge` exposes several direct methods that can be invoked from IoT Hub. Direct methods represent a request-reply interaction with a device similar to an HTTP call in that they succeed or fail immediately (after a user-specified timeout). This approach is useful for scenarios where the course of immediate action is different depending on whether the device was able to respond. For more information, see [Understand and invoke direct methods from IoT Hub](../../iot-hub/iot-hub-devguide-direct-methods.md).
-This topic describes these methods and conventions.
+This topic describes these methods, conventions, and the schema of the methods.
## Conventions
Following are some of the error codes used at the detail level.
|409| ResourceValidationError| Referenced resource (example: video resource) is not in a valid state.|

## Supported direct methods
-Following are the direct methods exposed by the Video Analyzer edge module.
+Following are the direct methods exposed by the Video Analyzer edge module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzerSdkDefinitions.json).
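Per the payload convention in the schema linked above, each direct-method request body carries an `@apiVersion` field plus method-specific properties. A minimal sketch of building such a body (the helper function is an illustration, not part of the module; confirm the version string against the schema):

```python
import json

def build_method_payload(api_version="1.0", **extra):
    # Hypothetical helper: compose a direct-method request body
    # for the avaedge module ("@apiVersion" plus method properties).
    payload = {"@apiVersion": api_version}
    payload.update(extra)
    return payload

# pipelineTopologyList takes no additional properties in this sketch.
request_body = json.dumps(build_method_payload())
print(request_body)
```

The resulting body could then be passed to an IoT Hub module direct-method invocation, for example from the Azure CLI or an IoT Hub SDK.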
### pipelineTopologyList
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-portal.md
When you use this quickstart, events will be sent to IoT Hub. To see these event
## Use direct method calls
-You can now analyze live video streams by invoking direct methods that the Video Analyzer edge module exposes. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods that the module provides.
+You can now analyze live video streams by invoking direct methods that the Video Analyzer edge module exposes. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods that the module provides. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzerSdkDefinitions.json).
### Enumerate pipeline topologies
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events.md
When you run this quickstart, events will be sent to the IoT Hub. To see the
## Use direct method calls
-You can now analyze live video streams by invoking direct methods exposed by the Video Analyzer edge module. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods provided by the module.
+You can now analyze live video streams by invoking direct methods exposed by the Video Analyzer edge module. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods provided by the module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzerSdkDefinitions.json).
### Enumerate pipeline topologies
azure-video-analyzer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/considerations-when-use-at-scale.md
To see an example of how to upload videos using URL, check out [this example](up
## Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Reserved Units](https://docs.microsoft.com/azure/media-services/latest/concept-media-reserved-units)(MRUs) auto scaling by [Azure Media Services](https://docs.microsoft.com/azure/media-services/latest/media-services-overview) (AMS), as a result you do not need to manage them through Azure Video Analyzer for Media. That will allow price optimization, e.g. price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st, 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Reserved Units](../../media-services/latest/concept-media-reserved-units.md) (MRUs) auto scaling by [Azure Media Services](../../media-services/latest/media-services-overview.md) (AMS), so you no longer need to manage MRUs through Azure Video Analyzer for Media. This allows price optimization (for example, price reduction in many cases) based on your business needs, because MRUs are auto scaled.
## Respect throttling
Therefore, we recommend you to verify that you get the right results for your us
## Next steps
-[Examine the Azure Video Analyzer for Media output produced by API](video-indexer-output-json-v2.md)
+[Examine the Azure Video Analyzer for Media output produced by API](video-indexer-output-json-v2.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Azure VMware Solution will apply important updates starting in March 2021. You'l
## July 23, 2021
-All new Azure VMware Solution private clouds are now deployed with NSX-T version 3.1.2. NSX-T version in existing private clouds will be upgraded through September, 2021 to NSX-T 3.1.2 release.
+All new Azure VMware Solution private clouds are now deployed with NSX-T version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T version in existing private clouds will be upgraded through September, 2021 to NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-For more information on this NSX-T version, see [VMware NSX-T Data Center 3.1.2 Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
+For more information on this NSX-T version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
## May 25, 2021
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-api-management.md
Last updated 04/28/2021
# Publish and protect APIs running on Azure VMware Solution VMs
-Microsoft Azure [API Management](https://azure.microsoft.com/services/api-management/) lets you securely publish to external or internal consumers. Only the Developer (development) and Premium (production) SKUs allow for Azure Virtual Network integration to publish APIs that run on Azure VMware Solution workloads. Both SKUs enable the connectivity between the API Management service and the backend.
-
-The API Management configuration is the same for backend services that run on top of Azure VMware Solution virtual machines (VMs) and on-premises. For both deployments, API Management configures the virtual IP on the load balancer as the backend endpoint when the backend server is placed behind an NSX Load Balancer on the Azure VMware Solution.
+Microsoft Azure [API Management](https://azure.microsoft.com/services/api-management/) lets you securely publish to external or internal consumers. Only the Developer (development) and Premium (production) SKUs allow Azure Virtual Network integration to publish APIs that run on Azure VMware Solution workloads. In addition, both SKUs enable the connectivity between the API Management service and the backend.
+The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. In addition, API Management configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on the Azure VMware Solution.
## External deployment
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-hub-and-spoke.md
For more information on Azure VMware Solution networking and connectivity concep
### Traffic segmentation
-[Azure Firewall](../firewall/index.yml) is the Hub and Spoke topology's central piece, deployed on the Hub virtual network. Use Azure Firewall or another Azure supported network virtual appliance to establish traffic rules and segment the communication between the different spokes and Azure VMware Solution workloads.
+[Azure Firewall](../firewall/index.yml) is the Hub and Spoke topology's central piece, deployed on the Hub virtual network. Use Azure Firewall or another Azure supported network virtual appliance (NVA) to establish traffic rules and segment the communication between the different spokes and Azure VMware Solution workloads.
Create route tables to direct the traffic to Azure Firewall. For the Spoke virtual networks, create a route that sets the default route to the internal interface of Azure Firewall. This way, when a workload in the Virtual Network needs to reach the Azure VMware Solution address space, the firewall can evaluate it and apply the corresponding traffic rule to either allow or deny it.
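The route evaluation described above follows standard longest-prefix matching: with only a 0.0.0.0/0 route pointing at the firewall's internal interface, any traffic without a more specific route (including traffic bound for the Azure VMware Solution address space) is sent to Azure Firewall for inspection. A minimal sketch of that selection logic, with illustrative next-hop labels rather than real Azure route objects:

```python
from ipaddress import ip_address, ip_network

def select_next_hop(routes, destination):
    """Pick the next hop for a destination IP by longest-prefix match,
    the same rule Azure route tables apply."""
    dest = ip_address(destination)
    matches = [(ip_network(prefix), hop) for prefix, hop in routes.items()
               if dest in ip_network(prefix)]
    # The most specific matching prefix wins; 0.0.0.0/0 loses to anything narrower.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical spoke route table: default route forced through the firewall.
routes = {
    "0.0.0.0/0": "azure-firewall-internal-ip",  # everything else via Azure Firewall
    "10.1.0.0/24": "local-subnet",              # traffic within the spoke stays local
}

print(select_next_hop(routes, "10.10.1.5"))  # AVS-bound → azure-firewall-internal-ip
print(select_next_hop(routes, "10.1.0.7"))   # local → local-subnet
```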
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Azure VMware Solution private clouds are provisioned with a vCenter Server and N
## vCenter access and identity
-In Azure VMware Solution, vCenter has a built-in local user called cloudadmin and assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). In general, the CloudAdmin role creates and manages workloads in your private cloud. But in Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions.
+In Azure VMware Solution, vCenter has a built-in local user called cloudadmin and is assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). In general, the CloudAdmin role creates and manages workloads in your private cloud. But in Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions.
- In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator\@vsphere.local account. They can also have more AD users and groups assigned.
- In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. They can, however, assign AD users and groups to the CloudAdmin role on vCenter.
-The private cloud user doesn't have access to and can't configure specific management components supported and managed by Microsoft. For example, clusters, hosts, datastores, and distributed virtual switches.
+The private cloud user doesn't have access to and can't configure specific management components Microsoft supports and manages. For example, clusters, hosts, datastores, and distributed virtual switches.
> [!IMPORTANT]
> Azure VMware Solution offers custom roles on vCenter but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
The private cloud user doesn't have access to and can't configure specific manag
You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
-1. Sign into the vSphere Client and go to **Menu** > **Administration**.
+1. Sign in to the vSphere Client and go to **Menu** > **Administration**.
1. Under **Access Control**, select **Roles**.
You'll use the CloudAdmin role to create, modify, or delete custom roles with pr
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles.

#### Create a custom role
-1. Sign into vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
+1. Sign in to vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
1. Navigate to the **Roles** configuration section and select **Menu** > **Administration** > **Access Control** > **Roles**.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
#### Apply a custom role
-1. Navigate to the object that requires the added permission. For example, to apply the permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
+1. Navigate to the object that requires the added permission. For example, to apply permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
1. Right-click the object and select **Add Permission**.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Search for the user or group after selecting the Identity Source under the **User** section.
-1. Select the role that will be applied for the user or group.
+1. Select the role that you want to apply to the user or group.
1. Check the **Propagate to children** if needed, and select **OK**. The added permission displays in the **Permissions** section.

## NSX-T Manager access and identity

>[!NOTE]
->NSX-T 3.1.2 is currently supported for all new private clouds.
+>NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] is currently supported for all new private clouds.
-Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. The privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
+Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. In addition, the privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
## Next steps
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
There are two ways to interconnectivity in the Azure VMware Solution private clo
- [**Full on-premises to private cloud interconnectivity**](#on-premises-interconnectivity) extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
-In this article, we'll cover the key concepts that establish networking and interconnectivity, including requirements and limitations. This article provides you with the information you need to know to configure your networking to work with Azure VMware Solution.
+This article covers the key concepts that establish networking and interconnectivity, including requirements and limitations. In addition, this article provides you with the information you need to know to work with Azure VMware Solution to configure your networking.
## Azure VMware Solution private cloud use cases
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/install-vmware-hcx.md
In this step, you'll download the VMware HCX Connector OVA file, and then you'll
1. Select a name and location, and select a resource or cluster where you're deploying the VMware HCX Connector. Then review the details and required resources and select **Next**.
-1. Review license terms, select the required storage and network and then select **Next**.
+1. Review license terms, select the required storage and network, and then select **Next**.
1. Select the [VMware HCX management network segment](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments) that you defined during the planning state. Then select **Next**.
You can uninstall HCX Advanced through the portal, which removes the existing pa
1. Enter **yes** to confirm the uninstall.
-At this point, HCX Advanced will no longer have the vCenter plugin, and if needed, it can be reinstalled at any time.
+At this point, HCX Advanced will no longer have the vCenter plugin, and if needed, you can reinstall it at any time.
## Next steps
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-access-private-cloud.md
Last updated 03/13/2021
# Tutorial: Access an Azure VMware Solution private cloud
-Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter. You'll need to connect to the Azure VMware Solution vCenter instance through a jump box.
+Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter. Instead, you'll need to connect to the Azure VMware Solution vCenter instance through a jump box.
In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign in to the Azure VMware Solution vCenter. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter and the NSX Manager.

In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Create a Windows virtual machine for access to the Azure VMware Solution vCenter
-> * Sign into vCenter from this virtual machine
+> * Create a Windows VM to access the Azure VMware Solution vCenter
+> * Sign into vCenter from this VM
## Create a new Windows virtual machine
-1. In the resource group, select **Add**, search for and select **Microsoft Windows 10**, and then select **Create**.
+1. In the resource group, select **Add**, search for and select **Microsoft Windows 10**. Then select **Create**.
:::image type="content" source="media/tutorial-access-private-cloud/ss8-azure-w10vm-create.png" alt-text="Screenshot of how to add a new Windows 10 VM for a jump box.":::
In this tutorial, you learn how to:
If you need help with connecting to the VM, see [connect to a virtual machine](../virtual-machines/windows/connect-logon.md#connect-to-the-virtual-machine) for details.
-1. In the Windows VM, open a browser and navigate to the vCenter and NSX-T Manger URLs in two tabs.
+1. In the Windows VM, open a browser and navigate to the vCenter and NSX-T Manager URLs in two tabs.
1. In the vCenter tab, enter the `cloudadmin@vmcp.local` user credentials from the previous step.
In this tutorial, you learn how to:
:::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client." border="true":::
-1. In the second tab of the browser, sign in to NSX-T manager.
+1. In the second tab of the browser, sign in to NSX-T Manager.
:::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview." border="true":::
In this tutorial, you learn how to:
In this tutorial, you learned how to:

> [!div class="checklist"]
-> * Create a Windows virtual machine to use to connect to vCenter
-> * Login to vCenter from your virtual machine
+> * Create a Windows VM to use to connect to vCenter
+> * Login to vCenter from your VM
Continue to the next tutorial to learn how to create a virtual network to set up local management for your private cloud clusters.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
Last updated 07/30/2021
# Tutorial: Configure networking for your VMware private cloud in Azure
-An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter, extra steps are needed for integration with your on-premises environment. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
+An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter, you'll need to do additional steps to integrate with your on-premises environment. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
When you select an existing vNet, the Azure Resource Manager (ARM) template that
3. Select **Save**.
- At this point, the vNet validates if overlapping IP address spaces between Azure VMware Solution and vNet are detected. If detected, then change the network address of either the private cloud or the vNet so they don't overlap.
+ At this point, the vNet validates if overlapping IP address spaces between Azure VMware Solution and vNet are detected. If detected, change the network address of either the private cloud or the vNet so they don't overlap.
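The overlap validation described above can be reproduced locally when planning address spaces. A minimal sketch using Python's standard `ipaddress` module; the CIDR values are illustrative:

```python
from ipaddress import ip_network

def find_overlaps(private_cloud_cidr, vnet_cidrs):
    """Return the vNet address spaces that overlap the private cloud block,
    mirroring the check the portal performs when you save the connection."""
    pc = ip_network(private_cloud_cidr)
    return [c for c in vnet_cidrs if ip_network(c).overlaps(pc)]

# A hypothetical /22 private cloud block checked against two vNet address spaces.
print(find_overlaps("10.2.0.0/22", ["10.2.1.0/24", "10.3.0.0/16"]))  # → ['10.2.1.0/24']
```

If the returned list is non-empty, change the network address of either the private cloud or the vNet before retrying, as the step above describes.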
### Create a new vNet
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
You use vSphere and NSX-T Manager to manage most other aspects of cluster config
>[!TIP]
>You can always extend the cluster and add additional clusters later if you need to go beyond the initial deployment number.
-Because Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter at launch, extra configuration is needed. These procedures and related prerequisites are covered in this tutorial.
+Because Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter at launch, you'll need to do additional steps for the configuration. This tutorial covers these steps and related prerequisites.
In this tutorial, you'll learn how to:
In this tutorial, you've learned how to:
> * Verify the private cloud deployed > * Delete an Azure VMware Solution private cloud
-Continue to the next tutorial to learn how to create a jump box. You use the jump box to connect to your environment so that you can manage your private cloud locally.
+Continue to the next tutorial to learn how to create a jump box. You use the jump box to connect to your environment to manage your private cloud locally.
> [!div class="nextstepaction"]
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-delete-private-cloud.md
Last updated 03/13/2021
# Tutorial: Delete an Azure VMware Solution private cloud
-If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. The private cloud includes an isolated network domain, one or more provisioned vSphere clusters on dedicated server hosts, and several virtual machines (VMs). When you delete a private cloud, all of the VMs, their data, and clusters are deleted. The dedicated Azure VMware Solution hosts are securely wiped and returned to the free pool. The network address space provisioned is also deleted.
+If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. The private cloud includes:
+
+* An isolated network domain
+
+* One or more provisioned vSphere clusters on dedicated server hosts
+
+* Several virtual machines (VMs)
+
+When you delete a private cloud, all VMs, their data, clusters, and network address space provisioned get deleted. The dedicated Azure VMware Solution hosts are securely wiped and returned to the free pool.
> [!CAUTION]
-> Deleting the private cloud is an irreversible operation. Once the private cloud is deleted, the data cannot be recovered, as it terminates all running workloads and components and destroys all private cloud data and configuration settings, including public IP addresses.
+> Deleting the private cloud terminates all running workloads and components and is an irreversible operation. Once you delete the private cloud, you cannot recover the data.
## Prerequisites
-If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. There's no way to recover the VMs and their data.
+If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. Unfortunately, there's no way to recover the VMs and their data.
## Delete the private cloud

1. Access the Azure VMware Solutions console in the [Azure portal](https://portal.azure.com).
-2. Select the private cloud to be deleted.
+2. Select the private cloud you want to delete.
3. Enter the name of the private cloud and select **Yes**.
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
After you're finished, follow the recommended next steps at the end to continue
- Review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-cli.md#enable-connectivity-between-expressroute-circuits-in-different-azure-subscriptions). -- A separate, functioning ExpressRoute circuit to connect on-premises environments to Azure, which is _circuit 1_ for peering.
+- A separate, functioning ExpressRoute circuit for connecting on-premises environments to Azure, which is _circuit 1_ for peering.
- Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
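The 4-byte ASN requirement above means every gateway in the path must handle AS numbers beyond the classic 16-bit range, up to 2^32 − 1 (RFC 6793). A minimal sketch of that distinction, with illustrative ASN values:

```python
def asn_class(asn):
    """Classify a BGP Autonomous System Number. 2-byte ASNs fit in 16 bits;
    4-byte ASNs (RFC 6793) need the full 32-bit range that every gateway in
    the path, including the ExpressRoute provider's service, must support."""
    if not 0 <= asn <= 2**32 - 1:
        raise ValueError("not a valid ASN")
    return "2-byte" if asn <= 2**16 - 1 else "4-byte"

print(asn_class(65001))       # → 2-byte (classic private ASN)
print(asn_class(4200000000))  # → 4-byte
```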
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
Last updated 07/01/2021
# Networking planning checklist for Azure VMware Solution
-Azure VMware Solution offers a VMware private cloud environment accessible for users and applications from on-premises and Azure-based environments or resources. The connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. It requires specific network address ranges and firewall ports to enable the services. This article provides you with the information you need to configure your networking to work with Azure VMware Solution properly.
+Azure VMware Solution offers a VMware private cloud environment accessible for users and applications from on-premises and Azure-based environments or resources. The connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. It requires specific network address ranges and firewall ports to enable the services. This article provides you with the information you need to properly configure your networking to work with Azure VMware Solution.
In this tutorial, you'll learn about:
In this tutorial, you'll learn about:
Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.

## Virtual network and ExpressRoute circuit considerations
-When you create a virtual network connection in your subscription, the ExpressRoute circuit gets established through peering, uses an authorization key and a peering ID you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.
+When you create a virtual network connection in your subscription, the ExpressRoute circuit is established through peering, using an authorization key and a peering ID you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.
> [!NOTE]
> The ExpressRoute circuit is not part of a private cloud deployment. The on-premises ExpressRoute circuit is beyond the scope of this document. If you require on-premises connectivity to your private cloud, you can use one of your existing ExpressRoute circuits or purchase one in the Azure portal.

When deploying a private cloud, you receive IP addresses for vCenter and NSX-T Manager. To access those management interfaces, you'll need to create more resources in your subscription's virtual network. You can find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
-The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and provide North-South connectivity to the internet and Azure services.
+The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and North-South connectivity to the internet and Azure services.
>[!IMPORTANT]
>[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0
## Routing and subnet considerations

The Azure VMware Solution private cloud is connected to your Azure virtual network using an Azure ExpressRoute connection. This high bandwidth, low latency connection allows you to access services running in your Azure subscription from your private cloud environment. The routing is Border Gateway Protocol (BGP) based, automatically provisioned, and enabled by default for each private cloud deployment.
-Azure VMware Solution private clouds require a minimum of a `/22` CIDR network address block for subnets, shown below. This network complements your on-premises networks. The address block shouldn't overlap with address blocks used in other virtual networks in your subscription and on-premises networks. Within this address block, management, provisioning, and vMotion networks get provisioned automatically.
+Azure VMware Solution private clouds require a minimum of a `/22` CIDR network address block for subnets, shown below. This network complements your on-premises networks. Therefore, the address block shouldn't overlap with address blocks used in other virtual networks in your subscription and on-premises networks. Within this address block, management, provisioning, and vMotion networks get provisioned automatically.
>[!NOTE]
>Permitted ranges for your address block are the RFC 1918 private address spaces (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), except for 172.17.0.0/16.
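The constraints above (a /22 or larger block, inside the RFC 1918 ranges, excluding 172.17.0.0/16, non-overlapping with your other networks) are easy to check programmatically. A minimal sketch using Python's standard `ipaddress` module, with illustrative candidate blocks:

```python
from ipaddress import ip_network

# Permitted RFC 1918 ranges, minus the excluded 172.17.0.0/16, per the note above.
RFC1918 = [ip_network(c) for c in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
EXCLUDED = ip_network("172.17.0.0/16")

def valid_avs_block(cidr):
    """Check a candidate Azure VMware Solution address block: at least a /22
    (prefix length 22 or shorter), contained in an RFC 1918 range, and not
    overlapping 172.17.0.0/16. Example values below are illustrative."""
    net = ip_network(cidr)
    return (net.prefixlen <= 22
            and any(net.subnet_of(r) for r in RFC1918)
            and not net.overlaps(EXCLUDED))

print(valid_avs_block("10.175.0.0/22"))  # → True
print(valid_avs_block("172.17.4.0/22"))  # → False (inside the excluded range)
print(valid_avs_block("10.175.0.0/24"))  # → False (smaller than /22)
```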
The subnets:
| Source | Destination | Protocol | Port | Description |
| -- | -- | :-: | :-: | -- |
| Private Cloud DNS server | On-Premises DNS Server | UDP | 53 | DNS Client - Forward requests from PC vCenter for any on-premises DNS queries (check DNS section below) |
| On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (check DNS section below) |
-| On-premises network | Private Cloud vCenter server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. <br><br>WS-Management (also requires port 443 to be open) <br><br>If you use a custom Microsoft SQL database and not the bundled SQL Server 2008 database on the vCenter Server, port 80 is used by the SQL Reporting Services. When you install vCenter Server, the installer prompts you to change the HTTP port for the vCenter Server. Change the vCenter Server HTTP port to a custom value to ensure a successful installation. Microsoft Internet Information Services (IIS) also uses port 80. See Conflict Between vCenter Server and IIS for Port 80. |
+| On-premises network | Private Cloud vCenter server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. <br><br>WS-Management (also requires port 443 to be open) <br><br>If you use a custom Microsoft SQL database and not the bundled SQL Server 2008 database on the vCenter Server, the SQL Reporting Services use port 80. When you install vCenter Server, the installer prompts you to change the HTTP port for the vCenter Server. Change the vCenter Server HTTP port to a custom value to ensure a successful installation. Microsoft Internet Information Services (IIS) also uses port 80. See Conflict Between vCenter Server and IIS for Port 80. |
| Private Cloud management network | On-premises Active Directory | TCP | 389 | This port must be open on the local and all remote instances of vCenter Server. This port is the LDAP port number for the Directory Services for the vCenter Server group. The vCenter Server system needs to bind to port 389, even if you aren't joining this vCenter Server instance to a Linked Mode group. If another service is running on this port, it might be preferable to remove it or change its port to a different port. You can run the LDAP service on any port from 1025 through 65535. If this instance is serving as the Microsoft Windows Active Directory, change the port number from 389 to an available port from 1025 through 65535. This port is optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. |
| On-premises network | Private Cloud vCenter server | TCP(HTTPS) | 443 | This port allows you to access vCenter from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. This port is also used for the following |
| Web Browser | Hybrid Cloud Manager | TCP(HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
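Firewall requirements like those in the table above can also be expressed as data, so a deployment script can verify the openings rather than checking them by hand. A hedged sketch; the source/destination labels are illustrative, not Azure resource identifiers, and only a few of the table's rows are included:

```python
# A subset of the required rules from the table, as (source, destination,
# protocol, port) tuples. Labels are illustrative placeholders.
REQUIRED_RULES = {
    ("private-cloud-dns", "on-prem-dns", "UDP", 53),
    ("on-prem-network", "vcenter", "TCP", 80),
    ("on-prem-network", "vcenter", "TCP", 443),
    ("management-network", "on-prem-ad", "TCP", 389),
    ("web-browser", "hcx-manager", "TCP", 9443),
}

def missing_rules(open_rules):
    """Return the required rules not present in the currently open set."""
    return REQUIRED_RULES - set(open_rules)

# Only vCenter HTTPS is open so far; the other four rules are reported missing.
print(sorted(missing_rules([("on-prem-network", "vcenter", "TCP", 443)])))
```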
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Last updated 07/16/2021
# Tutorial: Add a network segment in Azure VMware Solution
-After deploying Azure VMware Solution, you can configure an NSX-T network segment either from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manger, and vCenter. NSX-T comes pre-previsioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
+After deploying Azure VMware Solution, you can configure an NSX-T network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter. NSX-T comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
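When configuring a segment through NSX-T Manager's REST interface rather than the portal, the request body attaches the segment to a Tier-1 gateway by path. A minimal sketch of building that body; the field names follow the NSX-T Policy API, but the Tier-1 path, segment name, and CIDR below are illustrative assumptions:

```python
def segment_payload(display_name, gateway_cidr, tier1_path):
    """Build the body for a PUT to the NSX-T Policy API segment endpoint
    (/policy/api/v1/infra/segments/<segment-id>). Values here are
    illustrative; your Tier-1 gateway path comes from your own deployment."""
    return {
        "display_name": display_name,
        "subnets": [{"gateway_address": gateway_cidr}],  # segment gateway IP in CIDR form
        "connectivity_path": tier1_path,                 # attach to the Tier-1 gateway
    }

# Hypothetical segment attached to a default Tier-1 gateway.
body = segment_payload("web-segment", "192.168.100.1/24", "/infra/tier-1s/TNT00-T1")
print(body["connectivity_path"])  # → /infra/tier-1s/TNT00-T1
```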
>[!TIP]
>The Azure portal presents a simplified view of the NSX-T operations a VMware administrator needs regularly, targeted at users not familiar with NSX-T Manager.
In this tutorial, you created an NSX-T network segment to use for VMs in vCenter
You can now: - [Configure and manage DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md)-- [Create a content Library to deploy VMs in Azure VMware Solution](deploy-vm-content-library.md)
+- [Create a Content Library to deploy VMs in Azure VMware Solution](deploy-vm-content-library.md)
- [Peer on-premises environments to a private cloud](tutorial-expressroute-global-reach-private-cloud.md)
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-scale-private-cloud.md
Title: Tutorial - Expand or shrink clusters in a private cloud
+ Title: Tutorial - Scale clusters in a private cloud
description: In this tutorial, you use the Azure portal to scale an Azure VMware Solution private cloud. Last updated 08/03/2021
Last updated 08/03/2021
#Customer intent: As a VMware administrator, I want to learn how to scale an Azure VMware Solution private cloud in the Azure portal.
-# Tutorial: Expand or shrink clusters in a private cloud
+# Tutorial: Scale clusters in a private cloud
-To get the most out of your Azure VMware Solution private cloud experience, scale the clusters and hosts to reflect what you need for planned workloads. You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis. The cluster and host limits are provided in the [private cloud concept](concepts-private-clouds-clusters.md) article.
+To get the most out of your Azure VMware Solution private cloud experience, scale the clusters and hosts to reflect what you need for planned workloads. You can scale the clusters and hosts in a private cloud as required for your application workload. You should address performance and availability limitations for specific services on a case-by-case basis.
+ In this tutorial, you'll use the Azure portal to:
In this tutorial, you'll use the Azure portal to:
## Prerequisites
-You'll need an existing private cloud to complete this tutorial. If you haven't created a private cloud, use the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one.
+You'll need an existing private cloud to complete this tutorial. If you haven't created a private cloud, follow the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one.
## Add a new cluster
-1. On the overview page of an existing private cloud, under Manage, select **Clusters** > **Add a cluster**.
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Clusters** > **Add a cluster**.
:::image type="content" source="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="Screenshot showing how to add a cluster to an Azure VMware Solution private cloud." border="true":::
-1. Use the slider to select the number of hosts and the select **Save**.
+1. Use the slider to select the number of hosts and then select **Save**.
:::image type="content" source="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="Screenshot showing how to configure a new cluster." border="true":::
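The portal flow above can also be scripted. Below is a minimal sketch using the Azure CLI `vmware` extension; the resource names (`myResourceGroup`, `myPrivateCloud`, `Cluster-2`) and SKU are illustrative assumptions, not values from this article, and the commands are assembled and printed rather than executed, since they need an authenticated session:

```shell
# Hypothetical resource names; replace with your own.
RG="myResourceGroup"
CLOUD="myPrivateCloud"
CLUSTER="Cluster-2"

# Add a new cluster with three hosts, then scale it to four hosts.
echo "az vmware cluster create --resource-group $RG --private-cloud $CLOUD --name $CLUSTER --sku AV36 --size 3"
echo "az vmware cluster update --resource-group $RG --private-cloud $CLOUD --name $CLUSTER --size 4"
```

The host count here maps to the slider in the portal screenshot above.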
You'll need an existing private cloud to complete this tutorial. If you haven't
## Scale a cluster
-1. On the overview page of an existing private cloud, under Manage, select **Clusters**.
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Clusters**.
1. Select the cluster you want to scale, select **More** (...) and then select **Edit**.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 08/05/2021 Last updated : 08/06/2021 # How to restore Azure VM data in Azure portal
For more information, see [Back up and restore Active Directory domain controlle
Managed identities eliminate the need for the user to maintain the credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.
-Azure Backup offers the flexibility to restore the managed Azure VM with [managed identities](/azure/active-directory/managed-identities-azure-resources/overview). You can choose to select [system-managed identities](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) or user-managed identities as shown in the figure below. This is introduced as one of the input parameters in the [**Restore configuration** blade](/azure/backup/backup-azure-arm-restore-vms#create-a-vm) of Azure VM. Managed identities used as one of the input parameter is only used for accessing the storage accounts, which is used as staging location during restore and not for any other Azure resource controlling. These managed identities have to be associated to the vault.
+Azure Backup offers the flexibility to restore a managed Azure VM with [managed identities](../active-directory/managed-identities-azure-resources/overview.md). You can choose [system-managed identities](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) or user-managed identities, as shown in the figure below. This is introduced as one of the input parameters in the [**Restore configuration** blade](#create-a-vm) of the Azure VM. The managed identity provided as an input parameter is used only to access the storage account that serves as the staging location during restore, and not to control any other Azure resource. These managed identities must be associated with the vault.
:::image type="content" source="./media/backup-azure-arm-restore-vms/select-system-managed-identities-or-user-managed-identities.png" alt-text="Screenshot for choice to select system managed identities or user managed identities.":::
If you choose to select system-assigned or user-assigned managed identities, che
} ```
-Or, add the role assignment on the staging location (Storage Account) to have [Storage account Backup Contributor](/azure/backup/blob-backup-configure-manage#grant-permissions-to-the-backup-vault-on-storage-accounts) and [Storage Blob data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) for the successful restore operation.
+Or, add the role assignment on the staging location (Storage Account) to have [Storage account Backup Contributor](./blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts) and [Storage Blob data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) for the successful restore operation.
:::image type="content" source="./media/backup-azure-arm-restore-vms/add-role-assignment-on-staging-location.png" alt-text="Screenshot for adding the role assignment on the staging location.":::
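As a hedged sketch, the two role assignments on the staging storage account could also be granted with `az role assignment create`; the principal ID and scope below are placeholders, and the commands are only printed for illustration, since they require an authenticated session:

```shell
# Placeholder identifiers; substitute the vault identity's principal ID and
# the staging storage account's full resource ID.
PRINCIPAL_ID="00000000-0000-0000-0000-000000000000"
SCOPE="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<staging-account>"

# Print the assignment command for both roles needed for the restore.
for ROLE in "Storage Account Backup Contributor" "Storage Blob Data Contributor"; do
  echo "az role assignment create --assignee-object-id $PRINCIPAL_ID --role \"$ROLE\" --scope $SCOPE"
done
```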
-You can also select the [user-managed identity](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal) by providing the input as their MSI Resource ID as provided in the figure below.
+You can also select the [user-managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) by providing the input as their MSI Resource ID as provided in the figure below.
:::image type="content" source="./media/backup-azure-arm-restore-vms/select-user-managed-identity-by-providing-input-as-msi-resource-id.png" alt-text="Screenshot for selecting the user-managed identity by providing the input as their MSI Resource ID.":::

>[!Note]
->The support is available for only managed VMs, and not supported for classic VMs and unmanaged VMs. For the [storage accounts that are restricted with firewalls](/azure/storage/common/storage-network-security?tabs=azure-portal), system MSI is only supported.
+>The support is available for only managed VMs, and not supported for classic VMs and unmanaged VMs. For the [storage accounts that are restricted with firewalls](../storage/common/storage-network-security.md?tabs=azure-portal), system MSI is only supported.
>
>Cross Region Restore isn't supported with managed identities.
>
->Currently, this is available in all Azure public regions, except Germany West Central and India Central.
+>Currently, this is available in all Azure public and national cloud regions.
## Track the restore operation
There are a few things to note after restoring a VM:
There are a few things to note after restoring a VM:

## Next steps

- If you experience difficulties during the restore process, [review](backup-azure-vms-troubleshoot.md#restore) common issues and errors.
-- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
+- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
To properly delete a vault, you must follow the steps in this order:
After you've completed these steps, you can continue to [delete the vault](#delete-the-recovery-services-vault).
-If you don't have any protected items on-premises or cloud, but are still getting the vault deletion error, perform the steps in [Delete the Recovery Services vault by using Azure Resource Manager](#delete-the-recovery-services-vault-by-using-azure-resource-manager)
+If you're **still unable to delete the vault** even though it contains no dependencies, follow the steps in [**Delete the Recovery Services vault by using Azure Resource Manager**](#delete-the-recovery-services-vault-by-using-azure-resource-manager).
## Delete protected items in the cloud
backup Backup Azure Manage Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-manage-vms.md
A notification lets you know that the backup jobs have been stopped.
To stop protection and delete data of a VM:

>[!Note]
->For recovery points in archive that haven't stayed for a duration of 180 days in Archive Tier, deletion of those recovery points lead to early deletion cost. [Learn more](/azure/storage/blobs/storage-blob-storage-tiers#cool-and-archive-early-deletion).
+>For recovery points in archive that haven't stayed in the Archive tier for 180 days, deleting those recovery points leads to an early deletion cost. [Learn more](../storage/blobs/storage-blob-storage-tiers.md#cool-and-archive-early-deletion).
1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**.
To protect your data, Azure Backup includes the soft delete feature. With soft d
* Learn how to [back up Azure VMs from the VM's settings](backup-azure-vms-first-look-arm.md).
* Learn how to [restore VMs](backup-azure-arm-restore-vms.md).
-* Learn how to [monitor Azure VM backups](./backup-azure-monitoring-built-in-monitor.md).
+* Learn how to [monitor Azure VM backups](./backup-azure-monitoring-built-in-monitor.md).
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 03/05/2019 Last updated : 08/06/2021 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
Azure Backup provides multiple backup solutions based on the backup requirement
You can monitor all your backup items via a Recovery Services vault. Navigating to the **Backup Items** section in the vault opens up a view that provides the number of backup items of each workload type associated with the vault. Clicking on any row opens up a detailed view listing all backup items of the given workload type, with information on the last backup status for each item, latest restore point available, and so on.
-![RS vault backup items](media/backup-azure-monitoring-laworkspace/backup-items-view.png)
+![Screenshot for viewing RS vault backup items](media/backup-azure-monitoring-laworkspace/backup-items-view.png)
> [!NOTE]
> For items backed up to Azure using DPM, the list will show all the data sources protected (both disk and online) using the DPM server. If protection is stopped for a datasource with the backup data retained, the datasource is still listed in the portal. You can go to the details of the data source to see whether recovery points are present on disk, online, or both. Also, for datasources whose online protection is stopped but data is retained, billing for the online recovery points continues until the data is completely deleted.
You can monitor all your backup items via a Recovery Services vault. Navigating
Azure Backup provides in-built monitoring and alerting capabilities for workloads being protected by Azure Backup. In the Recovery Services vault settings, the **Monitoring** section provides in-built jobs and alerts.
-![RS vault inbuilt monitoring](media/backup-azure-monitoring-laworkspace/rs-vault-inbuiltmonitoring.png)
+![Screenshot for RS vault inbuilt monitoring](media/backup-azure-monitoring-laworkspace/rs-vault-inbuilt-monitoring-menu.png)
Jobs are generated when operations such as configuring backup, back up, restore, delete backup, and so on, are performed.
Jobs from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup
## Backup Alerts in Recovery Services vault
-> [!NOTE]
-> Viewing alerts across vaults is currently not supported in Backup Center. You need to navigate to an individual vault to view alerts for that vault.
- Alerts are primarily scenarios where users are notified so that they can take relevant action. The **Backup Alerts** section shows alerts generated by the Azure Backup service. These alerts are defined by the service; users can't create custom alerts.

### Alert scenarios
Based on alert severity, alerts can be defined in three types:
Once an alert is raised, users are notified. Azure Backup provides an inbuilt notification mechanism via e-mail. One can specify individual email addresses or distribution lists to be notified when an alert is generated. You can also choose whether to get notified for each individual alert or to group them in an hourly digest and then get notified.
-![RS Vault inbuilt email notification](media/backup-azure-monitoring-laworkspace/rs-vault-inbuiltnotification.png)
+![RS vault inbuilt email notification screenshot](media/backup-azure-monitoring-laworkspace/rs-vault-inbuiltnotification.png)
When notification is configured, you'll receive a welcome or introductory email. This confirms that Azure Backup can send emails to these addresses when an alert is raised.<br>
If the frequency was set to an hourly digest and an alert was raised and resolve
To inactivate/resolve an active alert, you can select the list item corresponding to the alert you wish to inactivate. This opens up a screen that displays detailed information about the alert, with an **Inactivate** button on the top. Selecting this button will change the status of the alert to **Inactive**. You may also inactivate an alert by right-clicking on the list item corresponding to that alert and selecting **Inactivate**.
-![RS Vault alert inactivation](media/backup-azure-monitoring-laworkspace/vault-alert-inactivation.png)
+![Screenshot for Backup center alert inactivation](media/backup-azure-monitoring-laworkspace/vault-alert-inactivate.png)
## Azure Monitor alerts for Azure Backup (preview)
To opt-in to Azure Monitor alerts for backup failure and restore failure scenari
1. Navigate to the Azure portal and search for **Preview Features**.
- ![Preview Features](media/backup-azure-monitoring-laworkspace/portal-preview-features.png)
+ ![Screenshot for viewing preview features in portal](media/backup-azure-monitoring-laworkspace/portal-preview-features.png)
2. You can view the list of all preview features that are available for you to opt in to.
   * If you wish to receive job failure alerts for workloads backed up to Recovery Services vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to the Microsoft.RecoveryServices provider (column 3).
   * If you wish to receive job failure alerts for workloads backed up to Backup vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to the Microsoft.DataProtection provider (column 3).
- ![Alerts Preview](media/backup-azure-monitoring-laworkspace/alert-preview-feature-flags.png)
+ ![Screenshot for Alerts preview registration](media/backup-azure-monitoring-laworkspace/alert-preview-feature-flags.png)
3. Click **Register** to enable this feature for your subscription.

> [!NOTE]
- > It may take up to 30 minutes for the registration to take effect. If you wish to enable this feature for multiple subscriptions, repeat the above process by selecting the relevant subscription at the top of the screen.
-
+ > It may take up to 24 hours for the registration to take effect. To enable this feature for multiple subscriptions, repeat the above process, selecting the relevant subscription at the top of the screen. We also recommend re-registering the preview flag if a new resource has been created in the subscription after the initial registration, to continue receiving alerts.
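The registration steps above can be sketched with the Azure CLI; `az feature register` is the command behind the portal's preview-features blade, and the per-subscription switch shown in the comment is an assumption. The commands are printed rather than run, since they require an authenticated session:

```shell
FLAG="EnableAzureBackupJobFailureAlertsToAzureMonitor"

# Print the registration command for both resource providers described above.
for NS in Microsoft.RecoveryServices Microsoft.DataProtection; do
  echo "az feature register --namespace $NS --name $FLAG"
done

# Repeat per subscription if needed, e.g. after: az account set --subscription <sub-id>
```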
### Viewing fired alerts in the Azure portal
-Once an alert is fired for a vault, you can view the alert in the Azure portal by navigating to the vault and clicking on the **Alerts** menu item. Clicking this shows a distribution of alerts by severity for this vault.
+Once an alert is fired for a vault, you can view the alert in the Azure portal by navigating to Backup center. On the **Overview** tab, you can see a summary of active alerts split by severity. Two kinds of alerts are displayed:
-![Viewing Alerts](media/backup-azure-monitoring-laworkspace/vault-azure-monitor-alerts.png)
+* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (for example, back up or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section.
+* **Global Alerts**: Alerts that are not tied to a specific datasource (for example, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section.
-Clicking on any of the numbers opens up a list of all alerts fired with the given severity. You can click any of the alerts to get more details about the alert, such as the affected datasource, alert description and recommended action, and so on.
+Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, security alerts include the scenarios of deleting backup data or disabling soft-delete for a vault (for the applicable workloads, as detailed in the above section). Configured alerts include backup failure and restore failure, since these alerts are fired only after registering the feature in the preview portal.
-![Alert Details](media/backup-azure-monitoring-laworkspace/azure-monitor-alert-details.png)
+![Screenshot for viewing alerts in Backup center](media/backup-azure-monitoring-laworkspace/backup-center-azure-monitor-alerts.png)
+
+Clicking on any of the numbers (or on the **Alerts** menu item) opens up a list of all active alerts fired with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, state, and so on. You can click any of the alerts to get more details about the alert, such as the affected datasource, alert description and recommended action, and so on.
+
+![Screenshot for viewing details of the alert](media/backup-azure-monitoring-laworkspace/backup-center-alert-details.png)
You can change the state of an alert to **Acknowledged** or **Closed** by clicking on **Change Alert State**.
-![Change Alert State](media/backup-azure-monitoring-laworkspace/azure-monitor-change-alert-state.png)
+![Screenshot for changing state of the alert](media/backup-azure-monitoring-laworkspace/backup-center-change-alert-state.png)
+> [!NOTE]
+> - In Backup center, only alerts for Azure-based workloads are displayed currently. To view alerts for on-premises resources, navigate to the Recovery Services vault and click the **Alerts** menu item.
+> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](/azure/backup/backup-azure-monitoring-built-in-monitor#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center.
For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).

### Configuring notifications for alerts

To configure notifications for Azure Monitor alerts, you must create an action rule. The following steps demonstrate how to create an action rule to send email notifications to a given email address. Similar instructions apply for routing these alerts to other notification channels, such as ITSM, webhook, logic app, and so on.
-1. Navigate to **Azure Monitor** in the Azure portal. Click the **Alerts** menu item and select **Manage actions**.
+1. Navigate to **Backup center** in the Azure portal. Click the **Alerts** menu item and select **Manage actions**.
- ![Manage Actions](media/backup-azure-monitoring-laworkspace/azure-monitor-manage-actions.png)
+ ![Screenshot for Manage Actions in Backup center](media/backup-azure-monitoring-laworkspace/backup-center-manage-actions.png)
2. Navigate to the **Action rules (preview)** tab and click **New action rule**.
- ![New Action Rule](media/backup-azure-monitoring-laworkspace/azure-monitor-create-action-rule.png)
+ ![Screenshot for creating a new action rule](media/backup-azure-monitoring-laworkspace/azure-monitor-create-action-rule.png)
3. Select the scope for which the action rule should be applied. You can apply the action rule for all resources within a subscription. Optionally, you can also apply filters on the alerts, for example, to only generate notifications for alerts of a certain severity.
- ![Action Rule Scope](media/backup-azure-monitoring-laworkspace/azure-monitor-action-rule-scope.png)
+ ![Screenshot for setting the action rule scope](media/backup-azure-monitoring-laworkspace/azure-monitor-action-rule-scope.png)
4. Create an action group. An action group is the destination to which the notification for an alert should be sent, for example, an email address.
- ![Create Action Group](media/backup-azure-monitoring-laworkspace/azure-monitor-create-action-group.png)
+ ![Screenshot for creating a new action group](media/backup-azure-monitoring-laworkspace/azure-monitor-create-action-group.png)
5. On the **Basics** tab, select the name of the action group and the subscription and resource group under which it should be created.
- ![Action Groups Basic](media/backup-azure-monitoring-laworkspace/azure-monitor-action-groups-basic.png)
+ ![Screenshot for basic properties of action group](media/backup-azure-monitoring-laworkspace/azure-monitor-action-groups-basic.png)
-6. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter the recipient email id.
+6. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter the recipient email ID.
- ![Action Groups Notification](media/backup-azure-monitoring-laworkspace/azure-monitor-email.png)
+ ![Screenshot for setting notification properties](media/backup-azure-monitoring-laworkspace/azure-monitor-email.png)
7. Click **Review+Create** and then **Create** to deploy the action group.
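Steps 4 through 7 (creating the action group with an email receiver) can also be sketched with the Azure CLI; the group name, resource group, and email address below are hypothetical, and the command is printed for illustration only, since it needs an authenticated session:

```shell
# Hypothetical values; replace with your own.
AG_NAME="BackupAlertsAG"
RG="myResourceGroup"
EMAIL="admin@contoso.com"

# Print the command that creates an action group with one email receiver.
echo "az monitor action-group create --name $AG_NAME --resource-group $RG --short-name bkpalert --action email admin $EMAIL"
```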
To configure notifications for Azure Monitor alerts, you must create an action r
## Next steps
-[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
+[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-encryption.md
Azure Backup can back up and restore Azure VMs using ADE with and without the Az
- You can back up and restore ADE encrypted VMs within the same subscription.
- Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported.
-- Azure Backup supports Cross Region Restore of encrypted Azure VMs to the Azure paired regions. For more information, see [support matrix](/azure/backup/backup-support-matrix#cross-region-restore).
+- Azure Backup supports Cross Region Restore of encrypted Azure VMs to the Azure paired regions. For more information, see [support matrix](./backup-support-matrix.md#cross-region-restore).
- ADE encrypted VMs can't be recovered at the file/folder level. You need to recover the entire VM to restore files and folders.
- When restoring a VM, you can't use the [replace existing VM](backup-azure-arm-restore-vms.md#restore-options) option for ADE encrypted VMs. This option is only supported for unencrypted managed disks.
Restore encrypted VMs as follows:
If you run into any issues, review these articles:

- [Common errors](backup-azure-vms-troubleshoot.md) when backing up and restoring encrypted Azure VMs.
-- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
+- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-support-matrix.md
Backup Center provides a single pane of glass for enterprises to [govern, monito
| Insights | View Backup Reports | <li> Azure Virtual Machine <br><br> <li> SQL in Azure Virtual Machine <br><br> <li> SAP HANA in Azure Virtual Machine <br><br> <li> Azure Files <br><br> <li> System Center Data Protection Manager <br><br> <li> Azure Backup Agent (MARS) <br><br> <li> Azure Backup Server (MABS) | Refer to [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios) |
| Governance | View and assign built-in and custom Azure Policies under category 'Backup' | N/A | N/A |
| Governance | View datasources not configured for backup | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server | N/A |
+| Monitoring | View Azure Monitor alerts at scale | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to the [Alerts](/azure/backup/backup-azure-monitoring-built-in-monitor#azure-monitor-alerts-for-azure-backup-preview) documentation |
+| Actions | Execute cross-region restore job from Backup center | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM | Refer to the [cross-region restore](/azure/backup/backup-create-rs-vault#set-cross-region-restore) documentation |
## Unsupported scenarios

| **Category** | **Scenario** |
|--|--|
-| Monitoring | View alerts at scale |
-| Actions | Configure vault settings at scale |
-| Actions | Execute cross-region restore job from Backup center |
+| Actions | Configuring vault settings at scale is currently not supported from Backup center |
+| Availability | Backup center is currently not available in national clouds |
## Next steps
backup Backup Create Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-create-rs-vault.md
Title: Create and configure Recovery Services vaults description: In this article, learn how to create and configure Recovery Services vaults that store the backups and recovery points. Learn how to use Cross Region Restore to restore in a secondary region. Previously updated : 06/01/2021 Last updated : 08/06/2021
The restore option **Cross Region Restore (CRR)** allows you to restore data in
It supports the following datasources:

-- Azure VMs (general availability)
-- SQL databases hosted on Azure VMs (preview)
-- SAP HANA databases hosted on Azure VMs (preview)
+- Azure VMs
+- SQL databases hosted on Azure VMs
+- SAP HANA databases hosted on Azure VMs
Using Cross Region Restore allows you to:

-- conduct drills when there's an audit or compliance requirement
-- restore the data if there's a disaster in the primary region
+- Conduct drills when there's an audit or compliance requirement
+- Restore the data if there's a disaster in the primary region
When restoring a VM, you can restore the VM or its disk. If you're restoring from SQL/SAP HANA databases hosted on Azure VMs, then you can restore databases or their files.
Since this process is at the storage level, there are [pricing implications](htt
>Before you begin: > >- Review the [support matrix](backup-support-matrix.md#cross-region-restore) for a list of supported managed types and regions.
->- The Cross Region Restore (CRR) feature for Azure VMs is now in general availability in all Azure public regions.
->- Cross Region Restore for SQL and SAP HANA databases is in preview in all Azure public regions.
+>- The Cross Region Restore (CRR) feature for Azure VMs, SQL, and SAP HANA databases is now generally available in all Azure public and sovereign regions. For information on region availability, see the [support matrix](backup-support-matrix.md#cross-region-restore).
>- CRR is a vault level opt-in feature for any GRS vault (turned off by default).
>- After opting-in, it might take up to 48 hours for the backup items to be available in secondary regions.
>- Currently, CRR for Azure VMs is supported for Azure Resource Manager Azure VMs and encrypted Azure VMs. Classic Azure VMs won't be supported. When additional management types support CRR, then they'll be **automatically** enrolled.
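The vault-level opt-in described in the note can be sketched with the Azure CLI; the vault and resource group names are placeholders, and the `--cross-region-restore-flag` parameter of `az backup vault backup-properties set` is assumed here as the switch for this setting. The command is printed rather than executed:

```shell
# Placeholder names; replace with your vault and resource group.
VAULT="myVault"
RG="myResourceGroup"

# Print the command that opts the GRS vault in to Cross Region Restore.
echo "az backup vault backup-properties set --name $VAULT --resource-group $RG --cross-region-restore-flag True"
```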
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-whats-new-mabs.md
For information about the UR2 issues fixes and the installation instructions, se
### Support for Azure Stack HCI
-With MABS v3 UR2, you can backup Virtual Machines on Azure Stack HCI. [Learn more](/azure/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines).
+With MABS v3 UR2, you can back up virtual machines on Azure Stack HCI. [Learn more](./back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md).
### Support for VMware 7.0
-With MABS v3 UR2, you can back up VMware 7.0 VMs. [Learn more](/azure/backup/backup-azure-backup-server-vmware).
+With MABS v3 UR2, you can back up VMware 7.0 VMs. [Learn more](./backup-azure-backup-server-vmware.md).
### Support for SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV)
-MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV). With CSV, the management of your SQL Server Instance is simplified. This helps you to manage the underlying storage from any node as there is an abstraction to which node owns the disk. [Learn more](/azure/backup/backup-azure-sql-mabs).
+MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV). With CSV, management of your SQL Server instance is simplified: because disk ownership is abstracted away from individual nodes, you can manage the underlying storage from any node. [Learn more](./backup-azure-sql-mabs.md).
### Optimized Volume Migration
MABS v3 UR2 supports optimized volume migration. The optimized volume migration
### Offline Backup using Azure Data Box
-MABS v3 UR2 supports Offline backup using Azure Data Box. With Microsoft Azure Data Box integration, you can overcome the challenge of moving terabytes of backup data from on-premises to Azure storage. Azure Data Box saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal. [Learn more](/azure/backup/offline-backup-azure-data-box-dpm-mabs).
+MABS v3 UR2 supports Offline backup using Azure Data Box. With Microsoft Azure Data Box integration, you can overcome the challenge of moving terabytes of backup data from on-premises to Azure storage. Azure Data Box saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal. [Learn more](./offline-backup-azure-data-box-dpm-mabs.md).
## What's new in MABS V3 UR1
With MABS V3 UR1, an additional layer of authentication is added for critical
MABS v3 UR1 improves the experience of offline backup with Azure Import/Export Service. For more information, see the updated steps [here](./backup-azure-backup-server-import-export.md).

>[!NOTE]
->From MABS v3 UR2, MABS can perform offline backup using Azure Data Box. [Learn more](/azure/backup/offline-backup-azure-data-box-dpm-mabs).
+>From MABS v3 UR2, MABS can perform offline backup using Azure Data Box. [Learn more](./offline-backup-azure-data-box-dpm-mabs.md).
### New cmdlet parameter
Learn how to prepare your server or begin protecting a workload:
* [Prepare Backup Server workloads](backup-azure-microsoft-azure-backup.md) * [Use Backup Server to back up a VMware server](backup-azure-backup-server-vmware.md) * [Use Backup Server to back up SQL Server](backup-azure-sql-mabs.md)
-* [Use Modern Backup Storage with Backup Server](backup-mabs-add-storage.md)
+* [Use Modern Backup Storage with Backup Server](backup-mabs-add-storage.md)
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 04/24/2019 Last updated : 07/28/2021 # What is the Azure Backup service?
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup
description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service.
Previously updated : 08/05/2021
Last updated : 08/06/2021
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported<br></br>While restoring an Azure VM through the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, the restore succeeds, but the VM can't be placed on the dedicated host. To achieve this, we recommend that you restore as disks. While [restoring as disks](backup-azure-arm-restore-vms.md#restore-disks) with the template, create a VM on the dedicated host, and then attach the disks.<br></br>This isn't applicable in the secondary region while performing a [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore).
Windows Storage Spaces configuration of standalone Azure VMs | Supported
[Azure VM Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM.
-Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public regions, except Germany West Central and India Central. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
+Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
## VM storage support
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix
description: Provides a summary of support settings and limitations for the Azure Backup service.
Previously updated : 06/11/2021
Last updated : 07/05/2021
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| MARS Agent/On premises | No | N/A |
| AFS (Azure file shares) | No | N/A |
+## Resource health
+
+The resource health check functions in the following conditions:
+
+| | |
+| --- | --- |
+| **Supported Resources** | Recovery Services vault |
+| **Supported Regions** | East US 2, East Asia, and France Central. |
+| **For unsupported regions** | The resource health status is shown as "Unknown". |
++
## Next steps
- [Review support matrix](backup-support-matrix-iaas.md) for Azure VM backup.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-support-matrix.md
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation.
- Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.
-- If there're [immutable blobs](../storage/blobs/storage-blob-immutable-storage.md#about-immutable-blob-storage) among those being restored, such immutable blobs won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.
+- If there are [immutable blobs](../storage/blobs/immutable-storage-overview.md#about-immutable-storage-for-blobs) among those being restored, they won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.
## Next steps
-[Overview of operational backup for Azure Blobs](blob-backup-overview.md)
+[Overview of operational backup for Azure Blobs](blob-backup-overview.md)
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
In the vault dashboard, go to **Manage** > **Backup Policies** and choose the po
Policy modification will impact all the associated Backup Items and trigger corresponding **configure protection** jobs.

>[!Note]
->Modification of policy will affect existing recovery points also. <br><br> For recovery points in archive that haven't stayed for a duration of 180 days in Archive Tier, deletion of those recovery points lead to early deletion cost. [Learn more](/azure/storage/blobs/storage-blob-storage-tiers#cool-and-archive-early-deletion).
+>Modification of policy will also affect existing recovery points. <br><br> For recovery points in archive that haven't stayed in the Archive Tier for 180 days, deleting those recovery points leads to an early deletion cost. [Learn more](../storage/blobs/storage-blob-storage-tiers.md#cool-and-archive-early-deletion).
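The early deletion window can be reasoned about with simple arithmetic. Below is a minimal Python sketch, assuming the charge is prorated over the remaining days of the 180-day Archive Tier minimum stay; the function name is hypothetical and not part of any Azure SDK:

```python
# Sketch: days still billed if an archived recovery point is deleted early.
# Assumes a prorated charge over the remainder of the 180-day minimum stay.
ARCHIVE_MIN_RETENTION_DAYS = 180

def early_deletion_days(days_in_archive: int) -> int:
    """Return the number of archive days that would still be billed on deletion."""
    return max(0, ARCHIVE_MIN_RETENTION_DAYS - days_in_archive)

# A recovery point deleted after 60 days in archive is billed for 120 more days.
print(early_deletion_days(60))   # 120
print(early_deletion_days(200))  # 0
```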
### Inconsistent policy
Use this option with caution. When triggered on a VM with an already healthy e
## Next steps
-For more information, see [Troubleshoot backups on a SQL Server database](backup-sql-server-azure-troubleshoot.md).
+For more information, see [Troubleshoot backups on a SQL Server database](backup-sql-server-azure-troubleshoot.md).
backup Manage Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-telemetry.md
Administrators can turn off this feature at any point of time. For more informat
## Next steps
-[Protect workloads](/azure/backup/back-up-hyper-v-virtual-machines-mabs)
+[Protect workloads](./back-up-hyper-v-virtual-machines-mabs.md)
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM
description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region.
Previously updated : 05/22/2019
Last updated : 08/06/2021

# Restore SQL Server databases on Azure VMs
If the total string size of files in a database is greater than a [particular li
As one of the restore options, Cross Region Restore (CRR) allows you to restore SQL databases hosted on Azure VMs in a secondary region, which is an Azure paired region.
-To onboard to the feature during the preview, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
+To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore)
If CRR is enabled, you can view the backup items in the secondary region.
### Restore in secondary region
-The secondary region restore user experience will be similar to the primary region restore user experience. When configuring details in the Restore Configuration pane to configure your restore, you'll be prompted to provide only secondary region parameters.
+The secondary region restore user experience is similar to the primary region restore experience. When you configure details in the Restore Configuration pane, you'll be prompted to provide only secondary region parameters. A vault should exist in the secondary region, and the SQL server should be registered to the vault in the secondary region.
![Where and how to restore](./media/backup-azure-sql-database/restore-secondary-region.png)
->[!NOTE]
->The virtual network in the secondary region needs to be assigned uniquely, and can't be used for any other VMs in that resource group.
- ![Trigger restore in progress notification](./media/backup-azure-arm-restore-vms/restorenotifications.png)
>[!NOTE]
->
>- After the restore is triggered and in the data transfer phase, the restore job can't be cancelled.
->- The Azure roles needed to restore in the secondary region are the same as those in the primary region.
+>- The role/access level required to perform a restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
### Monitoring secondary region restore jobs
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-db-restore.md
If you've selected **Full & Differential** as the restore type, do the following
As one of the restore options, Cross Region Restore (CRR) allows you to restore SAP HANA databases hosted on Azure VMs in a secondary region, which is an Azure paired region.
-To onboard to the feature during the preview, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
+To onboard to the feature, read the [Before You Begin section](./backup-create-rs-vault.md#set-cross-region-restore).
To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore)
If CRR is enabled, you can view the backup items in the secondary region.
### Restore in secondary region
-The secondary region restore user experience will be similar to the primary region restore user experience. When configuring details in the Restore Configuration pane to configure your restore, you'll be prompted to provide only secondary region parameters.
+The secondary region restore user experience is similar to the primary region restore experience. When you configure details in the Restore Configuration pane, you'll be prompted to provide only secondary region parameters. A vault should exist in the secondary region, and the SAP HANA server should be registered to the vault in the secondary region.
![Where and how to restore](./media/sap-hana-db-restore/restore-secondary-region.png)
->[!NOTE]
->The virtual network in the secondary region needs to be assigned uniquely, and can't be used for any other VMs in that resource group.
- ![Trigger restore in progress notification](./media/backup-azure-arm-restore-vms/restorenotifications.png)
>[!NOTE]
->
>* After the restore is triggered and in the data transfer phase, the restore job can't be cancelled.
->* The Azure roles needed to restore in the secondary region are the same as those in the primary region.
+>* The role/access level required to perform a restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
### Monitoring secondary region restore jobs
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
This article shows how to use the `Az.CloudService` PowerShell module to deploy
Use any of the following PowerShell cmdlets to deploy Cloud Services (extended support):
-** Quick Create Cloud Service using a Storage Account**
+1. [**Quick Create Cloud Service using a Storage Account**](#quick-create-cloud-service-using-a-storage-account)
-- This parameter set inputs the .cscfg, .cspkg and .csdef files as inputs along with the storage account.
-- The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
-- For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file.
+ - This parameter set inputs the .cscfg, .cspkg and .csdef files as inputs along with the storage account.
+ - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
+ - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file.
- **Quick Create Cloud Service using a SAS URI**
+2. [**Quick Create Cloud Service using a SAS URI**](#quick-create-cloud-service-using-a-sas-uri)
+ - This parameter set inputs the SAS URI of the .cspkg along with the local paths of .csdef and .cscfg files. There is no storage account input required.
+ - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
+ - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file.
-**Create Cloud Service with role, OS, network and extension profile and SAS URIs**
+3. [**Create Cloud Service with role, OS, network and extension profile and SAS URIs**](#create-cloud-service-using-profile-objects--sas-uris)
+ - This parameter set inputs the SAS URIs of the .cscfg and .cspkg files.
+ - The role, network, OS, and extension profile must be specified by the user and must match the values in the .cscfg and .csdef.
### Quick Create Cloud Service using a Storage Account
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-sdk.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
m_NrpClient.VirtualNetworks.CreateOrUpdate(resourceGroupName, "ContosoVNet", vnet);
```
-7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](/azure/virtual-network/public-ip-addresses#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
+7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/public-ip-addresses.md#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
If you are using a Static IP, you need to reference it as a Reserved IP in the Service Configuration (.cscfg) file:
```csharp
If you are using a Static IP you need to reference it as a Reserved IP in Servic
## Next steps
- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
-- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
Cloud Services (extended support) is a new [Azure Resource Manager](../azure-r
With this change, the Azure Service Manager based deployment model for Cloud Services will be renamed [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md). You will retain the ability to build and rapidly deploy your web and cloud applications and services. You will be able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs.
+> [!VIDEO https://youtu.be/H4K9xTUvNdw]
+
## What does not change
- You create the code, define the configurations, and deploy it to Azure. Azure sets up the compute environment, runs your code, then monitors and maintains it for you.
- Cloud Services (extended support) also supports two types of roles, [web and worker](../cloud-services/cloud-services-choose-me.md). There are no changes to the design, architecture, or components of web and worker roles.
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/overview.md
Read more to learn how to mount a [new or existing storage account](persisting-s
Learn more about features in [Bash in Cloud Shell](features.md) and [PowerShell in Cloud Shell](./features.md).
-## Complaince
+## Compliance
### Encryption at rest
-All Cloud Shell infrastructure is complaint with double encryption at rest by default. No action is required by users.
+All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is required by users.
## Pricing
The machine hosting Cloud Shell is free, with a pre-requisite of a mounted Azure
## Next steps
[Bash in Cloud Shell quickstart](quickstart.md) <br>
-[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
+[PowerShell in Cloud Shell quickstart](quickstart-powershell.md)
cognitive-services Howtoanalyzevideo_Face https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoAnalyzeVideo_Face.md
In this guide, you learned how to run near-real-time analysis on live video stre
Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our [UserVoice](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395743) site.

## Related Topics
-- [How to Detect Faces in Image](HowtoDetectFacesinImage.md)
+- [Call the detect API](HowtoDetectFacesinImage.md)
cognitive-services Howtodetectfacesinimage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md
Title: "Detect faces in an image - Face"
+ Title: "Call the detect API - Face"
-description: This guide demonstrates how to use face detection to extract attributes like gender, age, or pose from a given image.
+description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
- Previously updated : 02/23/2021
+ Last updated : 08/04/2021
-# Get face detection data
+# Call the detect API
-This guide demonstrates how to use face detection to extract attributes like gender, age, or pose from a given image. The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
-This guide shows you how to:
+The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
-- Get the locations and dimensions of faces in an image.
-- Get the locations of various face landmarks, such as pupils, nose, and mouth, in an image.
-- Guess the gender, age, emotion, and other attributes of a detected face.

## Setup
-This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face subscription key and endpoint URL. From here, you can use the face detection feature by calling either [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync), which is used in this guide, or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync). For instructions on how to set up this feature, follow one of the quickstarts.
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face subscription key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
-This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need. Each operation takes additional time to complete.
+## Submit data to the service
-## Get basic face data
-
-To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method with the _returnFaceId_ parameter set to **true**. This setting is the default.
+To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic1":::
-You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for their unique IDs and a rectangle that gives the pixel coordinates of the face.
+You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for their unique IDs and a rectangle that gives the pixel coordinates of the face. This way, you can tell which face ID maps to which face in the original image.
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic2":::
-For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, perhaps for a photo ID-type image, you can expand the rectangle in each direction.
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
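For illustration, such an expansion might look like the following Python sketch; the helper and its clamping behavior are assumptions for this example, not part of the Face SDK:

```python
def expand_rect(left, top, width, height, scale=1.5, img_w=10**6, img_h=10**6):
    """Grow a face rectangle around its center, clamped to the image bounds."""
    cx, cy = left + width / 2, top + height / 2
    new_w, new_h = width * scale, height * scale
    new_left = max(0, cx - new_w / 2)
    new_top = max(0, cy - new_h / 2)
    new_right = min(img_w, cx + new_w / 2)
    new_bottom = min(img_h, cy + new_h / 2)
    return new_left, new_top, new_right - new_left, new_bottom - new_top

# A 200x200 face rectangle grown by 50% inside a 640x480 image.
print(expand_rect(100, 100, 200, 200, scale=1.5, img_w=640, img_h=480))
# (50.0, 50.0, 300.0, 300.0)
```

Clamping against the image bounds matters near the edges of the frame, where a naive expansion would produce negative coordinates.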
+
+## Determine how to process the data
-## Get face landmarks
+This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need, because each added operation takes more time to complete.
-[Face landmarks](../concepts/face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to **DetectionModel.Detection01** and the _returnFaceLandmarks_ parameter to **true**.
+### Get face landmarks
+
+[Face landmarks](../concepts/face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="landmarks1":::
+### Get face attributes
+
+Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concepts/face-detection.md#attributes) conceptual section.
+
+To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
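The same options map to query parameters when calling the REST endpoint directly. A sketch of how such a request URL might be assembled; the endpoint host below is a placeholder, and you should confirm the exact parameter names against the REST reference:

```python
from urllib.parse import urlencode

# Placeholder endpoint; substitute your own Face resource endpoint.
endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"

# Detection options expressed as query-string parameters.
params = {
    "detectionModel": "detection_01",
    "returnFaceId": "true",
    "returnFaceLandmarks": "true",
    "returnFaceAttributes": "age,emotion,headPose",
}
url = f"{endpoint}/face/v1.0/detect?{urlencode(params)}"
print(url)
```

The image itself is then sent in the POST body, either as JSON containing a URL or as raw binary data.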
+++
+## Get results from the service
+
+### Face landmark results
+ The following code demonstrates how you might retrieve the locations of the nose and pupils: :::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="landmarks2":::
-You also can use face landmarks data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
+You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="direction":::
-When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so that the faces always appear upright.
+When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
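A rough Python sketch of that vector-and-angle calculation, using illustrative pixel coordinates rather than real landmark objects from the SDK:

```python
import math

def face_direction(mouth_center, pupil_left, pupil_right):
    """Vector from the mouth center to the midpoint of the pupils, plus its angle."""
    eyes_cx = (pupil_left[0] + pupil_right[0]) / 2
    eyes_cy = (pupil_left[1] + pupil_right[1]) / 2
    dx, dy = eyes_cx - mouth_center[0], eyes_cy - mouth_center[1]
    # Image y grows downward, so negate dy to get a conventional angle.
    angle_deg = math.degrees(math.atan2(-dy, dx))
    return (dx, dy), angle_deg

# An upright face: pupils directly above the mouth gives a 90-degree angle.
vec, angle = face_direction((50, 80), (40, 40), (60, 40))
print(vec, round(angle))  # (0.0, -40.0) 90
```

Rotating the image by `90 - angle` degrees would bring such a face upright before cropping.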
-## Get face attributes
-Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concepts/face-detection.md#attributes) conceptual section.
-
-To analyze face attributes, set the _detectionModel_ parameter to **DetectionModel.Detection01** and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
-
+### Face attribute results
-Then, get references to the returned data and do more operations according to your needs.
+The following code shows how you might retrieve the face attribute data that you requested in the original call.
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="attributes2":::
In this guide, you learned how to use the various functionalities of face detect
- [Tutorial: Add users to a Face service](../enrollment-overview.md)
-## Related topics
+## Related articles
- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
Next, see the relevant API reference documentation, explore a sample app that us
- [Snapshot reference documentation (.NET SDK)](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperations)
- [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample)
- [Add faces](how-to-add-faces.md)
-- [Detect faces in an image](HowtoDetectFacesinImage.md)
+- [Call the detect API](HowtoDetectFacesinImage.md)
cognitive-services How To Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-use-headpose.md
The [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-ser
### Explore the sample code
-You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [How to detect faces](HowtoDetectFacesinImage.md)), you will be able to query it later. The following method from the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
+You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](HowtoDetectFacesinImage.md)), you will be able to query it later. The following method from the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
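As a plain-geometry illustration of what that rotation involves (this is a hypothetical Python sketch, not the WPF sample's own code), the four corners of a rectangle can be rotated around its center by a given angle:

```python
import math

def rotate_rect_corners(left, top, width, height, angle_deg):
    """Rotate the four corners of a face rectangle around its center."""
    cx, cy = left + width / 2, top + height / 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = [(left, top), (left + width, top),
               (left + width, top + height), (left, top + height)]
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a) for x, y in corners]

# A 100x50 rectangle rotated 90 degrees around its center.
rotated = [(round(x), round(y)) for x, y in rotate_rect_corners(0, 0, 100, 50, 90)]
print(rotated)  # [(75, -25), (75, 75), (25, 75), (25, -25)]
```

In the WPF sample, an equivalent transform is applied so the drawn frame follows the head's roll angle.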
```csharp
/// <summary>
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-detection-model.md
If you aren't sure whether you should use the latest model, skip to the [Evaluat
You should be familiar with the concept of AI face detection. If you aren't, see the face detection conceptual guide or how-to guide:

* [Face detection concepts](../concepts/face-detection.md)
-* [How to detect faces in an image](HowtoDetectFacesinImage.md)
+* [Call the detect API](HowtoDetectFacesinImage.md)
## Detect faces with specified model
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-recognition-model.md
You should be familiar with the concepts of AI face detection and identification
* [Face detection concepts](../concepts/face-detection.md)
* [Face recognition concepts](../concepts/face-recognition.md)
-* [How to detect faces in an image](HowtoDetectFacesinImage.md)
+* [Call the detect API](HowtoDetectFacesinImage.md)
## Detect faces with specified model
cognitive-services Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-detection.md
If you're detecting faces from a video feed, you may be able to improve performa
Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
-* [Detect faces in an image](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md)
+* [Call the detect API](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
#### New features

-- **C++**: Simple Language Pattern matching with the Intent Recognizer now makes it easier to [implement simple intent recognition scenarios](/azure/cognitive-services/speech-service/get-started-intent-recognition?pivots=programming-language-cpp).
+- **C++**: Simple Language Pattern matching with the Intent Recognizer now makes it easier to [implement simple intent recognition scenarios](./get-started-intent-recognition.md?pivots=programming-language-cpp).
- **C++/C#/Java**: We added a new API, `GetActivationPhrasesAsync()` to the `VoiceProfileClient` class for receiving a list of valid activation phrases in speaker recognition enrollment phase for independent recognition scenarios.
- **Important**: The Speaker Recognition feature is in Preview. All voice profiles created in Preview will be discontinued 90 days after the Speaker Recognition feature is moved out of Preview into General Availability. At that point the Preview voice profiles will stop functioning.
-- **Python**: Added [support for continuous Language Identification (LID)](/azure/cognitive-services/speech-service/how-to-automatic-language-detection?pivots=programming-language-python) on the existing `SpeechRecognizer` and `TranslationRecognizer` objects.
+- **Python**: Added [support for continuous Language Identification (LID)](./how-to-automatic-language-detection.md?pivots=programming-language-python) on the existing `SpeechRecognizer` and `TranslationRecognizer` objects.
- **Python**: Added a [new Python object](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.sourcelanguagerecognizer?view=azure-python) named `SourceLanguageRecognizer` to do one-time or continuous LID (without recognition or translation).
- **JavaScript**: `getActivationPhrasesAsync` API added to `VoiceProfileClient` class for receiving a list of valid activation phrases in speaker recognition enrollment phase for independent recognition scenarios.
- **JavaScript**: `VoiceProfileClient`'s `enrollProfileAsync` API is now async awaitable. See [this independent identification code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/speaker-recognition/identification/independent-identification.js) for example usage.
More samples have been added and are constantly being updated. For the latest se
## Cognitive Services Speech SDK 0.2.12733: 2018-May release
-This release is the first public preview release of the Cognitive Services Speech SDK.
+This release is the first public preview release of the Cognitive Services Speech SDK.
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
The batch kit container is available for free on [GitHub](https://github.com/mic
Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download the latest batch kit container.

```bash
docker pull docker.io/batchkit/speech-batch-kit:latest
```
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK exposes many features from the Speech service, but not all of the
**Text-to-speech (TTS)** is available on the following platforms:
- - C++/Windows & Linux
- - C#/Windows & UWP & Unity
+ - C++/Windows & Linux & macOS
+ - C# (Framework & .NET Core)/Windows & UWP & Unity & Xamarin & Linux & macOS
- Java (Jre and Android)
+ - JavaScript (Browser and NodeJS)
- Python
- Swift
- Objective-C
+ - Go
- TTS REST API can be used in every other situation.

### Voice assistants
Custom text-to-speech, also known as Custom Voice is a set of online tools that
## Next steps

* [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)
+* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)
cognitive-services Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/diagnostic-logging.md
To enable diagnostic logging, you'll need somewhere to store your log data. This
> [!NOTE]
> * Additional configuration options are available. To learn more, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
-> * "Trace" in diagnostic logging is only available for [Custom question answering](/azure/cognitive-services/qnamaker/how-to/get-analytics-knowledge-base?tabs=v2).
+> * "Trace" in diagnostic logging is only available for [Custom question answering](./qnamaker/how-to/get-analytics-knowledge-base.md?tabs=v2).
## Enable diagnostic log collection
cognitive-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-install-run.md
keywords: on-premises, Docker, container, identify
Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
-In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containers—**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** (for Receipt, Business Card and ID Document containers you will also need the **Read** OCR container).
+In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containers—**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** (for Receipt, Business Card and ID Document containers you will also need the **Read** OCR container).
## Prerequisites
The following table lists the additional supporting container(s) for each Form R
| Container | Minimum | Recommended |
|---|---|---|
| Read 3.2 | 8 cores, 16-GB memory | 8 cores, 24-GB memory |
-| Layout 2.1-preview | 8 cores, 16-GB memory | 4 core, 8-GB memory |
+| Layout 2.1-preview | 8 cores, 16-GB memory | 8 cores, 24-GB memory |
| Business Card 2.1-preview | 2 cores, 4-GB memory | 4 cores, 4-GB memory |
| ID Document 2.1-preview | 1 core, 2-GB memory | 2 cores, 2-GB memory |
| Invoice 2.1-preview | 4 cores, 8-GB memory | 8 cores, 8-GB memory |
version: "3.9"
  azure-cognitive-service-receipt:
    container_name: azure-cognitive-service-receipt
- image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt
    environment:
      - EULA=accept
      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
In addition to the [prerequisites](#prerequisites) mentioned above, you will nee
```text
worker_processes 1;
-events {
- worker_connections 1024;
-}
+events { worker_connections 1024; }
http {
- sendfile on;
+ sendfile on;
+
+ upstream docker-api {
+ server azure-cognitive-service-custom-api:5000;
+ }
+
+ upstream docker-layout {
+ server azure-cognitive-service-layout:5000;
+ }
+
+ server {
+ listen 5000;
+
+ location = / {
+ proxy_pass http://docker-api/;
+
+ }
+
+ location /status {
+ proxy_pass http://docker-api/status;
+
+ }
- upstream docker - api {
- server azure - cognitive - service - custom - api: 5000;
- }
+ location /ready {
+ proxy_pass http://docker-api/ready;
- upstream docker - layout {
- server azure - cognitive - service - layout: 5000;
- }
+ }
- server {
- listen 5000;
+ location /swagger {
+ proxy_pass http://docker-api/swagger;
- location / formrecognizer / v2 .1 / custom / {
- proxy_pass http: //docker-api/formrecognizer/v2.1/custom/;
+ }
- }
+ location /formrecognizer/v2.1/custom/ {
+ proxy_pass http://docker-api/formrecognizer/v2.1/custom/;
- location / formrecognizer / v2 .1 / layout / {
- proxy_pass http: //docker-layout/formrecognizer/v2.1/layout/;
+ }
- }
+ location /formrecognizer/v2.1/layout/ {
+ proxy_pass http://docker-layout/formrecognizer/v2.1/layout/;
- }
+ }
+ }
}
```
That's it! In this article, you learned concepts and workflows for downloading,
## Next steps
-* [Form Recognizer container configuration settings](form-recognizer-container-configuration.md)
+* [Form Recognizer container configuration settings](form-recognizer-container-configuration.md)
* [Form Recognizer container image tags](../../containers/container-image-tags.md?tabs=current#form-recognizer)
* [Cognitive Services container support page and release notes](../../containers/container-image-tags.md?tabs=current#form-recognizer)
cognitive-services Model Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
Use the table below to find which model versions are supported by each hosted en
You can find details about the updates for these models in [What's new](../whats-new.md).
+## Extractive summarization
+
+Extractive summarization is available starting in `version 3.1-preview.1` by using the asynchronous `analyze` endpoint.
+
+The current model version is: `2021-08-01`
+
## Text Analytics for health

The [Text Analytics for Health](../how-tos/text-analytics-for-health.md) container uses separate model versioning from the above API endpoints. Please note that only one model version is available per container image.
cognitive-services Extractive Summarization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/extractive-summarization.md
+
+ Title: Summarize text with the Text Analytics extractive summarization API
+
+description: This article will show you how to summarize text with the Azure Cognitive Services Text Analytics extractive summarization API.
++++++ Last updated : 08/05/2021+++
+# How to: summarize text with Text Analytics (preview)
+
+> [!IMPORTANT]
+> Text Analytics extractive summarization is a preview capability provided “AS IS” and “WITH ALL FAULTS.” As such, Text Analytics extractive summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of Text Analytics extractive summarization.
+
+In general, there are two approaches to automatic text summarization: extractive and abstractive. The Text Analytics API provides extractive summarization starting in version `3.2-preview.1`.
+
+Extractive summarization is a feature in Azure Text Analytics that produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
+
+This feature is designed to shorten content that users consider too long to read. Extractive summarization condenses articles, papers, or documents to key sentences.
+
+The AI models used by the API are provided by the service; you just have to send content for analysis.
+
+## Extractive summarization and features
+
+The extractive summarization feature in Text Analytics uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
+
+Extractive summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
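The selection rule described above can be sketched client-side. The following is a minimal illustration with a hypothetical helper name, operating on sentence dicts shaped like the service's `rankScore` field; it is not part of any SDK:

```python
def top_ranked_sentences(sentences, count=3):
    """Pick the `count` sentences with the highest rank scores,
    mirroring how the service selects a summary of a given length.
    Each sentence dict is assumed to carry a rankScore in [0, 1]."""
    return sorted(sentences, key=lambda s: s["rankScore"], reverse=True)[:count]
```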
+
+There is another feature in Text Analytics, [key phrases extraction](./text-analytics-how-to-keyword-extraction.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
+* Key phrase extraction returns phrases, while extractive summarization returns sentences.
+* Extractive summarization returns sentences together with a rank score; the top-ranked sentences are returned per request.
+
+## Sending a REST API request
+
+> [!TIP]
+> You can also use the latest preview version of the client library to use extractive summarization. See the following samples on GitHub.
+> * [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_ExtractSummary.md)
+> * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/AnalyzeExtractiveSummarization.java)
+> * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py)
+
+### Preparation
+
+Unlike other Text Analytics features, extractive summarization is an asynchronous-only operation, accessed through the `/analyze` endpoint. JSON request data should follow the format outlined in [Asynchronous requests to the /analyze endpoint](./text-analytics-how-to-call-api.md?tabs=asynchronous#api-request-formats).
+
+Extractive summarization supports a wide range of languages for document input. For more information, see [Supported languages](../language-support.md).
+
+Document size must be under 125,000 characters per document. For the maximum number of documents permitted in a collection, see the [data limits](../concepts/data-limits.md?tabs=version-3) article. The collection is submitted in the body of the request.
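As a pre-flight check, the per-document limit can be enforced before submitting the collection. A minimal sketch (hypothetical helper, assuming plain character counting):

```python
MAX_DOC_CHARS = 125_000  # per-document limit noted above

def oversized_documents(texts):
    """Return the indexes of documents that exceed the per-document
    character limit, so they can be split or trimmed before sending."""
    return [i for i, text in enumerate(texts) if len(text) > MAX_DOC_CHARS]
```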
+
+### Structure the request
+
+Create a POST request. You can [use Postman](text-analytics-how-to-call-api.md) or the **API testing console** in the following reference link to quickly structure and send one.
+
+[Text summarization reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-preview-1/operations/Analyze)
+
+### Request endpoints
+
+Set the HTTPS endpoint for extractive summarization by using a Text Analytics resource on Azure. For example:
+
+> [!NOTE]
+> You can find your key and endpoint for your Text Analytics resource on the Azure portal. They will be located on the resource's **Key and endpoint** page.
+
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.2-preview.1/analyze`
+
+Set a request header to include your Text Analytics API key. In the request body, provide the JSON documents collection you prepared for this analysis.
+
+### Example request
+
+The following is an example of content you might submit for summarization, extracted from [a holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/) for demonstration purposes. The extractive summarization API can accept much longer input text. See [Data limits](../Concepts/data-limits.md) for more information on the Text Analytics API's data limits.
+
+One request can include multiple documents.
+
+Each document has the following parameters:
+* `id` to identify the document
+* `language` to indicate the source language of the document, with `en` being the default
+* `text` to attach the document text
+
+All documents in one request share the following parameters. These parameters can be specified in the `tasks` definition in the request.
+* `model-version` to specify which version of the model to use, with `latest` being the default. For more information, see [Model version](../concepts/model-versioning.md)
+* `sentenceCount` to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
+* `sortBy` to specify the order in which the extracted sentences are returned. The accepted values are `Offset` and `Rank`, with `Offset` being the default. The value `Offset` is the start position of a sentence in the original document.
+
+```json
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "language": "en",
+ "id": "1",
+ "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multisensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
+ }
+ ]
+ },
+ "tasks": {
+ "extractiveSummarizationTasks": [
+ {
+ "parameters": {
+ "model-version": "latest",
+ "sentenceCount": 3,
+ "sortBy": "Offset"
+ }
+ }
+ ]
+ }
+}
+```
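A request body like the one above can be assembled programmatically. The following is a minimal sketch; the helper name is hypothetical and the field names follow the example request rather than an official SDK:

```python
def build_summarization_request(documents, sentence_count=3, sort_by="Offset",
                                model_version="latest"):
    """Build an /analyze request body for extractive summarization.
    `documents` is a list of (language, text) pairs; ids are generated."""
    return {
        "analysisInput": {
            "documents": [
                {"id": str(i + 1), "language": language, "text": text}
                for i, (language, text) in enumerate(documents)
            ]
        },
        "tasks": {
            "extractiveSummarizationTasks": [
                {
                    "parameters": {
                        "model-version": model_version,
                        "sentenceCount": sentence_count,
                        "sortBy": sort_by,
                    }
                }
            ]
        },
    }
```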
+
+### Post the request
+
+The extractive summarization request is processed upon receipt. For information on the size and number of requests you can send per minute and second, see the [data limits](../overview.md#data-limits) section in the overview.
+
+The Text Analytics extractive summarization API is an asynchronous API, so there is no text in the response object. You need the value of the `operation-location` key in the response headers to make a GET request that checks the status of the job and retrieves the output. The following is an example of the value of the `operation-location` key in the response header of the POST request:
+
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.2-preview.1/analyze/jobs/<jobID>`
+
+To check the job status, make a GET request to the URL in the `operation-location` header of the POST response. The following states are used to reflect the status of a job: `NotStarted`, `running`, `succeeded`, `failed`, or `rejected`.
+
+If the job succeeded, the output of the API will be returned in the body of the GET request.
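The POST-then-poll pattern can be sketched as follows. `fetch_status` is a hypothetical callable standing in for a GET on the `operation-location` URL (not an SDK method):

```python
import time

def poll_until_done(fetch_status, interval=1.0, max_attempts=60):
    """Call `fetch_status` until the job reaches a terminal state.

    `fetch_status` should return the job's status string (for the real
    service, the `status` field parsed from the GET response body).
    """
    terminal = {"succeeded", "failed", "rejected"}
    for _ in range(max_attempts):
        status = fetch_status()
        if status.lower() in terminal:
            return status
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state in time")
```

In practice, `fetch_status` would wrap something like `requests.get(operation_location, headers={"Ocp-Apim-Subscription-Key": key}).json()["status"]`.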
++
+### View the results
+
+The following is an example of the response to a GET request. The output is available for retrieval until the `expirationDateTime` (24 hours from the time the job was created) has passed, after which the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/text-offsets.md) for more information.
+
+### Example response
+
+The extractive summarization feature returns a response like the following:
+
+```json
+{
+ "jobId": "be437134-a76b-4e45-829e-9b37dcd209bf",
+ "lastUpdateDateTime": "2021-06-11T05:43:37Z",
+ "createdDateTime": "2021-06-11T05:42:32Z",
+ "expirationDateTime": "2021-06-12T05:42:32Z",
+ "status": "succeeded",
+ "errors": [],
+ "results": {
+ "documents": [
+ {
+ "id": "1",
+ "sentences": [
+ {
+ "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding.",
+ "rankScore": 0.7673416137695312,
+ "Offset": 0,
+ "length": 160
+ },
+ {
+ "text": "In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z).",
+ "rankScore": 0.7644073963165283,
+ "Offset": 324,
+ "length": 192
+ },
+ {
+ "text": "At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better.",
+ "rankScore": 0.7623870968818665,
+ "Offset": 517,
+ "length": 203
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-08-01"
+ }
+}
+```
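When sentences are returned ranked, a client may still want them in document order. A minimal sketch using the `Offset` field from the response shape above (hypothetical helper name):

```python
def summary_in_document_order(sentences):
    """Join extracted sentences ordered by their position in the
    original document, using each sentence's Offset field."""
    ordered = sorted(sentences, key=lambda s: s["Offset"])
    return " ".join(s["text"] for s in ordered)
```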
+
+## Summary
+
+In this article, you learned concepts and workflow for extractive summarization using the Text Analytics extractive summarization API. You might want to use extractive summarization to:
+
+* Assist the processing of documents to improve efficiency.
+* Distill critical information from lengthy documents, reports, and other text forms.
+* Highlight key sentences in documents.
+* Quickly skim documents in a library.
+* Generate news feed content.
+
+## See also
+
+* [Text Analytics overview](../overview.md)
+* [What's new](../whats-new.md)
+* [Model versions](../concepts/model-versioning.md)
cognitive-services Text Analytics How To Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
See the table below to see which features can be used asynchronously. Note that
| Entity linking | ✔ | ✔* |
| Text Analytics for health (container) | ✔ | |
| Text Analytics for health (API) | | ✔ |
+| Text summarization | | ✔ |
`*` - Called asynchronously through the `/analyze` endpoint.
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
* Entity Linking
* Sentiment Analysis
* Opinion Mining
+* Text summarization
| Element | Valid values | Required? | Usage |
|---|---|---|---|
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
|`documents` | Includes the `id` and `text` fields below | Required | Contains information for each document being sent, and the raw text of the document. |
|`id` | String | Required | The IDs you provide are used to structure the output. |
|`text` | Unstructured raw text, up to 125,000 characters. | Required | Must be in the English language, which is the only language currently supported. |
-|`tasks` | Includes the following Text Analytics features: `entityRecognitionTasks`,`entityLinkingTasks`,`keyPhraseExtractionTasks`,`entityRecognitionPiiTasks` or `sentimentAnalysisTasks`. | Required | One or more of the Text Analytics features you want to use. Note that `entityRecognitionPiiTasks` has an optional `domain` parameter that can be set to `pii` or `phi` and the `pii-categories` for detection of selected entity types. If the `domain` parameter is unspecified, the system defaults to `pii`. Similarly `sentimentAnalysisTasks` has the `opinionMining` boolean parameter to include Opinion Mining results in the output for Sentiment Analysis. |
+|`tasks` | Includes the following Text Analytics features: `entityRecognitionTasks`,`entityLinkingTasks`,`keyPhraseExtractionTasks`,`entityRecognitionPiiTasks`, `extractiveSummarizationTasks` or `sentimentAnalysisTasks`. | Required | One or more of the Text Analytics features you want to use. Note that `entityRecognitionPiiTasks` has an optional `domain` parameter that can be set to `pii` or `phi` and the `pii-categories` for detection of selected entity types. If the `domain` parameter is unspecified, the system defaults to `pii`. Similarly `sentimentAnalysisTasks` has the `opinionMining` boolean parameter to include Opinion Mining results in the output for Sentiment Analysis. |
|`parameters` | Includes the `model-version` and `stringIndexType` fields below | Required | This field is included within the above feature tasks that you choose. It contains information about the model version that you want to use and the index type. |
|`model-version` | String | Required | Specify which version of the model you want to use. |
|`stringIndexType` | String | Required | Specify the text decoder that matches your programming environment. Types supported are `textElement_v8` (default), `unicodeCodePoint`, `utf16CodeUnit`. Please see the [Text offsets article](../concepts/text-offsets.md#offsets-in-api-version-31) for more information. |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
If you have content expressed in a less frequently used language, you can try La
|Zulu|`zu`|✓|2021-01-05|
+#### [Text summarization](#tab/summarization)
+
+| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+|:|:-:|:-:|:--:|:--:|
+| Chinese-Simplified | `zh-hans` | ✓ | 2021-08-01 | `zh` also accepted |
+| English | `en` | ✓ | 2021-08-01 | |
+| French | `fr` | ✓ | 2021-08-01 | |
+| German | `de` | ✓ | 2021-08-01 | |
+| Italian | `it` | ✓ | 2021-08-01 | |
+| Japanese | `ja` | ✓ | 2021-08-01 | |
+| Korean | `ko` | ✓ | 2021-08-01 | |
+| Spanish | `es` | ✓ | 2021-08-01 | |
+| Portuguese (Brazil) | `pt-BR` | ✓ | 2021-08-01 | |
+| Portuguese (Portugal) | `pt-PT` | ✓ | 2021-08-01 | `pt` also accepted |
+
## See also
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/overview.md
Language detection can [detect the language an input text is written in](how-tos
Named Entity Recognition (NER) can [Identify and categorize entities](how-tos/text-analytics-how-to-entity-linking.md) in your text as people, places, organizations, and quantities. Well-known entities are also recognized and linked to more information on the web.
+## Text summarization
+
+[Summarization](how-tos/extractive-summarization.md) produces a summary of text by extracting sentences that collectively represent the most important or relevant information within the original content. This feature condenses articles, papers, or documents down to key sentences.
+
## Text Analytics for health

Text Analytics for health is a feature of the Text Analytics API service that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
cognitive-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md
Previously updated : 07/08/2021 Last updated : 08/05/2021 keywords: text mining, sentiment analysis, text analytics
Use this article to get started with the Text Analytics client library and REST
> * The latest stable version of the Text Analytics API is `3.1`.
> * Be sure to only follow the instructions for the version you are using.
> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below.
-> * If you want to use Text Analytics for health or Asynchronous operations, see the examples on Github for [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics), [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) or [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
+> * You can also use the latest preview version of the client library to use extractive summarization. See the following samples [on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_ExtractSummary.md).
+ [!INCLUDE [C# quickstart](../includes/quickstarts/csharp-sdk.md)]
Use this article to get started with the Text Analytics client library and REST
> [!IMPORTANT]
> * The latest stable version of the Text Analytics API is `3.1`.
> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below.
-If you want to use Text Analytics for health or Asynchronous operations, see the examples on Github for [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics), [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) or [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
+> * You can also use the latest preview version of the client library to use extractive summarization. See the following sample [on GitHub](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/AnalyzeExtractiveSummarization.java).
+ [!INCLUDE [Java quickstart](../includes/quickstarts/java-sdk.md)]
If you want to use Text Analytics for health or Asynchronous operations, see the
> * The latest stable version of the Text Analytics API is `3.1`.
> * Be sure to only follow the instructions for the version you are using.
> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below.
-If you want to use Text Analytics for health or Asynchronous operations, see the examples on Github for [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics), [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) or [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
+> * You can also use the latest preview version of the client library to use extractive summarization. See the following sample [on GitHub](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py).
+ [!INCLUDE [Python quickstart](../includes/quickstarts/python-sdk.md)]
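As a rough sketch of what the extractive summarization preview looks like from Python (assuming a preview build of the `azure-ai-textanalytics` client library; the endpoint, key, and document text are placeholders, and this requires a live Text Analytics resource to run):

```python
# Sketch only: requires a preview version of azure-ai-textanalytics and a
# provisioned Text Analytics resource. Endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, ExtractSummaryAction

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "Extractive summarization returns the sentences that collectively "
    "best represent the content of a document."
]

# Submit an analyze-actions job that includes an extractive summarization action
poller = client.begin_analyze_actions(
    documents,
    actions=[ExtractSummaryAction(max_sentence_count=3)],
)

for doc_results in poller.result():
    for result in doc_results:
        if not result.is_error:
            print(" ".join(sentence.text for sentence in result.sentences))
```

The language-specific samples linked above remain the authoritative versions; this is only an outline of the call shape.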
cognitive-services Text Analytics User Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-user-scenarios.md
Use Key Phrase Extraction and Entity Recognition to process support requests sub
## Monitor your product's social media feeds
-Monitor user product feedback on your product's twitter or Facebook page. Use the data to analyze customer sentiment toward new products launches, extract key phrases about features and feature requests, or address customer complaints as they happen. See the example [Microsoft Power Automate template](https://flow.microsoft.com/galleries/public/templates/2680d2227d074c4d901e36c66e68f6f9/run-sentiment-analysis-on-tweets-and-push-results-to-a-power-bi-dataset/).
+Monitor user feedback on your product's Twitter or Facebook page. Use the data to analyze customer sentiment toward new product launches, extract key phrases about features and feature requests, or address customer complaints as they happen. See the example [Microsoft Power Automate template](https://flow.microsoft.com/galleries/public/templates/2680d2227d074c4d901e36c66e68f6f9/run-sentiment-analysis-on-tweets-and-push-results-to-a-power-bi-dataset/).
![An image describing how to monitor your product and company feedback on social media using key phrase extraction](media/use-cases/social-feed.svg)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 07/12/2021 Last updated : 08/09/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## August 2021
+
+* Version `3.2-preview.1` is now available. It includes a public preview of [extractive summarization](how-tos/extractive-summarization.md).
+* [Asynchronous operation](how-tos/text-analytics-how-to-call-api.md?tabs=asynchronous) is now available in the Azure Government and Azure China regions.
+* New preview versions of the client library, with support for extractive summarization. See the following samples:
+ * [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_ExtractSummary.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/AnalyzeExtractiveSummarization.java)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py)
+
## July 2021

### GA release updates
communication-services Call Logs Azure Monitor Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-logs-azure-monitor-access.md
To access telemetry for Azure Communication Services Voice & Video resources, follow these steps.

## Enable logging
-1. First, you will need to create a storage account for your logs. Go to [Create a storage account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal) for instructions to complete this step. See also [Storage account overview](https://docs.microsoft.com/azure/storage/common/storage-account-overview) for more information on the types and features of different storage options. If you already have an Azure storage account go to Step 2.
+1. First, you will need to create a storage account for your logs. Go to [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) for instructions to complete this step. See also [Storage account overview](../../storage/common/storage-account-overview.md) for more information on the types and features of different storage options. If you already have an Azure storage account go to Step 2.
-1. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](https://docs.microsoft.com/azure/communication-services/concepts/logging-and-diagnostics#enable-diagnostic-logs-in-your-resource). You will select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
+1. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](./logging-and-diagnostics.md#enable-diagnostic-logs-in-your-resource). You will select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
1. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu below. The "Send to Analytics workspace" option isn't currently available for Private Preview of this feature, but it will be made available when this feature is made public.
From there, you can download all logs or individual logs.
## Next Steps
-- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
+- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers which are currently
| Platform | Chrome | Safari | Edge (Chromium) | Notes |
| -- | -- | -- | -- | -- |
| Android | ✔️ | ❌ | ❌ | Outgoing Screen Sharing is not supported. |
-| iOS | ❌ | ✔️ | ❌ | [An iOS app on Safari can't enumerate/select mic and speaker devices](https://docs.microsoft.com/azure/communication-services/concepts/known-issues#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this is a limitation of the OS, and there's always only one device, OS controls default device selection. Outgoing screen sharing is not supported. |
+| iOS | ❌ | ✔️ | ❌ | [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this is a limitation of the OS, and there's always only one device, OS controls default device selection. Outgoing screen sharing is not supported. |
| macOS | ✔️ | ✔️ | ❌ | Safari 14+/macOS 11+ needed for outgoing video support. |
| Windows | ✔️ | ❌ | ✔️ | |
| Ubuntu/Linux | ✔️ | ❌ | ❌ | |
For example, this iframe allows both camera and microphone access:
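The iframe snippet the sentence above refers to is missing from this excerpt; reconstructed here (the `src` URL is a placeholder), permission is granted through the iframe's `allow` attribute:

```html
<!-- Placeholder src; the allow attribute grants camera and microphone access -->
<iframe allow="camera; microphone" src="https://contoso.com/calling-sample"></iframe>
```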
For more information, see the following articles:
- Familiarize yourself with general [call flows](../call-flows.md)
- Learn about [call types](../voice-video-calling/about-call-types.md)
-- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
+- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/quick-create-identity.md
In the [Azure portal](https://portal.azure.com), navigate to the **Identities &
Choose the scope of the access tokens. You can select none, one, or multiple. Click **Generate**.
-You'll see an identity and corresponding user access token generated. You can copy these strings and use them in the [sample apps](https://docs.microsoft.com/azure/communication-services/samples/overview) and other testing scenarios.
+You'll see an identity and corresponding user access token generated. You can copy these strings and use them in the [sample apps](../../samples/overview.md) and other testing scenarios.
## Next steps
You'll see an identity and corresponding user access token generated. You can co
You may also want to:
- [Learn about authentication](../../concepts/authentication.md)
+ - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md)
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-sftp-ssh.md
ms.suite: integration
Previously updated : 04/19/2021 Last updated : 08/05/2021 tags: connectors
The following list describes key SFTP-SSH capabilities that differ from the SFTP
* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
- The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, algorithms, and fingerprints:
+ The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, encryption algorithms, fingerprints, and key exchange algorithms:
  * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
  * **Encryption algorithms**: DES-EDE3-CBC, DES-EDE3-CFB, DES-CBC, AES-128-CBC, AES-192-CBC, and AES-256-CBC
  * **Fingerprint**: MD5
+ * **Key exchange algorithms**: curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group16-sha512, diffie-hellman-group14-sha256, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1
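The PuTTY-to-OpenSSH conversion mentioned above can also be done from the command line; a sketch, assuming the `puttygen` CLI is installed and `my-key.ppk` is a placeholder for your existing private key file:

```console
# Convert a PuTTY-format private key to OpenSSH (.pem) format
puttygen my-key.ppk -O private-openssh -o my-key.pem
```

The linked [convert the key](#convert-to-openssh) section remains the authoritative procedure.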
After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later this article.
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
If [collection of resource logs](monitor-service.md) is enabled in the registry,
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Logs for diagnostic evaluation and auditing](./monitor-service.md)
* [Container registry FAQ](container-registry-faq.yml)
* [Azure Security Baseline for Azure Container Registry](security-baseline.md)
* [Best practices for Azure Container Registry](container-registry-best-practices.md)
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-login.md
If [collection of resource logs](monitor-service.md) is enabled in the registry,
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Logs for diagnostic evaluation and auditing](./monitor-service.md)
* [Container registry FAQ](container-registry-faq.yml)
* [Best practices for Azure Container Registry](container-registry-best-practices.md)
If you don't resolve your problem here, see the following options.
* [Troubleshoot registry performance](container-registry-troubleshoot-performance.md)
* [Community support](https://azure.microsoft.com/support/community/) options
* [Microsoft Q&A](/answers/products/)
-* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
+* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
container-registry Container Registry Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-performance.md
If [collection of resource logs](monitor-service.md) is enabled in the registry,
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Logs for diagnostic evaluation and auditing](./monitor-service.md)
* [Container registry FAQ](container-registry-faq.yml)
* [Best practices for Azure Container Registry](container-registry-best-practices.md)
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/cassandra-introduction.md
Last updated 11/25/2020
# Introduction to the Azure Cosmos DB Cassandra API
[!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-Azure Cosmos DB Cassandra API can be used as the data store for apps written for [Apache Cassandra](https://cassandra.apache.org). This means that by using existing [Apache drivers](https://cassandra.apache.org/doc/latest/getting_started/drivers.html?highlight=driver) compliant with CQLv4, your existing Cassandra application can now communicate with the Azure Cosmos DB Cassandra API. In many cases, you can switch from using Apache Cassandra to using Azure Cosmos DB's Cassandra API, by just changing a connection string.
+Azure Cosmos DB Cassandra API can be used as the data store for apps written for [Apache Cassandra](https://cassandra.apache.org). This means that by using existing [Apache drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver) compliant with CQLv4, your existing Cassandra application can now communicate with the Azure Cosmos DB Cassandra API. In many cases, you can switch from using Apache Cassandra to using Azure Cosmos DB's Cassandra API, by just changing a connection string.
The Cassandra API enables you to interact with data stored in Azure Cosmos DB using the Cassandra Query Language (CQL) , Cassandra-based tools (like cqlsh) and Cassandra client drivers that you're already familiar with.
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/cassandra-support.md
Last updated 09/14/2020
# Apache Cassandra features supported by Azure Cosmos DB Cassandra API
[!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB Cassandra API through the CQL Binary Protocol v4 [wire protocol](https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec) compliant open-source Cassandra client [drivers](https://cassandra.apache.org/doc/latest/getting_started/drivers.html?highlight=driver).
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can communicate with the Azure Cosmos DB Cassandra API through the CQL Binary Protocol v4 [wire protocol](https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec) compliant open-source Cassandra client [drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver).
By using the Azure Cosmos DB Cassandra API, you can enjoy the benefits of the Apache Cassandra APIs as well as the enterprise capabilities that Azure Cosmos DB provides. The enterprise capabilities include [global distribution](../distribute-data-globally.md), [automatic scale out partitioning](cassandra-partitioning.md), availability and latency guarantees, encryption at rest, backups, and much more.
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
> [!div class="op_single_selector"]
> * [SQL (Core) API](../cosmos-db-advanced-queries.md)
-> * [MongoDB API](../queries-mongo.md)
+> * [MongoDB API](../mongodb/diagnostic-queries-mongodb.md)
> * [Cassandra API](diagnostic-queries-cassandra.md)
> * [Gremlin API](../queries-gremlin.md)
cosmos-db Find Request Unit Charge Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/find-request-unit-charge-cassandra.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and their considerations](../request-units.md) article.
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../find-request-unit-charge-mongodb.md), [SQL API](../find-request-unit-charge.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [Table API](../table/find-request-unit-charge.md) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md), [SQL API](../find-request-unit-charge.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [Table API](../table/find-request-unit-charge.md) articles to find the RU/s charge.
When you perform operations against the Azure Cosmos DB Cassandra API, the RU charge is returned in the incoming payload as a field named `RequestCharge`. You have multiple options for retrieving the RU charge.
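To make the `RequestCharge` payload field concrete, here is a minimal, self-contained sketch of decoding it. The `incoming_payload` dictionary is a stand-in for what a Cassandra driver exposes as the custom payload, and the assumption (mirroring what the .NET sample decodes with `BitConverter.ToDouble`) is that the value arrives as an 8-byte little-endian double:

```python
import struct

def request_charge(payload):
    """Decode the RU charge from a custom-payload entry.

    Assumption: the "RequestCharge" entry is an 8-byte little-endian double.
    """
    (charge,) = struct.unpack("<d", payload["RequestCharge"])
    return charge

# Stand-in for the payload a driver would return with a response
incoming_payload = {"RequestCharge": struct.pack("<d", 2.82)}
print(request_charge(incoming_payload))  # 2.82
```

In a real application the payload comes back attached to the query response (for example, the .NET driver's `RowSet.Info.IncomingPayload`); the decoding step is the same.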
cosmos-db How To Create Container Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/how-to-create-container-cassandra.md
This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../how-to-create-container-mongodb.md), [Gremlin API](../how-to-create-container-gremlin.md), [Table API](../table/how-to-create-container.md), and [SQL API](../how-to-create-container.md) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](../mongodb/how-to-create-container-mongodb.md), [Gremlin API](../how-to-create-container-gremlin.md), [Table API](../table/how-to-create-container.md), and [SQL API](../how-to-create-container.md) articles to create the container.
> [!NOTE]
> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db How To Provision Throughput Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/how-to-provision-throughput-cassandra.md
This article explains how to provision throughput in Azure Cosmos DB Cassandra API. You can provision standard(manual) or autoscale throughput on a container, or a database and share it among the containers within the database. You can provision throughput using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-If you are using a different API, see [SQL API](../how-to-provision-container-throughput.md), [API for MongoDB](../how-to-provision-throughput-mongodb.md), [Gremlin API](../how-to-provision-throughput-gremlin.md) articles to provision the throughput.
+If you are using a different API, see [SQL API](../how-to-provision-container-throughput.md), [API for MongoDB](../mongodb/how-to-provision-throughput-mongodb.md), [Gremlin API](../how-to-provision-throughput-gremlin.md) articles to provision the throughput.
## <a id="portal-cassandra"></a> Azure portal

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](../create-mongodb-dotnet.md#create-a-database-account), or select an existing Azure Cosmos account.
+1. [Create a new Azure Cosmos account](../mongodb/create-mongodb-dotnet.md#create-a-database-account), or select an existing Azure Cosmos account.
1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
cosmos-db Kafka Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/kafka-connect.md
# Ingest data from Apache Kafka into Azure Cosmos DB Cassandra API using Kafka Connect
[!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-Existing Cassandra applications can easily work with the [Azure Cosmos DB Cassandra API](cassandra-introduction.md) because of its [CQLv4 driver compatibility](https://cassandra.apache.org/doc/latest/getting_started/drivers.html?highlight=driver). You leverage this capability to integrate with streaming platforms such as [Apache Kafka](https://kafka.apache.org/) and bring data into Azure Cosmos DB.
+Existing Cassandra applications can easily work with the [Azure Cosmos DB Cassandra API](cassandra-introduction.md) because of its [CQLv4 driver compatibility](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html?highlight=driver). You leverage this capability to integrate with streaming platforms such as [Apache Kafka](https://kafka.apache.org/) and bring data into Azure Cosmos DB.
Data in Apache Kafka (topics) is only useful when consumed by other applications or ingested into other systems. It's possible to build a solution using the [Kafka Producer/Consumer](https://kafka.apache.org/documentation/#api) APIs [using a language and client SDK of your choice](https://cwiki.apache.org/confluence/display/KAFKA/Clients). Kafka Connect provides an alternative solution. It's a platform to stream data between Apache Kafka and other systems in a scalable and reliable manner. Because Kafka Connect supports off-the-shelf connectors, including Cassandra, you don't need to write custom code to integrate Kafka with Azure Cosmos DB Cassandra API.
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/migrate-data.md
You can move data from existing Cassandra workloads to Azure Cosmos DB by using
### Migrate data by using the cqlsh COPY command
-Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/tools/cqlsh.html#cqlsh) to copy local data to the Cassandra API account in Azure Cosmos DB.
+Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc) to copy local data to the Cassandra API account in Azure Cosmos DB.
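For instance, a COPY invocation looks like the following (keyspace, table, and file names are placeholders; the linked cqlsh documentation covers the full option list):

```sql
-- Copy rows from a local CSV file into a Cassandra API table
COPY mykeyspace.mytable FROM 'data.csv' WITH HEADER = TRUE AND DELIMITER = ',';
```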
1. Get your Cassandra API account's connection string information:
In this tutorial, you've learned how to migrate your data to a Cassandra API acc
> [!div class="nextstepaction"]
> [Tunable data consistency levels in Azure Cosmos DB](../consistency-levels.md)
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium Post
### Set up PostgreSQL database

If you haven't already, set up a PostgreSQL database. This could be an existing on-premises database, or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres). To start a container:
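The container-start command itself is missing from this excerpt; a reconstructed sketch (container name, password, and port mapping are placeholders):

```console
# Run a disposable PostgreSQL instance in Docker
docker run --rm -d --name postgres -e POSTGRES_PASSWORD=<password> -p 5432:5432 postgres
```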
cosmos-db Templates Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/templates-samples.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, keyspaces, and tables.
-This article has examples for Cassandra API accounts only, to find examples for other API type accounts see: use Azure Resource Manager templates with Azure Cosmos DB's API for [SQL](../templates-samples-sql.md), [Gremlin](../templates-samples-gremlin.md), [MongoDB](../templates-samples-mongodb.md), [Table](../table/resource-manager-templates.md) articles.
+This article has examples for Cassandra API accounts only. To find examples for other API types, see how to use Azure Resource Manager templates with Azure Cosmos DB's API for [SQL](../templates-samples-sql.md), [Gremlin](../templates-samples-gremlin.md), [MongoDB](../mongodb/resource-manager-template-samples.md), and [Table](../table/resource-manager-templates.md).
> [!IMPORTANT] >
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/troubleshoot-common-issues.md
LoadBalancingPolicy loadBalancingPolicy = new CosmosLoadBalancingPolicy.Builder(
When you run `select count(*) from table` or similar for a large number of rows, the server times out.
-If you're using a local CQLSH client, change the `--connect-timeout` or `--request-timeout` settings. See [cqlsh: the CQL shell](https://cassandra.apache.org/doc/latest/tools/cqlsh.html).
+If you're using a local CQLSH client, change the `--connect-timeout` or `--request-timeout` settings. See [cqlsh: the CQL shell](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html).
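A sketch of a cqlsh invocation with both timeouts raised (host, account name, and password are placeholders; 10350 is the Cassandra API port):

```console
# Connect with longer connect/request timeouts to avoid count(*) timeouts
cqlsh <account-name>.cassandra.cosmos.azure.com 10350 -u <account-name> -p <password> --ssl --connect-timeout=60 --request-timeout=120
```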
If the count still times out, you can get a count of records from the Azure Cosmos DB back-end telemetry by going to the metrics tab in the Azure portal, selecting the metric `document count`, and then adding a filter for the database or collection (the analog of the table in Azure Cosmos DB). You can then hover over the resulting graph for the point in time at which you want a count of the number of records.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed.md
Change feed is available for each logical partition key within the container, an
## Change feed in APIs for Cassandra and MongoDB
-Change feed functionality is surfaced as change stream in MongoDB API and Query with predicate in Cassandra API. To learn more about the implementation details for MongoDB API, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb-change-streams.md).
+Change feed functionality is surfaced as change stream in MongoDB API and Query with predicate in Cassandra API. To learn more about the implementation details for MongoDB API, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb/change-streams.md).
Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB API for Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB API for Cassandra](cassandr).
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/choose-api.md
If you are migrating from other databases such as Oracle, DynamoDB, HBase etc. a
This API stores data in a document structure, via BSON format. It is compatible with the MongoDB wire protocol; however, it does not use any native MongoDB related code. This API is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB's features such as scaling, high availability, geo-replication, multiple write locations, automatic and transparent shard management, transparent replication between operational and analytical stores, and more.
-You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb-compass.md), and [Robo3T](mongodb-robomongo.md), can run queries and work with data as they do with native MongoDB.
+You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB.
-API for MongoDB is compatible with the 4.0, 3.6, and 3.2 MongoDB server versions. Server version 4.0 is recommended as it offers the best performance and full feature support. To learn more, see [API for MongoDB](mongodb-introduction.md) article.
+API for MongoDB is compatible with the 4.0, 3.6, and 3.2 MongoDB server versions. Server version 4.0 is recommended as it offers the best performance and full feature support. To learn more, see [API for MongoDB](mongodb/mongodb-introduction.md) article.
## Cassandra API
Applications written for Azure Table storage can migrate to the Table API with l
## Next steps

* [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB's API for MongoDB](create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
* [Get started with Azure Cosmos DB Cassandra API](cassandr)
* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
* [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
cosmos-db Cli Samples Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples-mongodb.md
- Title: Azure CLI Samples for Azure Cosmos DB API for MongoDB
-description: Azure CLI Samples for Azure Cosmos DB API for MongoDB
---- Previously updated : 10/13/2020----
-# Azure CLI samples for Azure Cosmos DB API for MongoDB
-
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-
-## Common Samples
-
-These samples apply to all Azure Cosmos DB APIs
-
-|Task | Description |
-|||
-| [Add or failover regions](scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-|||
-
-## MongoDB API Samples
-
-|Task | Description |
-|||
-| [Create an Azure Cosmos account, database and collection](scripts/cli/mongodb/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and collection for MongoDB API. |
-| [Create an Azure Cosmos account, database with autoscale and two collections with shared throughput](scripts/cli/mongodb/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for MongoDB API. |
-| [Throughput operations](scripts/cli/mongodb/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and collection.|
-| [Lock resources from deletion](scripts/cli/mongodb/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
-|||
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples.md
The following table includes links to sample Azure CLI scripts for Azure Cosmos
These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cassandr)
+For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cassandr)
## Common Samples
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
Cosmos DB supports querying items using [SQL](./sql-query-getting-started.md). T
## MongoDB API-specific limits
-Cosmos DB supports the MongoDB wire protocol for applications written against MongoDB. You can find the supported commands and protocol versions at [Supported MongoDB features and syntax](mongodb-feature-support.md).
+Cosmos DB supports the MongoDB wire protocol for applications written against MongoDB. You can find the supported commands and protocol versions at [Supported MongoDB features and syntax](mongodb/feature-support-32.md).
The following table lists the limits specific to MongoDB feature support. Other service limits mentioned for the SQL (core) API also apply to the MongoDB API.
Read more about Cosmos DB's core concepts [global distribution](distribute-data-
Get started with Azure Cosmos DB with one of our quickstarts:

* [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB's API for MongoDB](create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
* [Get started with Azure Cosmos DB Cassandra API](cassandr)
* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
* [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
cosmos-db Connect Mongodb Account Experimental https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/connect-mongodb-account-experimental.md
- Title: Connect a MongoDB application to Azure Cosmos DB
-description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
----- Previously updated : 02/08/2021---
-# Connect a MongoDB application to Azure Cosmos DB
-
-Learn how to connect your MongoDB app to an Azure Cosmos DB by using a MongoDB connection string. You can then use an Azure Cosmos database as the data store for your MongoDB app. In addition to the tutorial below, you can explore MongoDB [samples](mongodb-samples.md) with Azure Cosmos DB's API for MongoDB.
-
-This tutorial provides two ways to retrieve connection string information:
-
-- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers
-- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers
-
-## Prerequisites
-
-- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.
-- A Cosmos account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).
-
-## Get the MongoDB connection string by using the quick start
-
-1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Quick start**.
-4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry--we continuously document more connection code snippets. Please comment below on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
-5. Copy and paste the code snippet into your MongoDB app.
-
- :::image type="content" source="./media/connect-mongodb-account/QuickStartBlade.png" alt-text="Quick start blade":::
-
-## Get the MongoDB connection string to customize
-
-1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Connection String**.
-4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
-
- :::image type="content" source="./media/connect-mongodb-account/ConnectionStringBlade.png" alt-text="Connection String blade" lightbox= "./media/connect-mongodb-account/ConnectionStringBlade.png" :::
-
-## Connection string requirements
-
-> [!Important]
-> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
-
-Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. So, the connection string format is:
-
-`mongodb://username:password@host:port/[database]?ssl=true`
-
-The values of this string are available in the **Connection String** blade shown earlier:
-
-* Username (required): Cosmos account name.
-* Password (required): Cosmos account password.
-* Host (required): FQDN of the Cosmos account.
-* Port (required): 10255.
-* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
-* ssl=true (required)
-
-For example, consider the account shown in the **Connection String** blade. A valid connection string is:
-
-`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true`
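The string above can also be assembled mechanically. The following minimal sketch (not part of the original article) builds such a connection string with Python's standard `urllib.parse`; the account name and key are placeholders, and percent-encoding the key is an assumption based on the MongoDB URI specification (keys contain `/`, `+`, and `=`, which some drivers reject unescaped):

```python
from urllib.parse import quote

def build_cosmos_mongo_uri(username, password, host, port=10255, database=None):
    # Percent-encode the credentials: Cosmos account keys contain '/', '+',
    # and '=' characters, which the MongoDB URI spec says must be escaped.
    creds = f"{quote(username, safe='')}:{quote(password, safe='')}"
    path = f"/{database}" if database else "/"
    return f"mongodb://{creds}@{host}:{port}{path}?ssl=true"

uri = build_cosmos_mongo_uri(
    "contoso123",                      # placeholder account name
    "0Fc3Iol/nL1+2asdf==",             # placeholder key, not a real credential
    "contoso123.documents.azure.com",
    database="mydatabase",
)
print(uri)
```

Omitting `database` yields a path of `/`, so the driver falls back to the default "test" database described above.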
-
-## Next steps
-- Learn how to [use Studio 3T](mongodb-mongochef.md) with Azure Cosmos DB's API for MongoDB.
-- Learn how to [use Robo 3T](mongodb-robomongo.md) with Azure Cosmos DB's API for MongoDB.
-- Explore MongoDB [samples](mongodb-samples.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/consistency-levels.md
The consistency levels are region-agnostic and are guaranteed for all operations
## Consistency levels and Azure Cosmos DB APIs
-Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Gremlin, and Azure Table storage. When using Gremlin API and Table API, the default consistency level configured on the Azure Cosmos account is used. For details on consistency level mapping between Cassandra API or the API for MongoDB and Azure Cosmos DB's consistency levels see, [Cassandra API consistency mapping](cassandr).
+Azure Cosmos DB provides native support for wire protocol-compatible APIs for popular databases. These include MongoDB, Apache Cassandra, Gremlin, and Azure Table storage. When using Gremlin API and Table API, the default consistency level configured on the Azure Cosmos account is used. For details on consistency level mapping between Cassandra API or the API for MongoDB and Azure Cosmos DB's consistency levels see, [Cassandra API consistency mapping](cassandr).
## Scope of the read consistency
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account or change any VNET, firewall configuration.
-* Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created are not supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb-custom-commands.md).
+* Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created are not supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb/custom-commands.md).
* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you are interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative or [UserVoice](https://feedback.azure.com/forums/263030-azure-cosmos-db).
cosmos-db Cosmos Db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-queries.md
> [!div class="op_single_selector"] > * [SQL (Core) API](cosmos-db-advanced-queries.md)
-> * [MongoDB API](queries-mongo.md)
+> * [MongoDB API](mongodb/diagnostic-queries-mongodb.md)
> * [Cassandra API](cassandr)
> * [Gremlin API](queries-gremlin.md)
>
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-gremlin.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](cassandr) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](mongodb/find-request-unit-charge-mongodb.md), [Cassandra API](cassandr) articles to find the RU/s charge.
Headers returned by the Gremlin API are mapped to custom status attributes, which currently are surfaced by the Gremlin .NET and Java SDK. The request charge is available under the `x-ms-request-charge` key. When you use the Gremlin API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
-This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](cassandr) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](mongodb/find-request-unit-charge-mongodb.md), [Cassandra API](cassandr) articles to find the RU/s charge.
Currently, you can measure this consumption only by using the Azure portal or by inspecting the response sent back from Azure Cosmos DB through one of the SDKs. If you're using the SQL API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/free-tier.md
New-AzCosmosDBAccount -ResourceGroupName "MyResourcegroup" `
After you create a free tier account, you can start building apps with Azure Cosmos DB with the following articles:

* [Build a console app using the .NET V4 SDK](create-sql-api-dotnet-v4.md) to manage Azure Cosmos DB resources.
-* [Build a .NET web app using Azure Cosmos DB's API for MongoDB](create-mongodb-dotnet.md)
+* [Build a .NET web app using Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-dotnet.md)
* [Download a notebook from the gallery](publish-notebook-gallery.md#download-a-notebook-from-the-gallery) and analyze your data.
* Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
cosmos-db How To Create Container Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-create-container-gremlin.md
This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. It shows how to create a container using Azure portal, Azure CLI, PowerShell, or supported SDKs. This article demonstrates how to create a container, specify the partition key, and provision throughput.
-This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](how-to-create-container-mongodb.md), [Cassandra API](cassandr) articles to create the container.
+This article explains the different ways to create a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](mongodb/how-to-create-container-mongodb.md), [Cassandra API](cassandr) articles to create the container.
> [!NOTE]
> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-indexing-policy.md
In Azure Cosmos DB, data is indexed following [indexing policies](index-policy.md) that are defined for each container. The default indexing policy for newly created containers enforces range indexes for any string or number. This policy can be overridden with your own custom indexing policy.

> [!NOTE]
-> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](mongodb-indexing.md) and [Secondary indexing in Azure Cosmos DB Cassandra API.](cassandr)
+> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-indexing.md) and [Secondary indexing in Azure Cosmos DB Cassandra API.](cassandr)
## Indexing policy examples
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-autoscale-throughput.md
This article explains how to provision autoscale throughput on a database or container (collection, graph, or table) in Azure Cosmos DB SQL API. You can enable autoscale on a single container, or provision autoscale throughput on a database and share it among all the containers in the database.
-If you are using a different API, see [API for MongoDB](how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
+If you are using a different API, see [API for MongoDB](mongodb/how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
## Azure portal
cosmos-db How To Provision Container Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-container-throughput.md
This article explains how to provision standard (manual) throughput on a container in Azure Cosmos DB SQL API. You can provision throughput on a single container, or [provision throughput on a database](how-to-provision-database-throughput.md) and share it among the containers within the database. You can provision throughput on a container using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-If you are using a different API, see [API for MongoDB](how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
+If you are using a different API, see [API for MongoDB](mongodb/how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
## Azure portal
cosmos-db How To Provision Database Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-database-throughput.md
This article explains how to provision standard (manual) throughput on a database in Azure Cosmos DB SQL API. You can provision throughput for a single [container](how-to-provision-container-throughput.md), or for a database and share the throughput among the containers within it. To learn when to use container level and database level throughput, see the [Use cases for provisioning throughput on containers and databases](set-throughput.md) article. You can provision database level throughput by using the Azure portal or Azure Cosmos DB SDKs.
-If you are using a different API, see [API for MongoDB](how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
+If you are using a different API, see [API for MongoDB](mongodb/how-to-provision-throughput-mongodb.md), [Cassandra API](cassandr) articles to provision the throughput.
## Provision throughput using Azure portal
cosmos-db How To Provision Throughput Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-provision-throughput-gremlin.md
This article explains how to provision throughput in Azure Cosmos DB Gremlin API. You can provision standard(manual) or autoscale throughput on a container, or a database and share it among the containers within the database. You can provision throughput using Azure portal, Azure CLI, or Azure Cosmos DB SDKs.
-If you are using a different API, see [SQL API](how-to-provision-container-throughput.md), [Cassandra API](cassandr) articles to provision the throughput.
+If you are using a different API, see [SQL API](how-to-provision-container-throughput.md), [Cassandra API](cassandr) articles to provision the throughput.
## <a id="portal-gremlin"></a> Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](create-mongodb-dotnet.md#create-a-database-account), or select an existing Azure Cosmos account.
+1. [Create a new Azure Cosmos account](mongodb/create-mongodb-dotnet.md#create-a-database-account), or select an existing Azure Cosmos account.
1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/import-data.md
This tutorial provides instructions on using the Azure Cosmos DB Data Migration
* **[SQL API](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](cosmosdb-migrationchoices.md).
* **[Table API](table/introduction.md)** - You can use the Data Migration tool or [AzCopy](table/table-import.md#migrate-data-by-using-azcopy) to import data. For more information, see [Import data for use with the Azure Cosmos DB Table API](table/table-import.md).
-* **[Azure Cosmos DB's API for MongoDB](mongodb-introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to a Cosmos database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB SQL API collections for use with the SQL API.
+* **[Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to a Cosmos database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB SQL API collections for use with the SQL API.
* **[Cassandra API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Cassandra API accounts. [Learn about migration options for importing data into Cassandra API](cosmosdb-migrationchoices.md#azure-cosmos-db-cassandra-api)
* **[Gremlin API](graph-introduction.md)** - The Data Migration tool isn't a supported import tool for Gremlin API accounts at this time. [Learn about migration options for importing data into Gremlin API](cosmosdb-migrationchoices.md#other-apis)
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
In Azure Cosmos DB, every container has an indexing policy that dictates how the
In some situations, you may want to override this automatic behavior to better suit your requirements. You can customize a container's indexing policy by setting its *indexing mode*, and include or exclude *property paths*. > [!NOTE]
-> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
+> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-indexing.md)
## Indexing mode
The following considerations apply when creating composite indexes to optimize a
| ```(name ASC, age ASC, timestamp ASC)``` | ```SELECT AVG(c.timestamp) FROM c WHERE c.name = "John" AND c.age = 25``` | `Yes` |
| ```(age ASC, timestamp ASC)``` | ```SELECT AVG(c.timestamp) FROM c WHERE c.name = "John" AND c.age > 25``` | `No` |
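As a sketch of how a composite index from the table above would appear in a container's indexing policy, the following illustrative snippet (not part of the original article) builds the policy document in Python and serializes it to JSON; the `includedPaths`/`excludedPaths` values are plausible defaults, not requirements:

```python
import json

# Illustrative indexing policy containing the composite index
# (name ASC, age ASC) discussed in the table above. "ascending" and
# "descending" are the order values used by the indexing-policy schema.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [],
    "compositeIndexes": [
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/age", "order": "ascending"},
        ]
    ],
}

policy_json = json.dumps(indexing_policy, indent=2)
print(policy_json)
```

Each inner list under `compositeIndexes` is one composite index; the order of its entries is the order the paths are indexed in.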
-## <index-transformation>Modifying the indexing policy
+## <a id="index-transformation"></a>Modifying the indexing policy
A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no additional storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Get started with Azure Cosmos DB with one of our quickstarts:
- Learn [how to choose an API](choose-api.md) in Azure Cosmos DB
- [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-- [Get started with Azure Cosmos DB's API for MongoDB](create-mongodb-nodejs.md)
+- [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
- [Get started with Azure Cosmos DB Cassandra API](cassandr)
- [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
- [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
cosmos-db Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator.md
CosmosClient client = new CosmosClient(
### Azure Cosmos DB's API for MongoDB
-Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md) to interact with the emulator. Start the emulator from [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableMongoDbEndpoint". Then use the following connection string to connect to the MongoDB API account:
+Once you have the Azure Cosmos DB Emulator running on your desktop, you can use the [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md) to interact with the emulator. Start the emulator from [command prompt](emulator-command-line-parameters.md) as an administrator with "/EnableMongoDbEndpoint". Then use the following connection string to connect to the MongoDB API account:
```bash
mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true
```
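Note that the emulator's well-known key itself contains `/`, `+`, and `=` characters, so a naive split on the first `:` or `@` mangles it. As an illustrative sketch (not an official parser), the pieces of the connection string above can be pulled apart by splitting on the *last* `@`:

```python
# Illustrative parsing of the emulator connection string shown above.
# We split on the last '@' because the account key contains '/', '+', '='.
conn = (
    "mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
    "@localhost:10255/admin?ssl=true"
)

body = conn[len("mongodb://"):]
creds, _, hostpart = body.rpartition("@")   # last '@' separates creds from host
user, _, key = creds.partition(":")         # first ':' separates user from key
host, _, rest = hostpart.partition(":")
port, _, tail = rest.partition("/")
database, _, query = tail.partition("?")

print(user, host, port, database, query)
```

The same last-`@` rule applies to any Cosmos DB Mongo connection string whose key is embedded unescaped.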
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-cli.md
The following guide describes common commands to automate management of your Azu
- This article requires version 2.22.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cassandr)
+For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cassandr)
> [!IMPORTANT] > Azure Cosmos DB resources cannot be renamed as this violates how Azure Resource Manager works with resource URIs.
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-powershell.md
# Manage Azure Cosmos DB Core (SQL) API resources using PowerShell

[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-The following guide describes how to use PowerShell to script and automate management of Azure Cosmos DB Core (SQL) API resources, including the Cosmos account, database, container, and throughput. For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](cassandr)
+The following guide describes how to use PowerShell to script and automate management of Azure Cosmos DB Core (SQL) API resources, including the Cosmos account, database, container, and throughput. For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](cassandr)
> [!NOTE] > Samples in this article use [Az.CosmosDB](/powershell/module/az.cosmosdb) management cmdlets. See the [Az.CosmosDB](/powershell/module/az.cosmosdb) API reference page for the latest changes.
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
In this article, you learn how to use Azure Resource Manager templates to help deploy and manage your Azure Cosmos DB accounts, databases, and containers.
-This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](cassandr) APIs.
+This article only shows Azure Resource Manager template examples for Core (SQL) API accounts. You can also find template examples for [Cassandra](cassandr) APIs.
> [!IMPORTANT] >
cosmos-db Change Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/change-streams.md
+
+ Title: Change streams in Azure Cosmos DB's API for MongoDB
+description: Learn how to use change streams in Azure Cosmos DB's API for MongoDB to get the changes made to your data.
+++ Last updated : 03/02/2021+++++
+# Change streams in Azure Cosmos DB's API for MongoDB
+
+[Change feed](../change-feed.md) support in Azure Cosmos DB's API for MongoDB is available by using the change streams API. By using the change streams API, your applications can get the changes made to the collection or to the items in a single shard. Later you can take further actions based on the results. Changes to the items in the collection are captured in the order of their modification time and the sort order is guaranteed per shard key.
+
+> [!NOTE]
+> To use change streams, create the Azure Cosmos DB's API for MongoDB account with server version 3.6 or higher. If you run the change stream examples against an earlier version, you might see the *Unrecognized pipeline stage name: $changeStream* error.
+
+## Examples
+
+The following example shows how to get change streams on all the items in the collection. This example creates a cursor to watch items when they are inserted, updated, or replaced. The `$match` stage, `$project` stage, and `fullDocument` option are required to get the change streams. Watching for delete operations using change streams is currently not supported. As a workaround, you can add a soft marker on the items that are being deleted. For example, you can add an attribute in the item called "deleted." When you'd like to delete the item, you can set "deleted" to `true` and set a TTL on the item. Since updating "deleted" to `true` is an update, this change will be visible in the change stream.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+var cursor = db.coll.watch(
+ [
+ { $match: { "operationType": { $in: ["insert", "update", "replace"] } } },
+ { $project: { "_id": 1, "fullDocument": 1, "ns": 1, "documentKey": 1 } }
+ ],
+ { fullDocument: "updateLookup" });
+
+while (!cursor.isExhausted()) {
+ if (cursor.hasNext()) {
+ printjson(cursor.next());
+ }
+}
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>()
+ .Match(change => change.OperationType == ChangeStreamOperationType.Insert || change.OperationType == ChangeStreamOperationType.Update || change.OperationType == ChangeStreamOperationType.Replace)
+ .AppendStage<ChangeStreamDocument<BsonDocument>, ChangeStreamDocument<BsonDocument>, BsonDocument>(
+ "{ $project: { '_id': 1, 'fullDocument': 1, 'ns': 1, 'documentKey': 1 }}");
+
+var options = new ChangeStreamOptions{
+ FullDocument = ChangeStreamFullDocumentOption.UpdateLookup
+ };
+
+var enumerator = coll.Watch(pipeline, options).ToEnumerable().GetEnumerator();
+
+while (enumerator.MoveNext()){
+ Console.WriteLine(enumerator.Current);
+ }
+
+enumerator.Dispose();
+```
+
+# [Java](#tab/java)
+
+The following example shows how to use change stream functionality in Java. For the complete example, see this [GitHub repo](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-java-changestream/blob/main/mongostream/src/main/java/com/azure/cosmos/mongostream/App.java). This example also shows how to use the `resumeAfter` method to seek all the changes since the last read.
+
+```java
+Bson match = Aggregates.match(Filters.in("operationType", asList("update", "replace", "insert")));
+
+// Pick the field you are most interested in
+Bson project = Aggregates.project(fields(include("_id", "ns", "documentKey", "fullDocument")));
+
+// This variable is for second example
+BsonDocument resumeToken = null;
+
+// Now time to build the pipeline
+List<Bson> pipeline = Arrays.asList(match, project);
+
+//#1 Simple example to seek changes
+
+// Create cursor with update_lookup
+MongoChangeStreamCursor<ChangeStreamDocument<org.bson.Document>> cursor = collection.watch(pipeline)
+ .fullDocument(FullDocument.UPDATE_LOOKUP).cursor();
+
+Document document = new Document("name", "doc-in-step-1-" + Math.random());
+collection.insertOne(document);
+
+while (cursor.hasNext()) {
+ // There you go, we got the change document.
+ ChangeStreamDocument<Document> csDoc = cursor.next();
+
+ // Let's pick the resume token, which will help us resume the stream later
+ // You can save this token in any persistent storage and retrieve it later
+ resumeToken = csDoc.getResumeToken();
+ //Printing the token
+ System.out.println(resumeToken);
+
+ //Printing the document.
+ System.out.println(csDoc.getFullDocument());
+ // This break is intentional, but in a real project feel free to remove it.
+ break;
+}
+
+cursor.close();
+
+```
++
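The soft-delete workaround described in the examples above can be sketched with plain objects (a minimal sketch; the `deleted` attribute name is just an illustration, and in the shell the flagging step would be an `updateOne` with `$set`):

```javascript
// Sketch of the soft-delete workaround: flag the item instead of removing it,
// so the change stream (which only surfaces insert/update/replace) still
// reports the change as an update. A TTL on the item can then expire it later.
function softDelete(doc) {
  return { ...doc, deleted: true }; // returns a flagged copy
}

const item = { _id: "item-1", name: "example" };
const updated = softDelete(item);
console.log(updated.deleted); // true
```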
+## Changes within a single shard
+
+The following example shows how to get changes to the items within a single shard. This example gets the changes for items that have a shard key named "a" with a shard key value of 1. It is possible to have different clients reading changes from different shards in parallel.
+
+```javascript
+var cursor = db.coll.watch(
+ [
+ {
+ $match: {
+ $and: [
+ { "fullDocument.a": 1 },
+ { "operationType": { $in: ["insert", "update", "replace"] } }
+ ]
+ }
+ },
+ { $project: { "_id": 1, "fullDocument": 1, "ns": 1, "documentKey": 1} }
+ ],
+ { fullDocument: "updateLookup" });
+
+```
+
+## Scaling change streams
+When using change streams at scale, it is best to spread the load evenly. Use the [GetChangeStreamTokens custom command](../mongodb/custom-commands.md) to spread the load across physical shards/partitions.
+
+## Current limitations
+
+The following limitations are applicable when using change streams:
+
+* The `operationType` and `updateDescription` properties are not yet supported in the output document.
+* The `insert`, `update`, and `replace` operation types are currently supported. The delete operation and other events are not yet supported.
+
+Due to these limitations, the `$match` stage, `$project` stage, and `fullDocument` option are required, as shown in the previous examples.
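Because these stages are always required, they can be kept as a reusable constant (a sketch; the property names mirror the shell example earlier in this article):

```javascript
// The minimal pipeline and options that Azure Cosmos DB's API for MongoDB
// requires: match only the supported operation types, project the supported
// fields, and request the full document on updates.
const requiredPipeline = [
  { $match: { operationType: { $in: ["insert", "update", "replace"] } } },
  { $project: { _id: 1, fullDocument: 1, ns: 1, documentKey: 1 } }
];
const requiredOptions = { fullDocument: "updateLookup" };

// Pass these to watch(), for example: db.coll.watch(requiredPipeline, requiredOptions)
console.log(requiredPipeline.length); // 2
```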
+
+Unlike the change feed in Azure Cosmos DB's SQL API, there is no separate [Change Feed Processor Library](../change-feed-processor.md) to consume change streams, and no need for a leases container. [Azure Functions triggers](../change-feed-functions.md) to process change streams are not currently supported.
+
+## Error handling
+
+The following error codes and messages are supported when using change streams:
+
+* **HTTP error code 16500** - When the change stream is throttled, it returns an empty page.
+
+* **NamespaceNotFound (OperationType Invalidate)** - If you run a change stream on a collection that does not exist, or if the collection is dropped, a `NamespaceNotFound` error is returned. Because the `operationType` property can't be returned in the output document, the `NamespaceNotFound` error is returned instead of the `operationType Invalidate` error.
+
+## Next steps
+
+* [Use time to live to expire data automatically in Azure Cosmos DB's API for MongoDB](mongodb-time-to-live.md)
+* [Indexing in Azure Cosmos DB's API for MongoDB](mongodb-indexing.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB API for MongoDB
+description: Azure CLI Samples for Azure Cosmos DB API for MongoDB
++++ Last updated : 10/13/2020++++
+# Azure CLI samples for Azure Cosmos DB API for MongoDB
+
+The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Common Samples
+
+These samples apply to all Azure Cosmos DB APIs.
+
+|Task | Description |
+|||
+| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
+| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+|||
+
+## MongoDB API Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos account, database and collection](../scripts/cli/mongodb/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and collection for MongoDB API. |
+| [Create an Azure Cosmos account, database with autoscale and two collections with shared throughput](../scripts/cli/mongodb/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database with autoscale and two collections with shared throughput for MongoDB API. |
+| [Throughput operations](../scripts/cli/mongodb/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and collection.|
+| [Lock resources from deletion](../scripts/cli/mongodb/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+|||
cosmos-db Connect Mongodb Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/connect-mongodb-account.md
+
+ Title: Connect a MongoDB application to Azure Cosmos DB
+description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
+++++ Last updated : 03/02/2021+
+adobe-target: true
+adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021
+adobe-target-experience: Experience B
+adobe-target-content: ./connect-mongodb-account-experimental
++
+# Connect a MongoDB application to Azure Cosmos DB
+
+Learn how to connect your MongoDB app to Azure Cosmos DB by using a MongoDB connection string. You can then use an Azure Cosmos database as the data store for your MongoDB app.
+
+This tutorial provides two ways to retrieve connection string information:
+
+- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers
+- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers
+
+## Prerequisites
+
+- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.
+- A Cosmos account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).
+
+## Get the MongoDB connection string by using the quick start
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Quick start**.
+4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry; we continuously document more connection code snippets, so let us know what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
+5. Copy and paste the code snippet into your MongoDB app.
+
+ :::image type="content" source="./media/connect-mongodb-account/quickstart-blade.png" alt-text="Quick start blade":::
+
+## Get the MongoDB connection string to customize
+
+1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the **Azure Cosmos DB** blade, select the API.
+3. In the left pane of the account blade, click **Connection String**.
+4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
+
+ :::image type="content" source="./media/connect-mongodb-account/connection-string-blade.png" alt-text="Connection String blade" lightbox= "./media/connect-mongodb-account/connection-string-blade.png" :::
+
+## Connection string requirements
+
+> [!Important]
+> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
+
+Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: accounts require authentication and secure communication via TLS. The connection string format is:
+
+`mongodb://username:password@host:port/[database]?ssl=true`
+
+The values of this string are available in the **Connection String** blade shown earlier:
+
+* Username (required): Cosmos account name.
+* Password (required): Cosmos account password.
+* Host (required): FQDN of the Cosmos account.
+* Port (required): 10255.
+* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
+* ssl=true (required)
+
+For example, consider the account shown in the **Connection String** blade. A valid connection string is:
+
+`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true`
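As a sketch, the pieces above can be assembled programmatically. The `buildCosmosMongoUri` helper below is hypothetical, not part of any SDK; it percent-encodes the password because account keys can contain reserved URI characters such as `=` and `+` (the portal-provided string is already correctly formed, so prefer copying it when possible):

```javascript
// Hypothetical helper that assembles an Azure Cosmos DB for MongoDB
// connection string from its required parts.
function buildCosmosMongoUri(accountName, accountKey, database = "") {
  const host = `${accountName}.documents.azure.com`; // FQDN of the account
  return `mongodb://${accountName}:${encodeURIComponent(accountKey)}` +
         `@${host}:10255/${database}?ssl=true`;
}

const uri = buildCosmosMongoUri("contoso123", "0Fc3Iol==", "mydatabase");
console.log(uri);
```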
+
+## Driver Requirements
+
+All drivers that support wire protocol version 3.4 or greater support the Azure Cosmos DB API for MongoDB.
+
+Specifically, client drivers must support the Server Name Indication (SNI) TLS extension and/or the `appName` connection string option. If the `appName` parameter is provided, it must be included as found in the connection string value in the Azure portal.
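A sketch of carrying the parameter over when customizing a connection string. The `@contoso123@` value below is a made-up placeholder; always copy the actual `appName` value verbatim from the portal:

```javascript
// Illustrative connection string as it might appear in the portal
// (placeholder credentials and appName value).
const portalUri = "mongodb://contoso123:key@contoso123.documents.azure.com:10255/?ssl=true&appName=@contoso123@";

// Extract the appName parameter and carry it over unchanged when
// building a customized connection string.
const appNameMatch = portalUri.match(/[?&]appName=([^&]+)/);
const custom = "mongodb://contoso123:key@contoso123.documents.azure.com:10255/mydb?ssl=true" +
  (appNameMatch ? "&appName=" + appNameMatch[1] : "");

console.log(custom.endsWith("appName=@contoso123@")); // true
```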
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db Connect Using Compass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/connect-using-compass.md
+
+ Title: Connect to Azure Cosmos DB using Compass
+description: Learn how to use MongoDB Compass to store and manage data in Azure Cosmos DB.
+++ Last updated : 06/05/2020++++
+# Use MongoDB Compass to connect to Azure Cosmos DB's API for MongoDB
+
+This tutorial demonstrates how to use [MongoDB Compass](https://www.mongodb.com/products/compass) to store and manage data in Cosmos DB. This walk-through uses Azure Cosmos DB's API for MongoDB. If you're unfamiliar with it, Compass is a GUI for MongoDB that's commonly used to visualize data, run ad hoc queries, and manage data.
+
+Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
+
+## Prerequisites
+
+To connect to your Cosmos DB account using MongoDB Compass, you must:
+
+* Download and install [Compass](https://www.mongodb.com/download-center/compass?jmp=hero)
+* Have your Cosmos DB [connection string](connect-mongodb-account.md) information
+
+## Connect to Cosmos DB's API for MongoDB
+
+To connect your Cosmos DB account to Compass, follow these steps:
+
+1. Retrieve the connection information for your Cosmos account configured with Azure Cosmos DB's API for MongoDB using the instructions [here](connect-mongodb-account.md).
+
+ :::image type="content" source="./media/connect-using-compass/mongodb-compass-connection.png" alt-text="Screenshot of the connection string blade":::
+
+2. Click the **Copy to clipboard** button next to your **Primary/Secondary connection string** in Cosmos DB to copy the entire connection string to your clipboard.
+
+ :::image type="content" source="./media/connect-using-compass/mongodb-connection-copy.png" alt-text="Screenshot of the copy to clipboard button":::
+
+3. Open Compass on your desktop/machine and click on **Connect** and then **Connect to...**.
+
+4. Compass automatically detects the connection string in your clipboard and prompts you to use it to connect. Click **Yes**, as shown in the following screenshot.
+
+    :::image type="content" source="./media/connect-using-compass/mongodb-compass-detect.png" alt-text="Screenshot shows a dialog box explaining that you have a connection string on your clipboard.":::
+
+5. After you click **Yes** in the preceding step, the details from the connection string are automatically populated. Remove the value automatically populated in the **Replica Set Name** field to ensure that it is left blank.
+
+ :::image type="content" source="./media/connect-using-compass/mongodb-compass-replica.png" alt-text="Screenshot shows the Replica Set Name text box.":::
+
+6. Click on **Connect** at the bottom of the page. Your Cosmos DB account and databases should now be visible within MongoDB Compass.
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db Connect Using Mongochef https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/connect-using-mongochef.md
+
+ Title: Use Studio 3T to connect to Azure Cosmos DB's API for MongoDB
+description: Learn how to connect to an Azure Cosmos DB's API for MongoDB using Studio 3T.
+++ Last updated : 03/20/2020++++
+# Connect to an Azure Cosmos account using Studio 3T
+
+To connect to an Azure Cosmos DB's API for MongoDB account using Studio 3T, you must:
+
+* Download and install [Studio 3T](https://studio3t.com/).
+* Have your Azure Cosmos account's [connection string](connect-mongodb-account.md) information.
+
+## Create the connection in Studio 3T
+
+To add your Azure Cosmos account to the Studio 3T connection manager, use the following steps:
+
+1. Retrieve the connection information for your Azure Cosmos DB's API for MongoDB account using the instructions in the [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md) article.
+
+ :::image type="content" source="./media/connect-using-mongochef/connection-string-blade.png" alt-text="Screenshot of the connection string page":::
+
+2. Click **Connect** to open the Connection Manager, then click **New Connection**
+
+ :::image type="content" source="./media/connect-using-mongochef/connection-manager.png" alt-text="Screenshot of the Studio 3T connection manager that highlights the New Connection button.":::
+3. In the **New Connection** window, on the **Server** tab, enter the HOST (FQDN) of the Azure Cosmos account and the PORT.
+
+ :::image type="content" source="./media/connect-using-mongochef/connection-manager-server-tab.png" alt-text="Screenshot of the Studio 3T connection manager server tab":::
+4. In the **New Connection** window, on the **Authentication** tab, choose Authentication Mode **Basic (MONGODB-CR or SCRAM-SHA-1)** and enter the USERNAME and PASSWORD. Accept the default authentication db (admin) or provide your own value.
+
+ :::image type="content" source="./media/connect-using-mongochef/connection-manager-authentication-tab.png" alt-text="Screenshot of the Studio 3T connection manager authentication tab":::
+5. In the **New Connection** window, on the **SSL** tab, check the **Use SSL protocol to connect** check box and the **Accept server self-signed SSL certificates** radio button.
+
+ :::image type="content" source="./media/connect-using-mongochef/connection-manager-ssl-tab.png" alt-text="Screenshot of the Studio 3T connection manager SSL tab":::
+6. Click the **Test Connection** button to validate the connection information, click **OK** to return to the New Connection window, and then click **Save**.
+
+ :::image type="content" source="./media/connect-using-mongochef/test-connection-results.png" alt-text="Screenshot of the Studio 3T test connection window":::
+
+## Use Studio 3T to create a database, collection, and documents
+To create a database, collection, and documents using Studio 3T, perform the following steps:
+
+1. In **Connection Manager**, highlight the connection and click **Connect**.
+
+ :::image type="content" source="./media/connect-using-mongochef/connect-account.png" alt-text="Screenshot of the Studio 3T connection manager":::
+2. Right-click the host and choose **Add Database**. Provide a database name and click **OK**.
+
+ :::image type="content" source="./media/connect-using-mongochef/add-database.png" alt-text="Screenshot of the Studio 3T Add Database option":::
+3. Right-click the database and choose **Add Collection**. Provide a collection name and click **Create**.
+
+ :::image type="content" source="./media/connect-using-mongochef/add-collection.png" alt-text="Screenshot of the Studio 3T Add Collection option":::
+4. Click the **Collection** menu item, then click **Add Document**.
+
+ :::image type="content" source="./media/connect-using-mongochef/add-document.png" alt-text="Screenshot of the Studio 3T Add Document menu item":::
+5. In the Add Document dialog, paste the following and then click **Add Document**.
+
+ ```json
+ {
+ "_id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow", "gender": "female", "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "seattle" },
+ "isRegistered": true
+ }
+ ```
+
+6. Add another document, this time with the following content:
+
+ ```json
+ {
+ "_id": "WakefieldFamily",
+ "parents": [
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
+ ],
+ "children": [
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8 }
+ ],
+ "address": { "state": "NY", "county": "Manhattan", "city": "NY" },
+ "isRegistered": false
+ }
+ ```
+
+7. Execute a sample query. For example, search for families with the last name 'Andersen' and return the parents and state fields.
+
+ :::image type="content" source="./media/connect-using-mongochef/query-document.png" alt-text="Screenshot of Mongo Chef query results":::
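The sample query in step 7 can be sketched as a filter plus a projection. The snippet below runs it against plain-object copies of the two documents added in steps 5 and 6 (reduced to the fields the query touches); in Studio 3T you would express the same filter and projection in the query bar:

```javascript
// The two documents added above, reduced to the fields the query touches.
const docs = [
  { _id: "AndersenFamily", lastName: "Andersen",
    parents: [{ firstName: "Thomas" }, { firstName: "Mary Kay" }],
    address: { state: "WA", county: "King", city: "seattle" } },
  { _id: "WakefieldFamily",
    parents: [{ familyName: "Wakefield", givenName: "Robin" },
              { familyName: "Miller", givenName: "Ben" }],
    address: { state: "NY", county: "Manhattan", city: "NY" } }
];

// Filter: last name 'Andersen'; projection: parents and state.
const results = docs
  .filter(d => d.lastName === "Andersen")
  .map(d => ({ parents: d.parents, state: d.address.state }));

console.log(results.length);   // 1
console.log(results[0].state); // "WA"
```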
+
+## Next steps
+
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/connect-using-mongoose.md
+
+ Title: Connect a Node.js Mongoose application to Azure Cosmos DB
+description: Learn how to use the Mongoose Framework to store and manage data in Azure Cosmos DB.
++
+ms.devlang: nodejs
+ Last updated : 03/20/2020++++
+# Connect a Node.js Mongoose application to Azure Cosmos DB
+
+This tutorial demonstrates how to use the [Mongoose Framework](https://mongoosejs.com/) to store data in Cosmos DB. This walkthrough uses Azure Cosmos DB's API for MongoDB. If you're unfamiliar with it, Mongoose is an object modeling framework for MongoDB in Node.js that provides a straightforward, schema-based solution to model your application data.
+
+Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
+
+## Prerequisites
+++
+[Node.js](https://nodejs.org/) version v0.10.29 or higher.
+
+## Create a Cosmos account
+
+Let's create a Cosmos account. If you already have an account you want to use, you can skip ahead to Set up your Node.js application. If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to Set up your Node.js application.
++
+### Create a database
+In this application we will cover two ways of creating collections in Azure Cosmos DB:
+- **Storing each object model in a separate collection**: We recommend [creating a database with dedicated throughput](../set-throughput.md#set-throughput-on-a-database). Using this capacity model will give you better cost efficiency.
+
+ :::image type="content" source="./media/connect-using-mongoose/db-level-throughput.png" alt-text="Node.js tutorial - Screenshot of the Azure portal, showing how to create a database in the Data Explorer for an Azure Cosmos DB account, for use with the Mongoose Node module":::
+
+- **Storing all object models in a single Cosmos DB collection**: If you'd prefer to store all models in a single collection, you can just create a new database without selecting the Provision Throughput option. With this capacity model, each collection is created with its own throughput capacity.
+
+After you create the database, you'll use the name in the `COSMOSDB_DBNAME` environment variable below.
+
+## Set up your Node.js application
+
+>[!Note]
+> If you'd like to just walk through the sample code instead of setting up the application itself, clone the [sample](https://github.com/Azure-Samples/Mongoose_CosmosDB) used for this tutorial and build your Node.js Mongoose application on Azure Cosmos DB.
+
+1. To create a Node.js application in the folder of your choice, run the following command in a node command prompt.
+
+ ```npm init```
+
+ Answer the questions and your project will be ready to go.
+
+2. Add a new file to the folder and name it ```index.js```.
+3. Install the necessary packages using one of the ```npm install``` options:
+ * Mongoose: ```npm install mongoose@5 --save```
+
+ > [!Note]
+ > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions.
+
+ * Dotenv (if you'd like to load your secrets from an .env file): ```npm install dotenv --save```
+
+ >[!Note]
+ > The ```--save``` flag adds the dependency to the package.json file.
+
+4. Import the dependencies in your index.js file.
+
+ ```JavaScript
+ var mongoose = require('mongoose');
+ var env = require('dotenv').config(); //Use the .env file to load the variables
+ ```
+
+5. Add your Cosmos DB connection string and Cosmos DB Name to the ```.env``` file. Replace the placeholders {cosmos-account-name} and {dbname} with your own Cosmos account name and database name, without the brace symbols.
+
+    ```text
+ # You can get the following connection details from the Azure portal. You can find the details on the Connection string pane of your Azure Cosmos account.
+
+ COSMOSDB_USER = "<Azure Cosmos account's user name, usually the database account name>"
+ COSMOSDB_PASSWORD = "<Azure Cosmos account password, this is one of the keys specified in your account>"
+ COSMOSDB_DBNAME = "<Azure Cosmos database name>"
+ COSMOSDB_HOST= "<Azure Cosmos Host name>"
+ COSMOSDB_PORT=10255
+ ```
+
+6. Connect to Cosmos DB using the Mongoose framework by adding the following code to the end of index.js.
+ ```JavaScript
+ mongoose.connect("mongodb://"+process.env.COSMOSDB_HOST+":"+process.env.COSMOSDB_PORT+"/"+process.env.COSMOSDB_DBNAME+"?ssl=true&replicaSet=globaldb", {
+ auth: {
+ user: process.env.COSMOSDB_USER,
+ password: process.env.COSMOSDB_PASSWORD
+ },
+ useNewUrlParser: true,
+ useUnifiedTopology: true,
+ retryWrites: false
+ })
+ .then(() => console.log('Connection to CosmosDB successful'))
+ .catch((err) => console.error(err));
+ ```
+ >[!Note]
+    > Here, the environment variables are loaded into `process.env.{variableName}` by the 'dotenv' npm package.
+
+ Once you are connected to Azure Cosmos DB, you can now start setting up object models in Mongoose.
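Before connecting, a small guard can catch missing configuration early. This is a sketch under the variable names defined in the `.env` file above; the `missingVars` helper is hypothetical:

```javascript
// Fail fast if any required Cosmos DB variable is missing from the
// environment before attempting to connect with Mongoose.
const required = [
  "COSMOSDB_USER", "COSMOSDB_PASSWORD",
  "COSMOSDB_DBNAME", "COSMOSDB_HOST", "COSMOSDB_PORT"
];

function missingVars(env) {
  return required.filter(name => !env[name]);
}

// Example against a partially filled environment (in the app you would
// pass process.env after dotenv has loaded the .env file):
const report = missingVars({ COSMOSDB_USER: "u", COSMOSDB_HOST: "h" });
console.log(report); // [ 'COSMOSDB_PASSWORD', 'COSMOSDB_DBNAME', 'COSMOSDB_PORT' ]
```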
+
+## Best practices for using Mongoose with Cosmos DB
+
+For every model you create, Mongoose creates a new collection. This is best addressed using the [Database Level Throughput option](../set-throughput.md#set-throughput-on-a-database), which was previously discussed. To use a single collection, you need to use Mongoose [Discriminators](https://mongoosejs.com/docs/discriminators.html). Discriminators are a schema inheritance mechanism. They enable you to have multiple models with overlapping schemas on top of the same underlying MongoDB collection.
+
+You can store the various data models in the same collection and then use a filter clause at query time to pull down only the data needed. Let's walk through each of the models.
+
+### One collection per object model
+
+This section explores how to achieve this with Azure Cosmos DB's API for MongoDB. This method is our recommended approach because it allows you to control cost and capacity. As a result, the amount of Request Units on the database does not depend on the number of object models. This is the default operating model for Mongoose, so you might be familiar with it.
+
+1. Open your ```index.js``` again.
+
+1. Create the schema definition for 'Family'.
+
+ ```JavaScript
+ const Family = mongoose.model('Family', new mongoose.Schema({
+ lastName: String,
+ parents: [{
+ familyName: String,
+ firstName: String,
+ gender: String
+ }],
+ children: [{
+ familyName: String,
+ firstName: String,
+ gender: String,
+ grade: Number
+ }],
+ pets:[{
+ givenName: String
+ }],
+ address: {
+ country: String,
+ state: String,
+ city: String
+ }
+ }));
+ ```
+
+1. Create an object for 'Family'.
+
+ ```JavaScript
+ const family = new Family({
+ lastName: "Volum",
+ parents: [
+ { firstName: "Thomas" },
+ { firstName: "Mary Kay" }
+ ],
+ children: [
+ { firstName: "Ryan", gender: "male", grade: 8 },
+ { firstName: "Patrick", gender: "male", grade: 7 }
+ ],
+ pets: [
+ { givenName: "Buddy" }
+ ],
+ address: { country: "USA", state: "WA", city: "Seattle" }
+ });
+ ```
+
+1. Finally, let's save the object to Cosmos DB. This creates a collection underneath the covers.
+
+ ```JavaScript
+ family.save((err, saveFamily) => {
+ console.log(JSON.stringify(saveFamily));
+ });
+ ```
+
+1. Now, let's create another schema and object. This time, let's create one for 'Vacation Destinations' that the families might be interested in.
+    1. Just like last time, let's create the schema
+ ```JavaScript
+ const VacationDestinations = mongoose.model('VacationDestinations', new mongoose.Schema({
+ name: String,
+ country: String
+ }));
+ ```
+
+ 1. Create a sample object (You can add multiple objects to this schema) and save it.
+ ```JavaScript
+ const vacaySpot = new VacationDestinations({
+ name: "Honolulu",
+ country: "USA"
+ });
+
+ vacaySpot.save((err, saveVacay) => {
+ console.log(JSON.stringify(saveVacay));
+ });
+ ```
+
+1. Now, if you go to the Azure portal, you'll notice two collections created in Cosmos DB.
+
+ :::image type="content" source="./media/connect-using-mongoose/mongo-mutliple-collections.png" alt-text="Node.js tutorial - Screenshot of the Azure portal, showing an Azure Cosmos DB account, with multiple collection names highlighted - Node database":::
+
+1. Finally, let's read the data from Cosmos DB. Since we're using the default Mongoose operating model, the reads are the same as any other reads with Mongoose.
+
+ ```JavaScript
+ Family.find({ 'children.gender' : "male"}, function(err, foundFamily){
+ foundFamily.forEach(fam => console.log("Found Family: " + JSON.stringify(fam)));
+ });
+ ```
+
+### Using Mongoose discriminators to store data in a single collection
+
+In this method, we use [Mongoose Discriminators](https://mongoosejs.com/docs/discriminators.html) to help optimize for the costs of each collection. Discriminators allow you to define a differentiating 'Key', which allows you to store, differentiate and filter on different object models.
+
+Here, we create a base object model, define a differentiating key and add 'Family' and 'VacationDestinations' as an extension to the base model.
+
+1. Let's set up the base config and define the discriminator key.
+
+ ```JavaScript
+ const baseConfig = {
+ discriminatorKey: "_type", //If you've got a lot of different data types, you could also consider setting up a secondary index here.
+ collection: "alldata" //Name of the Common Collection
+ };
+ ```
+
+1. Next, let's define the common object model
+
+ ```JavaScript
+ const commonModel = mongoose.model('Common', new mongoose.Schema({}, baseConfig));
+ ```
+
+1. We now define the 'Family' model. Notice here that we're using ```commonModel.discriminator``` instead of ```mongoose.model```. Additionally, we're also adding the base config to the mongoose schema. So, here, the discriminator value stored in the ```_type``` key is ```FamilyType```.
+
+ ```JavaScript
+ const Family_common = commonModel.discriminator('FamilyType', new mongoose.Schema({
+ lastName: String,
+ parents: [{
+ familyName: String,
+ firstName: String,
+ gender: String
+ }],
+ children: [{
+ familyName: String,
+ firstName: String,
+ gender: String,
+ grade: Number
+ }],
+ pets:[{
+ givenName: String
+ }],
+ address: {
+ country: String,
+ state: String,
+ city: String
+ }
+ }, baseConfig));
+ ```
+
+1. Similarly, let's add another schema, this time for the 'VacationDestinations'. Here, the discriminator value is ```VacationDestinationsType```.
+
+ ```JavaScript
+ const Vacation_common = commonModel.discriminator('VacationDestinationsType', new mongoose.Schema({
+ name: String,
+ country: String
+ }, baseConfig));
+ ```
+
+1. Finally, let's create objects for the model and save it.
+ 1. Let's add object(s) to the 'Family' model.
+ ```JavaScript
+ const family_common = new Family_common({
+ lastName: "Volum",
+ parents: [
+ { firstName: "Thomas" },
+ { firstName: "Mary Kay" }
+ ],
+ children: [
+ { firstName: "Ryan", gender: "male", grade: 8 },
+ { firstName: "Patrick", gender: "male", grade: 7 }
+ ],
+ pets: [
+ { givenName: "Buddy" }
+ ],
+ address: { country: "USA", state: "WA", city: "Seattle" }
+ });
+
+ family_common.save((err, saveFamily) => {
+ console.log("Saved: " + JSON.stringify(saveFamily));
+ });
+ ```
+
+ 1. Next, let's add object(s) to the 'VacationDestinations' model and save it.
+ ```JavaScript
+ const vacay_common = new Vacation_common({
+ name: "Honolulu",
+ country: "USA"
+ });
+
+ vacay_common.save((err, saveVacay) => {
+ console.log("Saved: " + JSON.stringify(saveVacay));
+ });
+ ```
+
+1. Now, if you go back to the Azure portal, you'll notice that you have only one collection called ```alldata``` with both 'Family' and 'VacationDestinations' data.
+
+ :::image type="content" source="./media/connect-using-mongoose/mongo-collections-alldata.png" alt-text="Node.js tutorial - Screenshot of the Azure portal, showing an Azure Cosmos DB account, with the collection name highlighted - Node database":::
+
+1. Also, notice that each object has another attribute called ```_type```, which helps you differentiate between the two different object models.
+
+1. Finally, let's read the data that is stored in Azure Cosmos DB. Mongoose takes care of filtering data based on the model, so you don't have to do anything different when reading data. Just specify your model (in this case, ```Family_common```) and Mongoose handles filtering on the discriminator key.
+
+ ```JavaScript
+ Family_common.find({ 'children.gender' : "male"}, function(err, foundFamily){
+ foundFamily.forEach(fam => console.log("Found Family (using discriminator): " + JSON.stringify(fam)));
+ });
+ ```
+
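Under the hood, querying through a discriminator model is equivalent to merging the discriminator condition into your filter. The following plain-JavaScript sketch illustrates that behavior (illustrative only — Mongoose adds the `_type` condition for you, and the documents and helper function here are hypothetical):

```javascript
// Hypothetical documents as they might sit together in the 'alldata' collection.
const allData = [
  { _type: "FamilyType", lastName: "Volum" },
  { _type: "VacationDestinationsType", name: "Honolulu", country: "USA" },
  { _type: "FamilyType", lastName: "Rivera" }
];

// Querying via a discriminator model effectively runs the filter as
// { _type: '<discriminator value>', ...yourFilter } against the shared collection.
function findWithDiscriminator(docs, type, filter) {
  return docs.filter(d =>
    d._type === type &&
    Object.entries(filter).every(([k, v]) => d[k] === v)
  );
}

const families = findWithDiscriminator(allData, "FamilyType", {});
console.log(families.length); // 2
```

The vacation-destination document never appears in the result, even though all three documents live in the same collection.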
+As you can see, it's easy to work with Mongoose discriminators. So, if you have an app that uses the Mongoose framework, this tutorial is a way to get your application up and running with Azure Cosmos DB's API for MongoDB without requiring too many changes.
+
+## Clean up resources
++
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
+
+[dbleveltp]: ./media/connect-using-mongoose/db-level-throughput.png
cosmos-db Connect Using Robomongo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/connect-using-robomongo.md
+
+ Title: Use Robo 3T to connect to Azure Cosmos DB
+description: Learn how to connect to Azure Cosmos DB using Robo 3T and Azure Cosmos DB's API for MongoDB
+Last updated: 03/23/2020
+# Use Robo 3T with Azure Cosmos DB's API for MongoDB
+
+To connect to your Cosmos account using Robo 3T, you must:
+
+* Download and install [Robo 3T](https://robomongo.org/)
+* Have your Cosmos DB [connection string](connect-mongodb-account.md) information
+
+> [!NOTE]
+> Currently, Robo 3T v1.2 and lower versions are supported with Cosmos DB's API for MongoDB.
+
+## Connect using Robo 3T
+
+To add your Cosmos account to the Robo 3T connection manager, perform the following steps:
+
+1. Retrieve the connection information for your Cosmos account configured with Azure Cosmos DB's API for MongoDB using the instructions [here](connect-mongodb-account.md).
+
+ :::image type="content" source="./media/connect-using-robomongo/connectionstringblade.png" alt-text="Screenshot of the connection string blade":::
+2. Run the *Robomongo* application.
+
+3. Click the connection button under **File** to manage your connections. Then, click **Create** in the **MongoDB Connections** window, which will open up the **Connection Settings** window.
+
+4. In the **Connection Settings** window, choose a name. Then, find the **Host** and **Port** from your connection information in Step 1 and enter them into **Address** and **Port**, respectively.
+
+ :::image type="content" source="./media/connect-using-robomongo/manageconnections.png" alt-text="Screenshot of the Robomongo Manage Connections":::
+5. On the **Authentication** tab, click **Perform authentication**. Then, enter your Database (default is *Admin*), **User Name** and **Password**.
+Both **User Name** and **Password** can be found in your connection information in Step 1.
+
+ :::image type="content" source="./media/connect-using-robomongo/authentication.png" alt-text="Screenshot of the Robomongo Authentication Tab":::
+6. On the **SSL** tab, check **Use SSL protocol**, then change the **Authentication Method** to **Self-signed Certificate**.
+
+ :::image type="content" source="./media/connect-using-robomongo/ssl.png" alt-text="Screenshot of the Robomongo SSL Tab":::
+7. Finally, click **Test** to verify that you are able to connect, then **Save**.
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/consistency-mapping.md
+
+ Title: Mapping consistency levels for Azure Cosmos DB API for MongoDB
+description: Mapping consistency levels for Azure Cosmos DB API for MongoDB.
+Last updated: 10/12/2020
+# Consistency levels for Azure Cosmos DB and the API for MongoDB
+
+Unlike Azure Cosmos DB, native MongoDB does not provide precisely defined consistency guarantees. Instead, native MongoDB allows users to configure the following consistency guarantees: a write concern, a read concern, and the isMaster directive, which directs read operations to either primary or secondary replicas to achieve the desired consistency level.
+
+When using Azure Cosmos DB's API for MongoDB, the MongoDB driver treats your write region as the primary replica and all other regions as read replicas. You can choose which region associated with your Azure Cosmos account acts as the primary replica.
+
+> [!NOTE]
+> The default consistency model for Azure Cosmos DB is Session. Session is a client-centric consistency model that is not natively supported by either Cassandra or MongoDB. For more information on which consistency model to choose, see [Consistency levels in Azure Cosmos DB](../consistency-levels.md)
+
+While using Azure Cosmos DB's API for MongoDB:
+
+* The write concern is mapped to the default consistency level configured on your Azure Cosmos account.
+
+* Azure Cosmos DB dynamically maps the read concern specified by the MongoDB client driver to one of the Azure Cosmos DB consistency levels, configured dynamically on a read request.
+
+* You can annotate a specific region associated with your Azure Cosmos account as "Primary" by making it the first writable region.
+
+## Mapping consistency levels
+
+The following table illustrates how the native MongoDB write/read concerns are mapped to the Azure Cosmos consistency levels when using Azure Cosmos DB's API for MongoDB:
++
+If your Azure Cosmos account is configured with a consistency level other than strong consistency, you can find the probability that your clients get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal; to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get stronger consistency than the consistency level currently configured on your Azure Cosmos account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+
+## Next steps
+
+Learn more about global distribution and consistency levels for Azure Cosmos DB:
+
+* [Global distribution overview](../distribute-data-globally.md)
+* [Consistency Level overview](../consistency-levels.md)
+* [High availability](../high-availability.md)
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-dotnet.md
+
+ Title: Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK
+description: Presents a .NET code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
+++++
+ms.devlang: dotnet
+Last updated: 10/15/2020
+# Quickstart: Build a .NET web app using Azure Cosmos DB's API for MongoDB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-mongodb-dotnet.md)
+> * [Java](create-mongodb-java.md)
+> * [Node.js](create-mongodb-nodejs.md)
+> * [Xamarin](create-mongodb-xamarin.md)
+> * [Golang](create-mongodb-go.md)
+>
+
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can quickly create and query document, key/value and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
+
+This quickstart demonstrates how to create a Cosmos account with [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md). You'll then build and deploy a tasks list web app built using the [MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/).
+
+## Prerequisites to run the sample app
+
+* [Visual Studio](https://www.visualstudio.com/downloads/)
+* An Azure Cosmos DB account.
+
+If you don't already have Visual Studio, download [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload installed with setup.
++
+<a id="create-account"></a>
+## Create a database account
++
+The sample described in this article is compatible with MongoDB.Driver version 2.6.1.
+
+## Clone the sample app
+
+Run the following commands in a Git-enabled command window, such as [Git bash](https://git-scm.com/downloads):
+
+```bash
+mkdir "C:\git-samples"
+cd "C:\git-samples"
+git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started.git
+```
+
+The preceding commands:
+
+1. Create the *C:\git-samples* directory for the sample. Choose a folder appropriate for your operating system.
+1. Change your current directory to the *C:\git-samples* folder.
+1. Clone the sample into the *C:\git-samples* folder.
+
+If you don't wish to use git, you can also [download the project as a ZIP file](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started/archive/master.zip).
+
+## Review the code
+
+1. In Visual Studio, right-click on the project in **Solution Explorer** and then click **Manage NuGet Packages**.
+1. In the NuGet **Browse** box, type *MongoDB.Driver*.
+1. From the results, install the **MongoDB.Driver** library. This installs the MongoDB.Driver package as well as all dependencies.
+
+The following steps are optional. If you're interested in learning how the database resources are created in the code, review the following snippets. Otherwise, skip ahead to [Update your connection string](#update-the-connection-string).
+
+The following snippets are from the *DAL/Dal.cs* file.
+
+* The following code initializes the client:
+
+ ```cs
+ MongoClientSettings settings = new MongoClientSettings();
+ settings.Server = new MongoServerAddress(host, 10255);
+ settings.UseSsl = true;
+ settings.SslSettings = new SslSettings();
+ settings.SslSettings.EnabledSslProtocols = SslProtocols.Tls12;
+
+ MongoIdentity identity = new MongoInternalIdentity(dbName, userName);
+ MongoIdentityEvidence evidence = new PasswordEvidence(password);
+
+ settings.Credential = new MongoCredential("SCRAM-SHA-1", identity, evidence);
+
+ MongoClient client = new MongoClient(settings);
+ ```
+
+* The following code retrieves the database and the collection:
+
+ ```cs
+ private string dbName = "Tasks";
+ private string collectionName = "TasksList";
+
+ var database = client.GetDatabase(dbName);
+ var todoTaskCollection = database.GetCollection<MyTask>(collectionName);
+ ```
+
+* The following code retrieves all documents:
+
+ ```cs
+ collection.Find(new BsonDocument()).ToList();
+ ```
+
+The following code creates a task and inserts it into the collection:
+
+ ```csharp
+ public void CreateTask(MyTask task)
+ {
+ var collection = GetTasksCollectionForEdit();
+ try
+ {
+ collection.InsertOne(task);
+ }
+ catch (MongoCommandException ex)
+ {
+ string msg = ex.Message;
+ }
+ }
+ ```
+ Similarly, you can update and delete documents by using the [collection.UpdateOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.updateOne/) and [collection.DeleteOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.deleteOne/) methods.
+
+## Update the connection string
+
+From the Azure portal copy the connection string information:
+
+1. In the [Azure portal](https://portal.azure.com/), select your Cosmos account, in the left navigation click **Connection String**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the Username, Password, and Host into the Dal.cs file in the next step.
+
+2. Open the *DAL/Dal.cs* file.
+
+3. Copy the **username** value from the portal (using the copy button) and make it the value of the **username** in the **Dal.cs** file.
+
+4. Copy the **host** value from the portal and make it the value of the **host** in the **Dal.cs** file.
+
+5. Copy the **password** value from the portal and make it the value of the **password** in your **Dal.cs** file.
+
+<!-- TODO Store PW correctly-->
+> [!WARNING]
+> Never check passwords or other sensitive data into source code.
+
+You've now updated your app with all the info it needs to communicate with Cosmos DB.
+
+## Run the web app
+
+1. Press CTRL+F5 to run the app. The app launches in your default browser.
+1. Click **Create** in the browser and create a few new tasks in your task list app.
+
+<!--
+## Deploy the app to Azure
+1. In VS, right click .. publish
+2. This is so easy, why is this critical step missed?
+-->
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create a Cosmos account, create a collection, and run a web app. You can now import additional data to your Cosmos database.
+
+> [!div class="nextstepaction"]
+> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-go.md
+
+ Title: Connect a Go application to Azure Cosmos DB's API for MongoDB
+description: This quickstart demonstrates how to connect an existing Go application to Azure Cosmos DB's API for MongoDB.
++++
+ms.devlang: go
+Last updated: 04/24/2020
+# Quickstart: Connect a Go application to Azure Cosmos DB's API for MongoDB
+
+> [!div class="op_single_selector"]
+> * [.NET](create-mongodb-dotnet.md)
+> * [Java](create-mongodb-java.md)
+> * [Node.js](create-mongodb-nodejs.md)
+> * [Xamarin](create-mongodb-xamarin.md)
+> * [Golang](create-mongodb-go.md)
+>
+
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. In this quickstart, you create and manage an Azure Cosmos DB account by using the Azure Cloud Shell, clone an existing sample application from GitHub and configure it to work with Azure Cosmos DB.
+
+The sample application is a command-line based `todo` management tool written in Go. Azure Cosmos DB's API for MongoDB is [compatible with the MongoDB wire protocol](./mongodb-introduction.md), making it possible for any MongoDB client driver to connect to it. The application uses the [Go driver for MongoDB](https://github.com/mongodb/mongo-go-driver); the fact that the data is stored in an Azure Cosmos DB database is transparent to the application.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.
+- [Go](https://golang.org/) installed on your computer, and a working knowledge of Go.
+- [Git](https://git-scm.com/downloads).
+
+## Clone the sample application
+
+Run the following commands to clone the sample repository.
+
+1. Open a command prompt, create a new folder named `git-samples`, then close the command prompt.
+
+ ```bash
+ mkdir "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/cosmosdb-go-mongodb-quickstart
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the application works, you can review the following snippets. Otherwise, you can skip ahead to [Run the application](#run-the-application). The application layout is as follows:
+
+```bash
+.
+├── go.mod
+├── go.sum
+└── todo.go
+```
+
+The following snippets are all taken from the `todo.go` file.
+
+### Connecting the Go app to Azure Cosmos DB
+
+[`clientOptions`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions) encapsulates the connection string for Azure Cosmos DB, which is passed in using an environment variable (details in the upcoming section). The connection is initialized using [`mongo.NewClient`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#NewClient), to which the `clientOptions` instance is passed. The [`Ping` function](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Client.Ping) is invoked to confirm successful connectivity (a fail-fast strategy).
+
+```go
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
+ defer cancel()
+
+ clientOptions := options.Client().ApplyURI(mongoDBConnectionString).SetDirect(true)
+
+ c, err := mongo.NewClient(clientOptions)
+ err = c.Connect(ctx)
+ if err != nil {
+ log.Fatalf("unable to initialize connection %v", err)
+ }
+
+ err = c.Ping(ctx, nil)
+ if err != nil {
+ log.Fatalf("unable to connect %v", err)
+ }
+```
+
+> [!NOTE]
+> Using the [`SetDirect(true)`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo/options?tab=doc#ClientOptions.SetDirect) configuration is important, without which you will get the following connectivity error: `unable to connect connection(cdb-ms-prod-<azure-region>-cm1.documents.azure.com:10255[-4]) connection is closed`
+>
+
+### Create a `todo` item
+
+To create a `todo`, we get a handle to a [`mongo.Collection`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection) and invoke the [`InsertOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.InsertOne) function.
+
+```go
+func create(desc string) {
+ c := connect()
+ ctx := context.Background()
+ defer c.Disconnect(ctx)
+
+ todoCollection := c.Database(database).Collection(collection)
+ r, err := todoCollection.InsertOne(ctx, Todo{Description: desc, Status: statusPending})
+ if err != nil {
+ log.Fatalf("failed to add todo %v", err)
+ }
+```
+
+We pass in a `Todo` struct that contains the description and the status (which is initially set to `pending`)
+
+```go
+type Todo struct {
+ ID primitive.ObjectID `bson:"_id,omitempty"`
+ Description string `bson:"description"`
+ Status string `bson:"status"`
+}
+```
+### List `todo` items
+
+We can list TODOs based on criteria. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) is created to encapsulate the filter criteria
+
+```go
+func list(status string) {
+ .....
+ var filter interface{}
+ switch status {
+ case listAllCriteria:
+ filter = bson.D{}
+ case statusCompleted:
+ filter = bson.D{{statusAttribute, statusCompleted}}
+ case statusPending:
+ filter = bson.D{{statusAttribute, statusPending}}
+ default:
+ log.Fatal("invalid criteria for listing todo(s)")
+ }
+```
+
+[`Find`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.Find) is used to search for documents based on the filter and the result is converted into a slice of `Todo`
+
+```go
+ todoCollection := c.Database(database).Collection(collection)
+ rs, err := todoCollection.Find(ctx, filter)
+ if err != nil {
+ log.Fatalf("failed to list todo(s) %v", err)
+ }
+ var todos []Todo
+ err = rs.All(ctx, &todos)
+ if err != nil {
+ log.Fatalf("failed to list todo(s) %v", err)
+ }
+```
+
+Finally, the information is rendered in tabular format
+
+```go
+ todoTable := [][]string{}
+
+ for _, todo := range todos {
+ s, _ := todo.ID.MarshalJSON()
+ todoTable = append(todoTable, []string{string(s), todo.Description, todo.Status})
+ }
+
+ table := tablewriter.NewWriter(os.Stdout)
+ table.SetHeader([]string{"ID", "Description", "Status"})
+
+ for _, v := range todoTable {
+ table.Append(v)
+ }
+ table.Render()
+```
+
+### Update a `todo` item
+
+A `todo` can be updated based on its `_id`. A [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) filter is created based on the `_id` and another one is created for the updated information, which is a new status (`completed` or `pending`) in this case. Finally, the [`UpdateOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.UpdateOne) function is invoked with the filter and the updated document
+
+```go
+func update(todoid, newStatus string) {
+....
+ todoCollection := c.Database(database).Collection(collection)
+ oid, err := primitive.ObjectIDFromHex(todoid)
+ if err != nil {
+ log.Fatalf("failed to update todo %v", err)
+ }
+ filter := bson.D{{"_id", oid}}
+ update := bson.D{{"$set", bson.D{{statusAttribute, newStatus}}}}
+ _, err = todoCollection.UpdateOne(ctx, filter, update)
+ if err != nil {
+ log.Fatalf("failed to update todo %v", err)
+ }
+```
+
+### Delete a `todo`
+
+A `todo` is deleted based on its `_id` and it is encapsulated in the form of a [`bson.D`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/bson?tab=doc#D) instance. [`DeleteOne`](https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.2/mongo?tab=doc#Collection.DeleteOne) is invoked to delete the document.
+
+```go
+func delete(todoid string) {
+....
+ todoCollection := c.Database(database).Collection(collection)
+ oid, err := primitive.ObjectIDFromHex(todoid)
+ if err != nil {
+ log.Fatalf("invalid todo ID %v", err)
+ }
+ filter := bson.D{{"_id", oid}}
+ _, err = todoCollection.DeleteOne(ctx, filter)
+ if err != nil {
+ log.Fatalf("failed to delete todo %v", err)
+ }
+}
+```
+
+## Build the application
+
+Change into the directory where you cloned the application and build it (using `go build`).
+
+```bash
+cd cosmosdb-go-mongodb-quickstart
+go build -o todo
+```
+
+To confirm that the application was built properly, run:
+
+```bash
+./todo --help
+```
+
+## Set up Azure Cosmos DB
+
+### Sign in to Azure
+
+If you choose to install and use the CLI locally, this topic requires Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+If you are using an installed Azure CLI, sign in to your Azure subscription with the [az login](/cli/azure/reference-index#az_login) command and follow the on-screen directions. You can skip this step if you're using the Azure Cloud Shell.
+
+```azurecli
+az login
+```
+
+### Add the Azure Cosmos DB module
+
+If you are using an installed Azure CLI, check to see if the `cosmosdb` component is already installed by running the `az` command. If `cosmosdb` is in the list of base commands, proceed to the next command. You can skip this step if you're using the Azure Cloud Shell.
+
+If `cosmosdb` is not in the list of base commands, reinstall [Azure CLI](/cli/azure/install-azure-cli).
+
+### Create a resource group
+
+Create a [resource group](../../azure-resource-manager/management/overview.md) with the [az group create](/cli/azure/group#az_group_create). An Azure resource group is a logical container into which Azure resources like web apps, databases and storage accounts are deployed and managed.
+
+The following example creates a resource group in the West Europe region. Choose a unique name for the resource group.
+
+If you are using Azure Cloud Shell, select **Try It**, follow the onscreen prompts to log in, then copy the command into the command prompt.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location "West Europe"
+```
+
+### Create an Azure Cosmos DB account
+
+Create a Cosmos account with the [az cosmosdb create](/cli/azure/cosmosdb#az_cosmosdb_create) command.
+
+In the following command, substitute your own unique Cosmos account name for the `<cosmosdb-name>` placeholder. This unique name is used as part of your Cosmos DB endpoint (`https://<cosmosdb-name>.documents.azure.com/`), so it needs to be unique across all Cosmos accounts in Azure.
+
+```azurecli-interactive
+az cosmosdb create --name <cosmosdb-name> --resource-group myResourceGroup --kind MongoDB
+```
+
+The `--kind MongoDB` parameter enables MongoDB client connections.
+
+When the Azure Cosmos DB account is created, the Azure CLI shows information similar to the following example.
+
+> [!NOTE]
+> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
+
+```json
+{
+ "databaseAccountOfferType": "Standard",
+ "documentEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Document
+DB/databaseAccounts/<cosmosdb-name>",
+ "kind": "MongoDB",
+ "location": "West Europe",
+ "name": "<cosmosdb-name>",
+ "readLocations": [
+ {
+ "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
+ "failoverPriority": 0,
+ "id": "<cosmosdb-name>-westeurope",
+ "locationName": "West Europe",
+ "provisioningState": "Succeeded"
+ }
+ ],
+ "resourceGroup": "myResourceGroup",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "writeLocations": [
+ {
+ "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
+ "failoverPriority": 0,
+ "id": "<cosmosdb-name>-westeurope",
+ "locationName": "West Europe",
+ "provisioningState": "Succeeded"
+ }
+ ]
+}
+```
+
+### Retrieve the database key
+
+In order to connect to a Cosmos database, you need the database key. Use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command to retrieve the primary key.
+
+```azurecli-interactive
+az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup --query "primaryMasterKey"
+```
+
+The Azure CLI outputs information similar to the following example.
+
+```json
+"RUayjYjixJDWG5xTqIiXjC..."
+```
+
+## Configure the application
+
+<a name="devconfig"></a>
+### Export the connection string, MongoDB database and collection names as environment variables.
+
+```bash
+export MONGODB_CONNECTION_STRING="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
+```
+
+> [!NOTE]
+> The `ssl=true` option is important because of Cosmos DB requirements. For more information, see [Connection string requirements](connect-mongodb-account.md#connection-string-requirements).
+>
+
+For the `MONGODB_CONNECTION_STRING` environment variable, replace the placeholders for `<COSMOSDB_ACCOUNT_NAME>` and `<COSMOSDB_PASSWORD>`
+
+1. `<COSMOSDB_ACCOUNT_NAME>`: The name of the Azure Cosmos DB account you created
+2. `<COSMOSDB_PASSWORD>`: The database key extracted in the previous step
+
+```bash
+export MONGODB_DATABASE=todo-db
+export MONGODB_COLLECTION=todos
+```
+
+You can choose your preferred values for `MONGODB_DATABASE` and `MONGODB_COLLECTION` or leave them as is.
+
+## Run the application
+
+To create a `todo`
+
+```bash
+./todo --create "Create an Azure Cosmos DB database account"
+```
+
+If successful, you should see an output with the MongoDB `_id` of the newly created document:
+
+```bash
+added todo ObjectID("5e9fd6befd2f076d1f03bd8a")
+```
+
+Create another `todo`
+
+```bash
+./todo --create "Get the MongoDB connection string using the Azure CLI"
+```
+
+List all the `todo`s
+
+```bash
+./todo --list all
+```
+
+You should see the ones you just added, in a tabular format like the following:
+
+```bash
++----------------------------+----------------------------+---------+
+|             ID             |        DESCRIPTION         | STATUS  |
++----------------------------+----------------------------+---------+
+| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB  | pending |
+|                            | database account           |         |
+| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
+|                            | string using the Azure CLI |         |
++----------------------------+----------------------------+---------+
+```
+
+To update the status of a `todo` (e.g. change it to `completed` status), use the `todo` ID
+
+```bash
+./todo --update 5e9fd6b1bcd2fa6bd267d4c4,completed
+```
+
+List only the completed `todo`s
+
+```bash
+./todo --list completed
+```
+
+You should see the one you just updated
+
+```bash
++----------------------------+---------------------------+-----------+
+|             ID             |        DESCRIPTION        |  STATUS   |
++----------------------------+---------------------------+-----------+
+| "5e9fd6b1bcd2fa6bd267d4c4" | Create an Azure Cosmos DB | completed |
+|                            | database account          |           |
++----------------------------+---------------------------+-----------+
+```
+
+### View data in Data Explorer
+
+Data stored in Azure Cosmos DB is available to view and query in the Azure portal.
+
+To view, query, and work with the user data created in the previous step, log in to the [Azure portal](https://portal.azure.com) in your web browser.
+
+In the top Search box, enter **Azure Cosmos DB**. When your Cosmos account blade opens, select your Cosmos account. In the left navigation, select **Data Explorer**. Expand your collection in the Collections pane, and then you can view the documents in the collection, query the data, and even create and run stored procedures, triggers, and UDFs.
+++
+Delete a `todo` using its ID
+
+```bash
+./todo --delete 5e9fd6b1bcd2fa6bd267d4c4
+```
+
+List the `todo`s to confirm
+
+```bash
+./todo --list all
+```
+
+The `todo` you just deleted should not be present
+
+```bash
++----------------------------+----------------------------+---------+
+|             ID             |        DESCRIPTION         | STATUS  |
++----------------------------+----------------------------+---------+
+| "5e9fd6befd2f076d1f03bd8a" | Get the MongoDB connection | pending |
+|                            | string using the Azure CLI |         |
++----------------------------+----------------------------+---------+
+```
+
+## Clean up resources
++
+## Next steps
+
+In this