Updates from: 03/26/2021 04:10:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claim Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/claim-resolver-overview.md
The following sections list available claim resolvers.
| {OIDC:LoginHint} | The `login_hint` query string parameter. | someone@contoso.com |
| {OIDC:MaxAge} | The `max_age` query string parameter. | N/A |
| {OIDC:Nonce} | The `nonce` query string parameter. | defaultNonce |
-| {OIDC:Password}| The [resource owner password credentials flow](ropc-custom.md) user's password.| password1|
+| {OIDC:Password}| The [resource owner password credentials flow](add-ropc-policy.md) user's password.| password1|
| {OIDC:Prompt} | The `prompt` query string parameter. | login |
| {OIDC:RedirectUri} | The `redirect_uri` query string parameter. | https://jwt.ms |
| {OIDC:Resource} | The `resource` query string parameter. | N/A |
| {OIDC:Scope} | The `scope` query string parameter. | openid |
-| {OIDC:Username}| The [resource owner password credentials flow](ropc-custom.md) user's username.| emily@contoso.com|
+| {OIDC:Username}| The [resource owner password credentials flow](add-ropc-policy.md) user's username.| emily@contoso.com|
### Context
active-directory-b2c Create User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/create-user-flow.md
- Title: Create a user flow - Azure Active Directory B2C
-description: Learn how to create user flows in the Azure portal to enable sign-up, sign-in, and user profile editing for your applications in Azure Active Directory B2C.
- Previously updated: 07/30/2020
-# Create a user flow in Azure Active Directory B2C
-
-You can create [user flows](user-flow-overview.md) of different types in your Azure Active Directory B2C (Azure AD B2C) tenant and use them in your applications as needed. User flows can be reused across applications.
-
-> [!IMPORTANT]
-> We've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into **Recommended** (next-generation preview) and **Standard** (generally available) versions. All V1.1 and V2 legacy preview user flows are on a path to deprecation by **August 1, 2021**. For details, see [User flow versions in Azure AD B2C](user-flow-versions.md).
-
-## Before you begin
-
-- **Register the application** you want to use to test the new user flow. For an example, see the [Tutorial: Register a web application in Azure AD B2C](tutorial-register-applications.md).
-- **Add external identity providers** if you want to enable user sign-in with providers like Azure AD, Amazon, Facebook, GitHub, LinkedIn, Microsoft, or Twitter. See [Add identity providers to your applications in Azure AD B2C](add-identity-provider.md).
-- **Configure the local account identity provider** to specify the identity types (email, username, phone number) you want to support for local accounts in your tenant. Then you can choose from these supported identity types when you create individual user flows. When a user completes the user flow, a local account is created in your Azure AD B2C directory, and your **Local account** identity provider authenticates the user's information. Configure your tenant's local account identity provider with these steps:
-
- 1. Sign in to the [Azure portal](https://portal.azure.com/).
- 2. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant.
- 3. In the search bar at the top of the Azure portal, search for and select **Azure AD B2C**.
- 4. Under **Manage**, select **Identity providers**.
- 5. In the identity provider list, select **Local account**.
- 6. In the **Configure local IDP** page, select all the identity types you want to support. Selecting options here simply makes them available for the user flows you create later:
- - **Phone** (preview): Allows a user to enter a phone number, which is verified at sign-up and becomes their user ID.
- - **Email** (default): Allows a user to enter an email address, which is verified at sign-up and becomes their user ID.
- - **Username**: Allows a user to create their own unique user ID. An email address is collected from the user and verified.
- 7. Select **Save**.
-
-## Create a user flow
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-
- ![B2C tenant, Directory and Subscription pane, Azure portal](./media/create-user-flow/directory-subscription-pane.png)
-
-3. In the Azure portal, search for and select **Azure AD B2C**.
-4. Under **Policies**, select **User flows**, and then select **New user flow**.
-
- ![User flows page in portal with New user flow button highlighted](./media/create-user-flow/signup-signin-user-flow.png)
-
-5. On the **Create a user flow** page, select the type of user flow you want to create (see [User flows in Azure AD B2C](user-flow-overview.md) for an overview).
-
- ![Select a user flow page with sign-up and sign-in flow highlighted](./media/create-user-flow/select-user-flow-type.png)
-
-6. Under **Select a version**, select **Recommended**, and then select **Create**. ([Learn more](user-flow-versions.md) about user flow versions.)
-
- ![Create user flow page in Azure portal with properties highlighted](./media/create-user-flow/select-version.png)
-
-7. Enter a **Name** for the user flow (for example, *signupsignin1*, *profileediting1*, *passwordreset1*).
-8. Under **Identity providers**, choose the options depending on the type of user flow you're creating:
-
- - **Local account**. If you want to allow users to create local accounts in your Azure AD B2C tenant, select the type of identifier you want them to use (for example, email, user ID, or phone). Only those identity types that are configured in your [local account identity provider](#before-you-begin) settings are listed.
-
- - **Social identity providers**. If you want to allow user sign-in with [social identity providers you've added](add-identity-provider.md), like Azure AD, Amazon, Facebook, GitHub, LinkedIn, Microsoft, or Twitter, select the providers from the list.
-
-9. For **User attributes and claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. Select **Show more**. Select the attributes and claims, and then select **OK**.
-
- ![Attributes and claims selection page with three claims selected](./media/create-user-flow/signup-signin-attributes.png)
-
-10. Select **Create** to add the user flow. A prefix of *B2C_1* is automatically prepended to the name.
-
-### Test the user flow
-
-1. Select **Policies** > **User flows**, and then select the user flow you created. On the user flow overview page, select **Run user flow**.
-1. For **Application**, select the web application you registered in step 1. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run user flow**.
-1. Depending on the type of user flow you're testing, either sign up using a valid email address and follow the sign-up flow, or sign in using an account that you previously created.
-
- ![Run user flow page in portal with Run user flow button highlighted](./media/create-user-flow/sign-up-sign-in-run-now.png)
-
-1. Follow the user flow prompts. When you complete the user flow, the token is returned to `https://jwt.ms` and should be displayed to you.
-
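The token returned to `https://jwt.ms` is a JWT whose claims can also be inspected locally. A minimal sketch using only the standard library; the token below is a hypothetical placeholder built in place, and no signature verification is performed:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (claims) segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical token just to demonstrate the decoding step.
claims = {"tfp": "B2C_1_signupsignin1", "email": "someone@contoso.com"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{segment}.signature"
print(decode_jwt_payload(token)["email"])  # someone@contoso.com
```

In practice you would paste the real token returned by the user flow; jwt.ms performs the same decoding in the browser.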
-> [!NOTE]
-> The "Run user flow" experience is not currently compatible with the SPA reply URL type using authorization code flow. To use the "Run user flow" experience with these kinds of apps, register a reply URL of type "Web" and enable the implicit flow as described [here](tutorial-register-spa.md).
-
-## Next steps
-
-- [Add Conditional Access to Azure AD B2C user flows](conditional-access-user-flow.md)
-- [Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
Custom policy capabilities are under constant development. The following table summarizes feature availability.
| [OAuth2 authorization code](authorization-code-flow.md) | | | X | |
| OAuth2 authorization code with PKCE | | | X | [Public clients and single-page applications](authorization-code-flow.md) |
| [OAuth2 implicit flow](implicit-flow-single-page-application.md) | | | X | |
-| [OAuth2 resource owner password credentials](ropc-custom.md) | | X | | |
+| [OAuth2 resource owner password credentials](add-ropc-policy.md) | | X | | |
| [OpenID Connect](openid-connect.md) | | | X | |
| [SAML2](saml-service-provider.md) | | | X | POST and Redirect bindings. |
| OAuth1 | | | | Not supported. |
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
You can define an Apple ID as a claims provider by adding it to the **ClaimsProviders** element of your policy file.
<Item Key="response_types">code</Item>
<Item Key="external_user_identity_claim_id">sub</Item>
<Item Key="response_mode">form_post</Item>
- <Item Key="ReadBodyClaimsOnIdpRedirect">user.firstName user.lastName user.email</Item>
+ <Item Key="ReadBodyClaimsOnIdpRedirect">user.name.firstName user.name.lastName user.email</Item>
<Item Key="client_id">Your Apple ID</Item>
<Item Key="UsePolicyInRedirectUri">false</Item>
</Metadata>
active-directory-b2c Phone Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-authentication.md
- Title: Phone sign-up and sign-in with custom policies
-description: Send one-time passwords (OTP) in text messages to your application users' phones with custom policies in Azure Active Directory B2C.
- Previously updated: 09/01/2020
-# Set up phone sign-up and sign-in with custom policies in Azure AD B2C
-
-Phone sign-up and sign-in in Azure Active Directory B2C (Azure AD B2C) enables your users to sign up and sign in to your applications by using a one-time password (OTP) sent in a text message to their phone. One-time passwords can help minimize the risk of your users forgetting or having their passwords compromised.
-
-Follow the steps in this article to use the custom policies to enable your customers to sign up and sign in to your applications by using a one-time password sent to their phone.
-
-## Pricing
-
-One-time passwords are sent to your users by using SMS text messages, and you may be charged for each message sent. For pricing information, see the **Separate Charges** section of [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
-
-## User experience for phone sign-up and sign-in
-
-With phone sign-up and sign-in, the user can sign up for the app using a phone number as their primary identifier. The end user's experience during sign-up and sign-in is described below.
-
-> [!NOTE]
-> We strongly suggest you include consent information in your sign-up and sign-in experience similar to the sample text below. This sample text is for informational purposes only. Please refer to the Short Code Monitoring Handbook on the [CTIA website](https://www.ctia.org/programs) and consult with your own legal or compliance experts for guidance on your final text and feature configuration to meet your own compliance needs:
->
-> *By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign in to *&lt;insert: your application name&gt;*. Standard message and data rates may apply.*
->
-> *&lt;insert: a link to your Privacy Statement&gt;*<br/>*&lt;insert: a link to your Terms of Service&gt;*
-
-To add your own consent information, customize the following sample. Include it in the `LocalizedResources` for the ContentDefinition used by the self-asserted page with the display control (the *Phone_Email_Base.xml* file in the [phone sign-up and sign-in starter pack][starter-pack-phone]):
-
-```xml
-<LocalizedResources Id="phoneSignUp.en">
- <LocalizedStrings>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_msg_intro">By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard message and data rates may apply.</LocalizedString>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_1_text">Privacy Statement</LocalizedString>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_1_url">{insert your privacy statement URL}</LocalizedString>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_2_text">Terms and Conditions</LocalizedString>
- <LocalizedString ElementType="DisplayControl" ElementId="phoneControl" StringId="disclaimer_link_2_url">{insert your terms and conditions URL}</LocalizedString>
- <LocalizedString ElementType="UxElement" StringId="initial_intro">Please verify your country code and phone number</LocalizedString>
- </LocalizedStrings>
-</LocalizedResources>
- ```
-
-### Phone sign-up experience
-
-If the user doesn't already have an account for your application, they can create one by choosing the **Sign up now** link. A sign-up page appears, where the user selects their **Country**, enters their phone number, and selects **Send Code**.
-
-![User starts phone sign-up](media/phone-authentication/phone-signup-start.png)
-
-A one-time verification code is sent to the user's phone number. The user enters the **Verification Code** on the sign-up page, and then selects **Verify Code**. (If the user wasn't able to retrieve the code, they can select **Send New Code**.)
-
-![User verifies code during phone sign-up](media/phone-authentication/phone-signup-verify-code.png)
-
-The user enters any other information requested on the sign-up page. For example, **Display Name**, **Given Name**, and **Surname** (Country and phone number remain populated). If the user wants to use a different phone number, they can choose **Change number** to restart sign-up. When finished, the user selects **Continue**.
-
-![User provides additional info](media/phone-authentication/phone-signup-additional-info.png)
-
-Next, the user is asked to provide a recovery email. The user enters their email address, and then selects **Send verification code**. A code is sent to the user's email inbox, which they can retrieve and enter in the **Verification code** box. Then the user selects **Verify code**.
-
-Once the code is verified, the user selects **Create** to create their account. Or if the user wants to use a different email address, they can choose **Change e-mail**.
-
-![User creates account](media/phone-authentication/email-verification.png)
-
-### Phone sign-in experience
-
-If the user has an existing account with phone number as their identifier, the user enters their phone number and selects **Continue**. They confirm the country and phone number by selecting **Continue**, and a one-time verification code is sent to their phone. The user enters the verification code and selects **Continue** to sign in.
-
-![Phone sign-in user experience](media/phone-authentication/phone-signin-screens.png)
-
-## Deleting a user account
-
-In certain cases you might need to delete a user and associated data from your Azure AD B2C directory. For details about how to delete a user account through the Azure portal, refer to [these instructions](/microsoft-365/compliance/gdpr-dsr-azure#step-5-delete).
----
-## Prerequisites
-
-You need the following resources in place before setting up OTP.
-
-* [Azure AD B2C tenant](tutorial-create-tenant.md)
-* [Web application registered](tutorial-register-applications.md) in your tenant
-* [Custom policies](custom-policy-get-started.md) uploaded to your tenant
-
-## Get the phone sign-up & sign-in starter pack
-
-Start by updating the phone sign-up and sign-in custom policy files to work with your Azure AD B2C tenant.
-
-1. Find the [phone sign-up and sign-in custom policy files][starter-pack-phone] in your local clone of the starter pack repo, or download them directly. The XML policy files are located in the following directory:
-
- `active-directory-b2c-custom-policy-starterpack/scenarios/`**`phone-number-passwordless`**
-
-1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
-
-1. Complete the steps in the [Add application IDs to the custom policy](custom-policy-get-started.md#add-application-ids-to-the-custom-policy) section of [Get started with custom policies in Azure Active Directory B2C](custom-policy-get-started.md). In this case, update `/phone-number-passwordless/`**`Phone_Email_Base.xml`** with the **Application (client) IDs** of the two applications you registered when completing the prerequisites, *IdentityExperienceFramework* and *ProxyIdentityExperienceFramework*.
-
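Step 2 above is a plain string substitution across the policy files. A sketch of automating it, assuming the starter pack directory layout; `contosob2c` in the usage note is a placeholder tenant name:

```python
from pathlib import Path

def retarget_policies(policy_dir: str, tenant: str) -> int:
    """Replace the starter pack's 'yourtenant' placeholder with a real tenant name.

    Returns the number of XML files that were changed.
    """
    changed = 0
    for xml_file in Path(policy_dir).glob("*.xml"):
        text = xml_file.read_text(encoding="utf-8")
        if "yourtenant" in text:
            xml_file.write_text(text.replace("yourtenant", tenant), encoding="utf-8")
            changed += 1
    return changed
```

For example, `retarget_policies("scenarios/phone-number-passwordless", "contosob2c")` rewrites every occurrence of `yourtenant.onmicrosoft.com` to `contosob2c.onmicrosoft.com`.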
-## Upload the policy files
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure AD B2C tenant.
-1. Under **Policies**, select **Identity Experience Framework**.
-1. Select **Upload custom policy**.
-1. Upload the policy files in the following order:
- 1. *Phone_Email_Base.xml*
- 1. *SignUpOrSignInWithPhone.xml*
- 1. *SignUpOrSignInWithPhoneOrEmail.xml*
- 1. *ProfileEditPhoneOnly.xml*
- 1. *ProfileEditPhoneEmail.xml*
- 1. *ChangePhoneNumber.xml*
- 1. *PasswordResetEmail.xml*
-
-As you upload each file, Azure adds the prefix `B2C_1A_`.
-
-## Test the custom policy
-
-1. Under **Custom policies**, select **B2C_1A_SignUpOrSignInWithPhone**.
-1. Under **Select application**, select the *webapp1* application that you registered when completing the prerequisites.
-1. For **Select reply url**, choose `https://jwt.ms`.
-1. Select **Run now** and sign up using an email address or a phone number.
-1. Select **Run now** once again and sign in with the same account to confirm that you have the correct configuration.
-
-## Get user account by phone number
-
-A user who signs up with a phone number and no recovery email address is recorded in your Azure AD B2C directory with their phone number as their sign-in name. To change the phone number, your help desk or support team must first find their account, and then update their phone number.
-
-You can find a user by their phone number (sign-in name) by using [Microsoft Graph](microsoft-graph-operations.md):
-
-```http
-GET https://graph.microsoft.com/v1.0/users?$filter=identities/any(c:c/issuerAssignedId eq '+{phone number}' and c/issuer eq '{tenant name}.onmicrosoft.com')
-```
-
-For example:
-
-```http
-GET https://graph.microsoft.com/v1.0/users?$filter=identities/any(c:c/issuerAssignedId eq '+450334567890' and c/issuer eq 'contosob2c.onmicrosoft.com')
-```
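The `$filter` expression above is just a string that must be URL-encoded before it is sent. A sketch that builds the request URL; no call is made here, since sending it requires a Microsoft Graph access token, which is outside this snippet:

```python
from urllib.parse import quote

GRAPH_USERS = "https://graph.microsoft.com/v1.0/users"

def phone_lookup_url(phone_number: str, tenant: str) -> str:
    """Build the Graph URL that finds a user whose sign-in name is a phone number."""
    flt = (f"identities/any(c:c/issuerAssignedId eq '{phone_number}' "
           f"and c/issuer eq '{tenant}.onmicrosoft.com')")
    # quote() percent-encodes the '+' sign and spaces so the filter survives transport.
    return f"{GRAPH_USERS}?$filter={quote(flt)}"

print(phone_lookup_url("+450334567890", "contosob2c"))
```

Note that the leading `+` of the phone number must arrive as `%2B`; an unencoded `+` would be decoded as a space and the lookup would silently match nothing.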
-
-## Next steps
-
-You can find the phone sign-up and sign-in custom policy starter pack (and other starter packs) on GitHub:
- [Azure-Samples/active-directory-b2c-custom-policy-starterpack/scenarios/phone-number-passwordless][starter-pack-phone]
- The starter pack policy files use multi-factor authentication technical profiles and phone number claims transformations:
-* [Define an Azure AD Multi-Factor Authentication technical profile](multi-factor-auth-technical-profile.md)
-* [Define phone number claims transformations](phone-number-claims-transformations.md)
-
-<!-- LINKS - External -->
-[starter-pack]: https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
-[starter-pack-phone]: https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/phone-number-passwordless
active-directory-b2c Ropc Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/ropc-custom.md
- Title: Configure the resource owner password credentials flow with custom policies-
-description: Learn how to configure the resource owner password credentials (ROPC) flow by using custom policies in Azure Active Directory B2C.
- Previously updated: 05/12/2020
-# Configure the resource owner password credentials flow in Azure Active Directory B2C using a custom policy
--
-In Azure Active Directory B2C (Azure AD B2C), the resource owner password credentials (ROPC) flow is an OAuth standard authentication flow. In this flow, an application, also known as the relying party, exchanges valid credentials for tokens. The credentials include a user ID and password. The tokens returned are an ID token, access token, and a refresh token.
--
-## Prerequisites
-
-Complete the steps in [Get started with custom policies in Azure Active Directory B2C](custom-policy-get-started.md).
-
-## Register an application
--
-## Create a resource owner policy
-
-1. Open the *TrustFrameworkExtensions.xml* file.
-2. If it doesn't exist already, add a **ClaimsSchema** element and its child elements as the first element under the **BuildingBlocks** element:
-
- ```xml
- <ClaimsSchema>
- <ClaimType Id="logonIdentifier">
- <DisplayName>User name or email address that the user can use to sign in</DisplayName>
- <DataType>string</DataType>
- </ClaimType>
- <ClaimType Id="resource">
- <DisplayName>The resource parameter passes to the ROPC endpoint</DisplayName>
- <DataType>string</DataType>
- </ClaimType>
- <ClaimType Id="refreshTokenIssuedOnDateTime">
- <DisplayName>An internal parameter used to determine whether the user should be permitted to authenticate again using their existing refresh token.</DisplayName>
- <DataType>string</DataType>
- </ClaimType>
- <ClaimType Id="refreshTokensValidFromDateTime">
- <DisplayName>An internal parameter used to determine whether the user should be permitted to authenticate again using their existing refresh token.</DisplayName>
- <DataType>string</DataType>
- </ClaimType>
- </ClaimsSchema>
- ```
-
-3. After **ClaimsSchema**, add a **ClaimsTransformations** element and its child elements to the **BuildingBlocks** element:
-
- ```xml
- <ClaimsTransformations>
- <ClaimsTransformation Id="CreateSubjectClaimFromObjectID" TransformationMethod="CreateStringClaim">
- <InputParameters>
- <InputParameter Id="value" DataType="string" Value="Not supported currently. Use oid claim." />
- </InputParameters>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="sub" TransformationClaimType="createdClaim" />
- </OutputClaims>
- </ClaimsTransformation>
-
- <ClaimsTransformation Id="AssertRefreshTokenIssuedLaterThanValidFromDate" TransformationMethod="AssertDateTimeIsGreaterThan">
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="refreshTokenIssuedOnDateTime" TransformationClaimType="leftOperand" />
- <InputClaim ClaimTypeReferenceId="refreshTokensValidFromDateTime" TransformationClaimType="rightOperand" />
- </InputClaims>
- <InputParameters>
- <InputParameter Id="AssertIfEqualTo" DataType="boolean" Value="false" />
- <InputParameter Id="AssertIfRightOperandIsNotPresent" DataType="boolean" Value="true" />
- </InputParameters>
- </ClaimsTransformation>
- </ClaimsTransformations>
- ```
-
-4. Locate the **ClaimsProvider** element that has a **DisplayName** of `Local Account SignIn` and add following technical profile:
-
- ```xml
- <TechnicalProfile Id="ResourceOwnerPasswordCredentials-OAUTH2">
- <DisplayName>Local Account SignIn</DisplayName>
- <Protocol Name="OpenIdConnect" />
- <Metadata>
- <Item Key="UserMessageIfClaimsPrincipalDoesNotExist">We can't seem to find your account</Item>
- <Item Key="UserMessageIfInvalidPassword">Your password is incorrect</Item>
- <Item Key="UserMessageIfOldPasswordUsed">Looks like you used an old password</Item>
- <Item Key="DiscoverMetadataByTokenIssuer">true</Item>
- <Item Key="ValidTokenIssuerPrefixes">https://sts.windows.net/</Item>
- <Item Key="METADATA">https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration</Item>
- <Item Key="authorization_endpoint">https://login.microsoftonline.com/{tenant}/oauth2/token</Item>
- <Item Key="response_types">id_token</Item>
- <Item Key="response_mode">query</Item>
- <Item Key="scope">email openid</Item>
- <Item Key="grant_type">password</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="logonIdentifier" PartnerClaimType="username" Required="true" DefaultValue="{OIDC:Username}"/>
- <InputClaim ClaimTypeReferenceId="password" Required="true" DefaultValue="{OIDC:Password}" />
- <InputClaim ClaimTypeReferenceId="grant_type" DefaultValue="password" />
- <InputClaim ClaimTypeReferenceId="scope" DefaultValue="openid" />
- <InputClaim ClaimTypeReferenceId="nca" PartnerClaimType="nca" DefaultValue="1" />
- <InputClaim ClaimTypeReferenceId="client_id" DefaultValue="ProxyIdentityExperienceFrameworkAppId" />
- <InputClaim ClaimTypeReferenceId="resource_id" PartnerClaimType="resource" DefaultValue="IdentityExperienceFrameworkAppId" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="oid" />
- <OutputClaim ClaimTypeReferenceId="userPrincipalName" PartnerClaimType="upn" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromObjectID" />
- </OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
- </TechnicalProfile>
- ```
-
- Replace the **DefaultValue** of **client_id** with the Application ID of the ProxyIdentityExperienceFramework application that you created in the prerequisite tutorial. Then replace **DefaultValue** of **resource_id** with the Application ID of the IdentityExperienceFramework application that you also created in the prerequisite tutorial.
-
-5. Add following **ClaimsProvider** elements with their technical profiles to the **ClaimsProviders** element:
-
- ```xml
- <ClaimsProvider>
- <DisplayName>Azure Active Directory</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="AAD-UserReadUsingObjectId-CheckRefreshTokenDate">
- <Metadata>
- <Item Key="Operation">Read</Item>
- <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">true</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="objectId" Required="true" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="objectId" />
- <OutputClaim ClaimTypeReferenceId="refreshTokensValidFromDateTime" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="AssertRefreshTokenIssuedLaterThanValidFromDate" />
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromObjectID" />
- </OutputClaimsTransformations>
- <IncludeTechnicalProfile ReferenceId="AAD-Common" />
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
-
- <ClaimsProvider>
- <DisplayName>Session Management</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="SM-RefreshTokenReadAndSetup">
- <DisplayName>Trustframework Policy Engine Refresh Token Setup Technical Profile</DisplayName>
- <Protocol Name="None" />
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="objectId" />
- <OutputClaim ClaimTypeReferenceId="refreshTokenIssuedOnDateTime" />
- </OutputClaims>
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
-
- <ClaimsProvider>
- <DisplayName>Token Issuer</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="JwtIssuer">
- <Metadata>
- <!-- Point to the redeem refresh token user journey-->
- <Item Key="RefreshTokenUserJourneyId">ResourceOwnerPasswordCredentials-RedeemRefreshToken</Item>
- </Metadata>
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
- ```
-
-6. Add a **UserJourneys** element and its child elements to the **TrustFrameworkPolicy** element:
-
- ```xml
- <UserJourney Id="ResourceOwnerPasswordCredentials">
- <PreserveOriginalAssertion>false</PreserveOriginalAssertion>
- <OrchestrationSteps>
- <OrchestrationStep Order="1" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="ResourceOwnerFlow" TechnicalProfileReferenceId="ResourceOwnerPasswordCredentials-OAUTH2" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="2" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="AADUserReadWithObjectId" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="3" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
- </OrchestrationSteps>
- </UserJourney>
- <UserJourney Id="ResourceOwnerPasswordCredentials-RedeemRefreshToken">
- <PreserveOriginalAssertion>false</PreserveOriginalAssertion>
- <OrchestrationSteps>
- <OrchestrationStep Order="1" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="RefreshTokenSetupExchange" TechnicalProfileReferenceId="SM-RefreshTokenReadAndSetup" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="2" Type="ClaimsExchange">
- <ClaimsExchanges>
- <ClaimsExchange Id="CheckRefreshTokenDateFromAadExchange" TechnicalProfileReferenceId="AAD-UserReadUsingObjectId-CheckRefreshTokenDate" />
- </ClaimsExchanges>
- </OrchestrationStep>
- <OrchestrationStep Order="3" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
- </OrchestrationSteps>
- </UserJourney>
- ```
-
-7. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-8. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-9. Click **Upload**.
-
-## Create a relying party file
-
-Next, update the relying party file that initiates the user journey that you created:
-
-1. Make a copy of *SignUpOrSignin.xml* file in your working directory and rename it to *ROPC_Auth.xml*.
-2. Open the new file and change the value of the **PolicyId** attribute for **TrustFrameworkPolicy** to a unique value. The policy ID is the name of your policy. For example, **B2C_1A_ROPC_Auth**.
-3. Change the value of the **ReferenceId** attribute in **DefaultUserJourney** to `ResourceOwnerPasswordCredentials`.
-4. Change the **OutputClaims** element to only contain the following claims:
-
- ```xml
- <OutputClaim ClaimTypeReferenceId="sub" />
- <OutputClaim ClaimTypeReferenceId="objectId" />
- <OutputClaim ClaimTypeReferenceId="displayName" DefaultValue="" />
- <OutputClaim ClaimTypeReferenceId="givenName" DefaultValue="" />
- <OutputClaim ClaimTypeReferenceId="surname" DefaultValue="" />
- ```
-
-5. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-6. Enable **Overwrite the policy if it exists**, and then browse to and select the *ROPC_Auth.xml* file.
-7. Click **Upload**.
-
-## Test the policy
-
-Use your favorite API development application to generate an API call, and review the response to debug your policy. Construct a call like this example with the following information as the body of the POST request:
-
-`https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token`
-
-- Replace `<tenant-name>` with the name of your Azure AD B2C tenant.
-- Replace `B2C_1A_ROPC_Auth` with the full name of your resource owner password credentials policy.
-
-| Key | Value |
-| | -- |
-| username | `user-account` |
-| password | `password1` |
-| grant_type | password |
-| scope | openid `application-id` offline_access |
-| client_id | `application-id` |
-| response_type | token id_token |
-
-- Replace `user-account` with the name of a user account in your tenant.
-- Replace `password1` with the password of the user account.
-- Replace `application-id` with the Application ID from the *ROPC_Auth_app* registration.
-- *offline_access* is optional; include it if you want to receive a refresh token.
-
-The actual POST request looks like the following example:
-
-```https
-POST /<tenant-name>.onmicrosoft.com/oauth2/v2.0/token?B2C_1A_ROPC_Auth HTTP/1.1
-Host: <tenant-name>.b2clogin.com
-Content-Type: application/x-www-form-urlencoded
-
-username=contosouser.outlook.com.ws&password=Passxword1&grant_type=password&scope=openid+bef22d56-552f-4a5b-b90a-1988a7d634ce+offline_access&client_id=bef22d56-552f-4a5b-b90a-1988a7d634ce&response_type=token+id_token
-```
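For comparison, the same form-encoded body can be assembled programmatically. This Python sketch uses placeholder values (the tenant name `contoso` and the application ID from the example above are assumptions):

```python
from urllib.parse import urlencode

# Placeholder values -- replace with your tenant and app registration.
tenant = "contoso"
app_id = "bef22d56-552f-4a5b-b90a-1988a7d634ce"

token_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com"
    "/B2C_1A_ROPC_Auth/oauth2/v2.0/token"
)

# Form-encoded body matching the table above; spaces become '+' when encoded.
body = urlencode({
    "username": "user-account",
    "password": "password1",
    "grant_type": "password",
    "scope": f"openid {app_id} offline_access",  # offline_access requests a refresh token
    "client_id": app_id,
    "response_type": "token id_token",
})

print(token_url)
print(body)
```

Send `body` as the POST payload with a `Content-Type: application/x-www-form-urlencoded` header.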
-
-A successful response with offline-access looks like the following example:
-
-```json
-{
- "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik9YQjNhdTNScWhUQWN6R0RWZDM5djNpTmlyTWhqN2wxMjIySnh6TmgwRlki...",
- "token_type": "Bearer",
- "expires_in": "3600",
- "refresh_token": "eyJraWQiOiJacW9pQlp2TW5pYVc2MUY0TnlfR3REVk1EVFBLbUJLb0FUcWQ1ZWFja1hBIiwidmVyIjoiMS4wIiwiemlwIjoiRGVmbGF0ZSIsInNlciI6Ij...",
- "id_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik9YQjNhdTNScWhUQWN6R0RWZDM5djNpTmlyTWhqN2wxMjIySnh6TmgwRlki..."
-}
-```
-
-## Redeem a refresh token
-
-Construct a POST call like the one shown here. Use the information in the following table as the body of the request:
-
-`https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token`
-
-- Replace `<tenant-name>` with the name of your Azure AD B2C tenant.
-- Replace `B2C_1A_ROPC_Auth` with the full name of your resource owner password credentials policy.
-
-| Key | Value |
-| | -- |
-| grant_type | refresh_token |
-| response_type | id_token |
-| client_id | `application-id` |
-| resource | `application-id` |
-| refresh_token | `refresh-token` |
-
-- Replace `application-id` with the Application ID from the *ROPC_Auth_app* registration.
-- Replace `refresh-token` with the **refresh_token** that was sent back in the previous response.
-
-A successful response looks like the following example:
-
-```json
-{
- "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ilg1ZVhrNHh5b2pORnVtMWtsMll0djhkbE5QNC1jNTdkTzZRR1RWQndhT...",
- "id_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ilg1ZVhrNHh5b2pORnVtMWtsMll0djhkbE5QNC1jNTdkTzZRR1RWQn...",
- "token_type": "Bearer",
- "not_before": 1533672990,
- "expires_in": 3600,
- "expires_on": 1533676590,
- "resource": "bef2222d56-552f-4a5b-b90a-1988a7d634c3",
- "id_token_expires_in": 3600,
- "profile_info": "eyJ2ZXIiOiIxLjAiLCJ0aWQiOiI1MTZmYzA2NS1mZjM2LTRiOTMtYWE1YS1kNmVlZGE3Y2JhYzgiLCJzdWIiOm51bGwsIm5hbWUiOiJEYXZpZE11IiwicHJlZmVycmVkX3VzZXJuYW1lIjpudWxsLCJpZHAiOiJMb2NhbEFjY291bnQifQ",
- "refresh_token": "eyJraWQiOiJjcGltY29yZV8wOTI1MjAxNSIsInZlciI6IjEuMCIsInppcCI6IkRlZmxhdGUiLCJzZXIiOiIxLjAi...",
- "refresh_token_expires_in": 1209600
-}
-```
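As with the initial token request, the refresh-token redemption body can be assembled the same way. A minimal Python sketch (the application ID and the truncated refresh token are placeholders):

```python
from urllib.parse import urlencode

app_id = "bef22d56-552f-4a5b-b90a-1988a7d634ce"  # placeholder Application ID
refresh_token = "eyJraWQiOiJacW9pQlp2..."        # value from the previous response

# Form-encoded body matching the refresh-token table above.
body = urlencode({
    "grant_type": "refresh_token",
    "response_type": "id_token",
    "client_id": app_id,
    "resource": app_id,
    "refresh_token": refresh_token,
})
print(body)
```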
-
-## Use a native SDK or App-Auth
-
-Azure AD B2C meets OAuth 2.0 standards for public client resource owner password credentials and should be compatible with most client SDKs. For the latest information, see [Native App SDK for OAuth 2.0 and OpenID Connect implementing modern best practices](https://appauth.io/).
-
-## Next steps
-
-- See a full example of this scenario in the [Azure Active Directory B2C custom policy starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/source/aadb2c-ief-ropc).
-- Learn more about the tokens that are used by Azure Active Directory B2C in the [Token reference](tokens-overview.md).
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
### New articles

-- [Create a user flow in Azure Active Directory B2C](create-user-flow.md)
+- [Create a user flow in Azure Active Directory B2C](add-sign-up-and-sign-in-policy.md)
- [Set up phone sign-up and sign-in for user flows (preview)](phone-authentication-user-flows.md)

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
### Updated articles

- [Set redirect URLs to b2clogin.com for Azure Active Directory B2C](b2clogin.md)
- [Define an OpenID Connect technical profile in an Azure Active Directory B2C custom policy](openid-connect-technical-profile.md)
-- [Set up phone sign-up and sign-in with custom policies in Azure AD B2C](phone-authentication.md)
+- [Set up phone sign-up and sign-in with custom policies in Azure AD B2C](phone-authentication-user-flows.md)
## August 2020
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 03/15/2021 Last updated : 03/25/2021
Helga@contoso.com,1234567,2234567abcdef1234567abcdef,60,Contoso,HardwareKey
> [!NOTE]
> Make sure you include the header row in your CSV file. If a UPN has a single quote, escape it with another single quote. For example, if the UPN is my'user@domain.com, change it to my''user@domain.com when uploading the file.
-Once properly formatted as a CSV file, an administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
+Once properly formatted as a CSV file, a Global Administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists any errors for you to resolve. The field names in the downloaded CSV file are different than the uploaded version.
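Before uploading, the CSV can be sanity-checked locally. The following Python sketch is illustrative only (the column names are taken from the example row above, and `escape_upn` and `validate_oath_csv` are hypothetical helpers, not part of any Azure tooling):

```python
import csv
import io

def escape_upn(upn: str) -> str:
    """Escape single quotes by doubling them, per the note above."""
    return upn.replace("'", "''")

def validate_oath_csv(text: str) -> list:
    """Parse the token CSV and check each row has the expected fields."""
    rows = list(csv.DictReader(io.StringIO(text)))
    required = {"upn", "serial number", "secret key", "time interval", "manufacturer", "model"}
    for row in rows:
        missing = required - {k.lower() for k in row if k}
        if missing:
            raise ValueError(f"missing columns: {missing}")
        if row["time interval"] not in ("30", "60"):
            raise ValueError("time interval should be 30 or 60 seconds")
    return rows

sample = (
    "upn,serial number,secret key,time interval,manufacturer,model\n"
    f"{escape_upn('Helga@contoso.com')},1234567,2234567abcdef1234567abcdef,60,Contoso,HardwareKey\n"
)
tokens = validate_oath_csv(sample)
```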
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-methods-activity.md
Previously updated : 03/04/2021 Last updated : 03/16/2021
The registration details report shows the following information for each user:
- The data in the report is not updated in real-time and may reflect a latency of up to a few hours.
- Temporary Access Pass registrations are not reflected in the registration tab of the report because they are only valid for a short period of time.
+- The **PhoneAppNotification** or **PhoneAppOTP** methods that a user might have configured are not displayed in the dashboard.
## Next steps
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Last updated 03/18/2021

# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods (Preview)

Passwordless authentication methods, such as FIDO2 and Passwordless Phone Sign-in through the Microsoft Authenticator app, enable users to sign in securely without a password. Users can bootstrap Passwordless methods in one of two ways:

-- Using existing Azure AD multi-factor authentication methods
-- Using a Temporary Access Pass
+- Using existing Azure AD Multi-Factor Authentication methods
+- Using a Temporary Access Pass (TAP)
A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones. A Temporary Access Pass also makes recovery easier when a user has lost or forgotten their strong authentication factor like a FIDO2 security key or Microsoft Authenticator app, but needs to sign in to register new strong authentication methods.
To configure the Temporary Access Pass authentication method policy:
The default value and the range of allowed values are described in the following table.
- | Setting | Default values | Allowed values | Comments | |
- ||-||--||
- Minimum lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. | |
- | Maximum lifetime | 24 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. | |
- | Default lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Default values can be override by the individual passes, within the minimum and maximum lifetime configured by the policy | |
- | One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during its validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. | |
- | Length | 8 | 8-48 characters | Defines the length of the passcode. | |
+ | Setting | Default values | Allowed values | Comments |
+ |||||
 | Minimum lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. |
 | Maximum lifetime | 24 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
 | Default lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Default values can be overridden by the individual passes, within the minimum and maximum lifetime configured by the policy. |
+ | One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during its validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. |
+ | Length | 8 | 8-48 characters | Defines the length of the passcode. |
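To make the interplay of these settings concrete, here is a small illustrative sketch (the `POLICY` values and the `pass_lifetime` function are hypothetical, not a product API): an individual pass can override the default lifetime only within the policy's minimum and maximum.

```python
# Hypothetical illustration of the policy settings above (not a product API).
POLICY = {"min_minutes": 60, "max_minutes": 24 * 60, "default_minutes": 60}

def pass_lifetime(requested_minutes=None):
    """Return the lifetime an individual pass would get under the policy."""
    if requested_minutes is None:
        # No override requested: the policy default applies.
        return POLICY["default_minutes"]
    if not POLICY["min_minutes"] <= requested_minutes <= POLICY["max_minutes"]:
        raise ValueError("requested lifetime is outside the policy's allowed range")
    return requested_minutes
```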
## Create a Temporary Access Pass in the Azure AD Portal
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Before you try this tutorial, consider the following items:
The following are prerequisites required for completing this tutorial:

- A test environment with Azure AD Connect sync version 1.4.32.0 or later
- An OU or group that is in scope of sync and can be used for the pilot. We recommend starting with a small set of objects.
-- A server running Windows Server 2012 R2 or later that will host the provisioning agent. This cannot be the same server as the Azure AD Connect server.
+- A server running Windows Server 2012 R2 or later that will host the provisioning agent.
- Source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*

## Update Azure AD Connect
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Continuous access evaluation is implemented by enabling services, like Exchange
This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after one of these critical events.
-[!NOTE] Teams does not support user risk events yet.
+> [!NOTE]
+> Teams does not support user risk events yet.
### Conditional Access policy evaluation (preview)
Exchange and SharePoint are able to synchronize key Conditional Access policies
This process enables the scenario where users lose access to organizational files, email, calendar, or tasks from Microsoft 365 client apps or SharePoint Online immediately after network location changes.

> [!NOTE]
-> Not all app and resource provider combination are supported. See table below. Office refers to Word, Excel, and PowerPoint
+> Not all app and resource provider combinations are supported. See the table below. Office refers to Word, Excel, and PowerPoint.
| | Outlook Web | Outlook Win32 | Outlook iOS | Outlook Android | Outlook Mac |
| : | :: | :: | :: | :: | :: |
This process enables the scenario where users lose access to organizational file
| : | :: | :: | :: | :: | :: |
| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
-| | Teams web apps | Teams Win32 apps | Teams for iOS | Teams for Android | Teams for Mac |
-| : | :: | :: | :: | :: | :: |
-| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
-| **Exchange Online** | Supported | Supported | Supported | Supported | Supported |
-| **Exchange Online** | Supported | Supported | Supported | Supported | Supported |
### Client-side claim challenge

Before continuous access evaluation, clients would always try to replay the access token from the cache as long as it was not expired. With CAE, we are introducing a new case in which a resource provider can reject a token even when it is not expired. To inform clients to bypass their cache even though the cached tokens have not expired, we introduce a mechanism called **claim challenge** to indicate that the token was rejected and a new access token needs to be issued by Azure AD. CAE requires a client update to understand claim challenge. The latest versions of the following applications support claim challenge:
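For illustration, a claims challenge typically arrives on a 401 response as a `claims` parameter of the `WWW-Authenticate` header, carrying base64-encoded JSON. The following Python sketch of parsing such a header uses a made-up example value, not output captured from a real service:

```python
import base64
import json
import re

def parse_claims_challenge(www_authenticate: str):
    """Extract and decode the base64-encoded claims challenge, if present."""
    match = re.search(r'claims="([^"]+)"', www_authenticate)
    if not match:
        return None
    raw = match.group(1)
    raw += "=" * (-len(raw) % 4)  # restore base64 padding that may have been stripped
    return json.loads(base64.b64decode(raw))

# Made-up example header for illustration only.
challenge = {"access_token": {"nbf": {"essential": True, "value": "1603742800"}}}
encoded = base64.b64encode(json.dumps(challenge).encode()).decode().rstrip("=")
header = f'Bearer error="insufficient_claims", claims="{encoded}"'
claims = parse_claims_challenge(header)
```

A client that receives such a challenge would attach the decoded claims to its next token request instead of replaying the cached token.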
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Previously updated : 05/26/2020 Last updated : 03/24/2021

# Conditional Access: Securing security info registration
-Securing when and how users register for Azure AD Multi-Factor Authentication and self-service password reset is now possible with user actions in Conditional Access policy. This preview feature is available to organizations who have enabled the [combined registration preview](../authentication/concept-registration-mfa-sspr-combined.md). This functionality may be enabled in organizations where they want to use conditions like trusted network location to restrict access to register for Azure AD Multi-Factor Authentication and self-service password reset (SSPR). For more information about usable conditions, see the article [Conditional Access: Conditions](concept-conditional-access-conditions.md).
+Securing when and how users register for Azure AD Multi-Factor Authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations who have enabled the [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience.
-## Create a policy to require registration from a trusted location
+Some organizations in the past may have used trusted network location or device compliance as a means to secure the registration experience. With the addition of [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) in Azure AD, administrators can provision time-limited credentials to their users that allow them to register from any device or location. Temporary Access Pass credentials satisfy Conditional Access requirements for multi-factor authentication.
-The following policy applies to all selected users, who attempt to register using the combined registration experience, and blocks access unless they are connecting from a location marked as trusted network.
+## Create a policy to secure registration
+
+The following policy applies to the selected users, who attempt to register using the combined registration experience. The policy requires users to perform multi-factor authentication or use Temporary Access Pass credentials.
1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
-1. In Name, Enter a Name for this policy. For example, **Combined Security Info Registration on Trusted Networks**.
+1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration with TAP**.
1. Under **Assignments**, select **Users and groups**, and select the users and groups you want this policy to apply to.
+ 1. Under **Include**, select **All users**.
+
+ > [!WARNING]
+ > Users must be enabled for the [combined registration](../authentication/howto-registration-mfa-sspr-combined.md).
+
+ 1. Under **Exclude**:
+ 1. Select **All guest and external users**.
+
+ > [!NOTE]
+ > Temporary Access Pass does not work for guest users.
+
+ 1. Select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**.
+1. Under **Access controls** > **Grant**.
+ 1. Select **Grant access**.
+ 1. Select **Require multi-factor authentication**.
+ 1. Click **Select**.
+1. Set **Enable policy** to **On**.
+1. Then select **Create**.
+
+Administrators will now have to issue Temporary Access Pass credentials to new users so they can satisfy the requirements for multi-factor authentication to register. Steps to accomplish this task are found in the section [Create a Temporary Access Pass in the Azure AD Portal](../authentication/howto-authentication-temporary-access-pass.md#create-a-temporary-access-pass-in-the-azure-ad-portal).
+
+Organizations may choose to require other grant controls in addition to or in place of **Require multi-factor authentication** at step 6b. When selecting multiple controls, be sure to select the appropriate radio button toggle to require **all** or **one** of the selected controls when making this change.
+
+### Guest user registration
- > [!WARNING]
- > Users must be enabled for the [combined registration](../authentication/howto-registration-mfa-sspr-combined.md).
+For [guest users](../external-identities/what-is-b2b.md) who need to register for multi-factor authentication in your directory, you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
- 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
+1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration on Trusted Networks**.
+1. Under **Assignments**, select **Users and groups**, and select the users and groups you want this policy to apply to.
+ 1. Under **Include**, select **All guest and external users**.
1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**.
1. Under **Conditions** > **Locations**.
   1. Configure **Yes**.
The following policy applies to all selected users, who attempt to register usin
   1. Exclude **All trusted locations**.
   1. Select **Done** on the Locations blade.
   1. Select **Done** on the Conditions blade.
-1. Under **Conditions** > **Client apps (Preview)**, set **Configure** to **Yes**, and select **Done**.
1. Under **Access controls** > **Grant**.
   1. Select **Block access**.
   1. Then click **Select**.
1. Set **Enable policy** to **On**.
1. Then select **Save**.
-At step 6 in this policy, organizations have choices they can make. The policy above requires registration from a trusted network location. Organizations can choose to utilize any available conditions in place of **Locations**. Remember that this policy is a block policy so anything included is blocked and anything that does not match the include is allowed.
-
-Some may choose to use device state instead of location in step 6 above:
-
-6. Under **Conditions** > **Device state (Preview)**.
- 1. Configure **Yes**.
- 1. Include **All device state**.
- 1. Exclude **Device Hybrid Azure AD joined** and/or **Device marked as compliant**
- 1. Select **Done** on the Locations blade.
- 1. Select **Done** on the Conditions blade.
-
-> [!WARNING]
-> If you use device state as a condition in your policy this may impact guest users in the directory. [Report-only mode](concept-conditional-access-report-only.md) can help determine the impact of policy decisions.
-> Note that report-only mode is not applicable for Conditional Access policies with "User Actions" scope.
## Next steps

[Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/apple-sso-plugin.md
To use the Microsoft Enterprise SSO plug-in for Apple devices:
- The device must be *enrolled in MDM*, for example, through Microsoft Intune. - Configuration must be *pushed to the device* to enable the Enterprise SSO plug-in. Apple requires this security constraint.
-iOS requirements:
-- iOS 13.0 or later must be installed on the device.
-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. During the public preview, this application is the [Microsoft Authenticator app](/intune/user-help/user-help-auth-app-overview.md).
+### iOS requirements:
+- iOS 13.0 or higher must be installed on the device.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Microsoft Authenticator app](/azure/active-directory/user-help/user-help-auth-app-overview).
-macOS requirements:
-- macOS 10.15 or later must be installed on the device.
-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. During the public preview, this application is the [Intune Company Portal app](/intune/user-help/enroll-your-device-in-intune-macos-cp.md).
+### macOS requirements:
+- macOS 10.15 or higher must be installed on the device.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp).
## Enable the SSO plug-in
active-directory Resilient End User Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilient-end-user-experience.md
As part of the external identity provider registration process, include a verifi
## Availability of Multi-factor authentication
-When using a [phone service for Multi-factor authentication (MFA)](../../active-directory-b2c/phone-authentication.md), make sure to consider an alternative service provider. The local Telco or phone service provider may experience disruptions in their service.
+When using a [phone service for Multi-factor authentication (MFA)](../../active-directory-b2c/phone-authentication-user-flows.md), make sure to consider an alternative service provider. The local Telco or phone service provider may experience disruptions in their service.
### How to choose an alternate MFA
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-managed-identities.md
Managed identities are best used for communications among services that support
A source system requests access to a target service. Any Azure resource can be a source system. For example, an Azure VM, Azure Function instance, and Azure App Services instances support managed identities.
-[!VIDEO https://www.youtube.com/embed/5lqayO_oeEo]
+ > [!VIDEO https://www.youtube.com/embed/5lqayO_oeEo]
### How authentication and authorization work
There are several ways in which you can find managed identities:
### Using the Azure portal
-1. In Azure AD, select Enterprise application.
+1. In Azure Active Directory, select Enterprise applications.
2. Select the filter for "Managed Identities".
You can assess the security of managed identities in the following ways:
## Move to managed identities
-If you are using a service principal or an Azure AD user account, evaluate if you can instead use a managed to eliminate the need to protect, rotate, and manage credentials.
+If you are using a service principal or an Azure AD user account, evaluate if you can instead use a managed identity to eliminate the need to protect, rotate, and manage credentials.
## Next steps
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Workspaces, the filters admins can configure to organize their users' apps, will
**Service category:** B2C - Consumer Identity Management **Product capability:** B2B/B2C
-With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. With the power of custom policies and phone sign-up and sign-in, allows developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication.md).
+With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. The power of custom policies and phone sign-up and sign-in allows developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication-user-flows.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Azure Active Directory (Azure AD) Application Proxy natively supports single sig
**Service category:** B2C - Consumer Identity Management **Product capability:** B2B/B2C
-With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. With the power of custom policies, allow developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication.md).
+With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. The power of custom policies allows developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication-user-flows.md).
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
Follow these steps if you want to allow users in your directory to be able to re
1. Select one of the following options:
- | | |
+ | | Description |
| | |
| **Specific users and groups** | Choose this option if you want only the users and groups in your directory that you specify to be able to request this access package. |
| **All members (excluding guests)** | Choose this option if you want all member users in your directory to be able to request this access package. This option doesn't include any guest users you might have invited into your directory. |
Follow these steps if you want to allow users not in your directory to request t
1. Select one of the following options:
- | | |
+ | | Description |
| | |
| **Specific connected organizations** | Choose this option if you want to select from a list of organizations that your administrator previously added. All users from the selected organizations can request this access package. |
| **All configured connected organizations** | Choose this option if all users from all your configured connected organizations can request this access package. Only users from configured connected organizations can request access packages that are shown to users from all configured organizations. |
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
To reduce the configuration administrative effort, you should first consider the
> [!Important] > With either configuration option chosen, a required initial sync (Full Sync) to apply the changes, will be performed automatically over the next sync cycle.
+> [!Important]
+> Configuring selective password hash synchronization directly influences password writeback. Password changes or password resets that are initiated in Azure Active Directory write back to on-premises Active Directory only if the user is in scope for password hash synchronization.
+ ### The adminDescription attribute

Both scenarios rely on setting the adminDescription attribute of users to a specific value. This allows the rules to be applied and is what makes selective PHS work.
Once all configurations are complete, you need to edit the attribute **adminDescrip
![Edit attribute](media/how-to-connect-selective-password-hash-synchronization/exclude-11.png)
+You can also use the following PowerShell command to edit a user's **adminDescription** attribute:
+
+```Set-ADUser myuser -Replace @{adminDescription="PHSFiltered"}```
## Excluded users is larger than included users

The following section describes how to enable selective password hash synchronization when the number of users to **exclude** is **larger** than the number of users to **include**.
Once all configurations are complete, you need to edit the attribute **adminDescrip
![Edit attributes](media/how-to-connect-selective-password-hash-synchronization/include-11.png)
-
+ You can also use the following PowerShell command to edit a user's **adminDescription** attribute:
+
+ ```Set-ADUser myuser -Replace @{adminDescription="PHSIncluded"}```
## Next Steps

- [What is password hash synchronization?](whatis-phs.md)
active-directory My Apps Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/my-apps-deployment-plan.md
Administrators can configure:
## Plan consent configuration
-There are two types of consent: user consent and consent for apps accessing data.
-
-![Screen shot of consent configuration](./media/my-apps-deployment-plan/my-apps-consent.png)
- ### User consent for applications
-Users or administrators must consent to any application's terms of use and privacy policies. You must decide if users or only administrators can consent to applications. **We recommend that if your business rules allow, you use administrator consent to maintain control of the applications in your tenant**.
-
-To use administrator consent, you must be a global administrator of the organization, and the applications must be either:
-
-* Registered in your organization.
+Before a user can sign in to an application and the application can access your organization's data, a user or an admin must grant the application permissions. You can configure whether user consent is allowed, and under which conditions. **Microsoft recommends you only allow user consent for applications from verified publishers.**
-* Registered in another Azure AD organization and previously consented to by at least one user.
-
-If you want to allow users to consent, you must decide if you want them to consent to any app, or only under specific circumstances.
-
-For more information, see [Configure the way end users consent to an application in Azure Active Directory.](../manage-apps/configure-user-consent.md)
+For more information, see [Configure how end-users consent to applications](../manage-apps/configure-user-consent.md).
### Group owner consent for apps accessing data
-Determine if owners of the Azure AD security groups or M365 groups are able to consent to applications to access data for the groups they own. You can disallow, allow all group owners, or allow only a subset of group owners.
+Group and team owners can authorize applications, such as applications published by third-party vendors, to access your organization's data associated with a group. See [Resource-specific consent in Microsoft Teams](https://docs.microsoft.com/microsoftteams/resource-specific-consent) to learn more.
-For more information, see [Configure group consent permissions](../manage-apps/configure-user-consent-groups.md).
+You can configure whether you'd like to allow or disable this feature.
-Then, configure your [User and group owner consent settings](https://portal.azure.com/) in the Azure portal.
+For more information, see [Configure group consent permissions](../manage-apps/configure-user-consent-groups.md).
### Plan communications
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-role-settings.md
You can choose from two assignment duration options for each assignment type (el
You can choose one of these **eligible** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent eligible assignment** | Resource administrators can assign permanent eligible assignment. |
| **Expire eligible assignment after** | Resource administrators can require that all eligible assignments have a specified start and end date. |

And, you can choose one of these **active** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent active assignment** | Resource administrators can assign permanent active assignment. |
| **Expire active assignment after** | Resource administrators can require that all active assignments have a specified start and end date. |
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
You can choose from two assignment duration options for each assignment type (el
You can choose one of these **eligible** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent eligible assignment** | Global admins and Privileged role admins can assign permanent eligible assignment. |
| **Expire eligible assignment after** | Global admins and Privileged role admins can require that all eligible assignments have a specified start and end date. |

And, you can choose one of these **active** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent active assignment** | Global admins and Privileged role admins can assign permanent active assignment. |
| **Expire active assignment after** | Global admins and Privileged role admins can require that all active assignments have a specified start and end date. |
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
This section lists all the security alerts for Azure AD roles, along with how to
### Administrators aren't using their privileged roles
-| | |
+Severity: **Low**
+
+| | Description |
| | |
-| **Severity** | Low |
| **Why do I get this alert?** | Users that have been assigned privileged roles they don't need increases the chance of an attack. It is also easier for attackers to remain unnoticed in accounts that are not actively being used. |
| **How to fix?** | Review the users in the list and remove them from privileged roles that they do not need. |
| **Prevention** | Assign privileged roles only to users who have a business justification. </br>Schedule regular [access reviews](pim-how-to-start-security-review.md) to verify that users still need their access. |
### Roles don't require multi-factor authentication for activation
-| | |
+Severity: **Low**
+
+| | Description |
| | |
-| **Severity** | Low |
| **Why do I get this alert?** | Without multi-factor authentication, compromised users can activate privileged roles. |
| **How to fix?** | Review the list of roles and [require multi-factor authentication](pim-how-to-change-default-settings.md) for every role. |
| **Prevention** | [Require MFA](pim-how-to-change-default-settings.md) for every role. |
### The organization doesn't have Azure AD Premium P2
-| | |
+Severity: **Low**
+
+| | Description |
| | |
-| **Severity** | Low |
| **Why do I get this alert?** | The current Azure AD organization does not have Azure AD Premium P2. |
| **How to fix?** | Review information about [Azure AD editions](../fundamentals/active-directory-whatis.md). Upgrade to Azure AD Premium P2. |

### Potential stale accounts in a privileged role
-| | |
+Severity: **Medium**
+
+| | Description |
| | |
-| **Severity** | Medium |
| **Why do I get this alert?** | Accounts in a privileged role have not changed their password in the past 90 days. These accounts might be service or shared accounts that aren't being maintained and are vulnerable to attackers. |
| **How to fix?** | Review the accounts in the list. If they no longer need access, remove them from their privileged roles. |
| **Prevention** | Ensure that accounts that are shared are rotating strong passwords when there is a change in the users that know the password. </br>Regularly review accounts with privileged roles using [access reviews](pim-how-to-start-security-review.md) and remove role assignments that are no longer needed. |
### Roles are being assigned outside of Privileged Identity Management
-| | |
+Severity: **High**
+
+| | Description |
| | |
-| **Severity** | High |
| **Why do I get this alert?** | Privileged role assignments made outside of Privileged Identity Management are not properly monitored and may indicate an active attack. |
| **How to fix?** | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management. |
| **Prevention** | Investigate where users are being assigned privileged roles outside of Privileged Identity Management and prohibit future assignments from there. |
### There are too many global administrators
-| | |
+Severity: **Low**
+
+| | Description |
| | |
-| **Severity** | Low |
| **Why do I get this alert?** | Global administrator is the highest privileged role. If a Global Administrator is compromised, the attacker gains access to all of their permissions, which puts your whole system at risk. |
| **How to fix?** | Review the users in the list and remove any that do not absolutely need the Global administrator role. </br>Assign lower privileged roles to these users instead. |
| **Prevention** | Assign users the least privileged role they need. |
### Roles are being activated too frequently
-| | |
+Severity: **Low**
+
+| | Description |
| | |
-| **Severity** | Low |
| **Why do I get this alert?** | Multiple activations to the same privileged role by the same user is a sign of an attack. |
| **How to fix?** | Review the users in the list and ensure that the [activation duration](pim-how-to-change-default-settings.md) for their privileged role is set long enough for them to perform their tasks. |
| **Prevention** | Ensure that the [activation duration](pim-how-to-change-default-settings.md) for privileged roles is set long enough for users to perform their tasks.</br>[Require multi-factor authentication](pim-how-to-change-default-settings.md) for privileged roles that have accounts shared by multiple administrators. |
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
You can choose from two assignment duration options for each assignment type (el
You can choose one of these **eligible** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent eligible assignment** | Resource administrators can assign permanent eligible assignment. |
| **Expire eligible assignment after** | Resource administrators can require that all eligible assignments have a specified start and end date. |

And, you can choose one of these **active** assignment duration options:
-| | |
+| | Description |
| --- | --- |
| **Allow permanent active assignment** | Resource administrators can assign permanent active assignment. |
| **Expire active assignment after** | Resource administrators can require that all active assignments have a specified start and end date. |
active-directory Clever Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clever-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Clever SSO
-Follow the instructions given in the [link](https://support.clever.com/hc/articles/205889768-Single-Sign-On-SSO-Log-in-with-Office-365-Azure-) to configure single sign-on on Clever side.
+Follow the instructions given in the [link](https://support.clever.com/hc/s/articles/205889768) to configure single sign-on on Clever side.
### Create Clever test user
active-directory Mozy Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mozy-enterprise-tutorial.md
To configure Azure AD single sign-on with Mozy Enterprise, perform the following
`https://<tenantname>.Mozyenterprise.com`

 > [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [Mozy Enterprise Client support team](https://support.mozy.com/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [Mozy Enterprise Client support team](https://www.safenames.net/about-us/contact-us) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
active-directory Olfeo Saas Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Olfeo SAAS for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Olfeo SAAS.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 5f6b0320-dfe7-451c-8cd8-6ba7f2e40434
+++
+ na
+ms.devlang: na
+ Last updated : 02/26/2021+++
+# Tutorial: Configure Olfeo SAAS for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Olfeo SAAS and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Olfeo SAAS](https://www.olfeo.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Olfeo SAAS
+> * Remove users in Olfeo SAAS when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Olfeo SAAS
+> * Provision groups and group memberships in Olfeo SAAS
+> * [Single sign-on](olfeo-saas-tutorial.md) to Olfeo SAAS (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An [Olfeo SAAS tenant](https://www.olfeo.com/).
+* A user account in Olfeo SAAS with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Olfeo SAAS](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Olfeo SAAS to support provisioning with Azure AD
+
+1. Log in to the Olfeo SAAS admin console.
+1. Navigate to **Configuration > Annuaires**.
+1. Create a new directory and then name it.
+1. Select **Azure** provider and then click on **Créer** to save the new directory.
+1. Navigate to the **Synchronisation** tab to see the **Tenant URL** and the **Jeton secret**. These values will be copied and pasted in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Olfeo SAAS application in the Azure portal.
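Because the Azure AD provisioning service talks to the application over SCIM, you can sanity-check the copied values before configuring the portal. A minimal sketch with placeholder values (the URL and token below are assumptions, not real endpoints):

```powershell
# Sketch: probe the SCIM endpoint with the copied Tenant URL and Jeton secret.
# Both values are placeholders; substitute the ones from the Synchronisation tab.
$tenantUrl   = "https://scim.example.invalid/v2"
$secretToken = "<Jeton secret>"
Invoke-RestMethod -Method Get -Uri "$tenantUrl/Users?startIndex=1&count=1" `
    -Headers @{ Authorization = "Bearer $secretToken" }
```

A successful call returns a SCIM `ListResponse`; an authentication error usually means the token was copied incorrectly.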
+
+## Step 3. Add Olfeo SAAS from the Azure AD application gallery
+
+Add Olfeo SAAS from the Azure AD application gallery to start managing provisioning to Olfeo SAAS. If you have previously set up Olfeo SAAS for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-gallery-app.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Olfeo SAAS, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
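The extra roles mentioned above are defined in the application manifest's `appRoles` collection. A minimal sketch of one added role; the `id` GUID, names, and description are placeholders (each role needs its own unique GUID):

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Users provisioned to Olfeo SAAS",
    "displayName": "Provisioned User",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "value": "ProvisionedUser"
  }
]
```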
++
+## Step 5. Configure automatic user provisioning to Olfeo SAAS
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Olfeo SAAS app based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Olfeo SAAS in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Olfeo SAAS**.
+
+ ![The Olfeo SAAS link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, enter your Olfeo SAAS **Tenant URL** and **Secret token** information. Select **Test Connection** to ensure that Azure AD can connect to Olfeo SAAS. If the connection fails, ensure that your Olfeo SAAS account has admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Olfeo SAAS**.
+
+1. Review the user attributes that are synchronized from Azure AD to Olfeo SAAS in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Olfeo SAAS for update operations. If you change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Olfeo SAAS API supports filtering users based on that attribute. Select **Save** to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;|
+ |displayName|String|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |name.formatted|String|
+ |externalId|String|
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Olfeo SAAS**.
+
+1. Review the group attributes that are synchronized from Azure AD to Olfeo SAAS in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Olfeo SAAS for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+1. To configure scoping filters, see the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Olfeo SAAS, change **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users or groups that you want to provision to Olfeo SAAS by selecting the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, select **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+
+After you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users were provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. To learn more about quarantine states, see [Application provisioning status of quarantine](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Saba Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/saba-cloud-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Saba Cloud | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Saba Cloud.
++++++++ Last updated : 03/22/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Saba Cloud
+
+In this tutorial, you'll learn how to integrate Saba Cloud with Azure Active Directory (Azure AD). When you integrate Saba Cloud with Azure AD, you can:
+
+* Control in Azure AD who has access to Saba Cloud.
+* Enable your users to be automatically signed-in to Saba Cloud with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Saba Cloud single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Saba Cloud supports **SP and IDP** initiated SSO.
+* Saba Cloud supports **Just In Time** user provisioning.
+* The Saba Cloud mobile application can also be configured with Azure AD for SSO.
+
+## Adding Saba Cloud from the gallery
+
+To configure the integration of Saba Cloud into Azure AD, you need to add Saba Cloud from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Saba Cloud** in the search box.
+1. Select **Saba Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Saba Cloud
+
+Configure and test Azure AD SSO with Saba Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Saba Cloud.
+
+To configure and test Azure AD SSO with Saba Cloud, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Saba Cloud SSO](#configure-saba-cloud-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Saba Cloud test user](#create-saba-cloud-test-user)** - to have a counterpart of B.Simon in Saba Cloud that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+1. **[Test SSO for Saba Cloud (mobile)](#test-sso-for-saba-cloud-mobile)** to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Saba Cloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `<CUSTOMER_NAME>_SPLN_PRINCIPLE`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SIGN-ON URL>/Saba/saml/SSO/alias/<ENTITY_ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.sabacloud.com`
+
+ b. In the **Relay State** text box, type a value using the following pattern: `IDP_INITSAML_SSO_SITE=<SITE_ID>`, or, if SAML is configured for a microsite, type a value using the following pattern:
+`IDP_INITSAML_SSO_SITE=<SITE_ID>SAML_SSO_MICRO_SITE=<MicroSiteId>`
+
+ > [!NOTE]
+ > For more information on configuring the RelayState, please refer to [this](https://help.sabacloud.com/sabacloud/help-system/topics/help-system-idp-and-sp-initiated-sso-for-a-microsite.html) link.
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Saba Cloud Client support team](mailto:support@saba.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Saba Cloud** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Saba Cloud.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Saba Cloud**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Saba Cloud SSO
+
+1. Sign in to your Saba Cloud company site as an administrator.
+1. Click the **Menu** icon, click **Admin**, and then select the **System Admin** tab.
+
+ ![screenshot for system admin](./media/saba-cloud-tutorial/system.png)
+
+1. In **Configure System**, select **SAML SSO Setup** and click the **SETUP SAML SSO** button.
+
+ ![screenshot for configuration](./media/saba-cloud-tutorial/configure.png)
+
+1. In the pop-up window, select **Microsite** from the dropdown and click **ADD AND CONFIGURE**.
+
+ ![screenshot for add site/microsite](./media/saba-cloud-tutorial/microsite.png)
+
+1. In the **Configure IDP** section, click **BROWSE** to upload the **Federation Metadata XML** file that you downloaded from the Azure portal. Enable the **Site Specific IDP** checkbox and click **IMPORT**.
+
+ ![screenshot for Certificate import](./media/saba-cloud-tutorial/certificate.png)
+
+1. In the **Configure SP** section, copy the **Entity Alias** value and paste this value into the **Identifier (Entity ID)** text box in the **Basic SAML Configuration** section in the Azure portal. Click **GENERATE**.
+
+ ![screenshot for Configure SP](./media/saba-cloud-tutorial/generate-metadata.png)
+
+1. In the **Configure Properties** section, verify the populated fields and click **SAVE**.
+
+ ![screenshot for Configure Properties](./media/saba-cloud-tutorial/configure-properties.png)
+
+### Create Saba Cloud test user
+
+In this section, a user called Britta Simon is created in Saba Cloud. Saba Cloud supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Saba Cloud, a new one is created after authentication.
+
+> [!NOTE]
+> To enable SAML just-in-time user provisioning with Saba Cloud, see [this documentation](https://help.sabacloud.com/sabacloud/help-system/topics/help-system-user-provisioning-with-saml.html).
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Saba Cloud Sign-on URL where you can initiate the login flow.
+
+* Go to the Saba Cloud Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Saba Cloud instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Saba Cloud tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the Saba Cloud instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+> [!NOTE]
+> If the sign-on URL is not populated in Azure AD, the application is treated as being in IDP-initiated mode. If the sign-on URL is populated, Azure AD always redirects the user to the Saba Cloud application for the service provider-initiated flow.
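The rule in the note above can be sketched as a tiny decision function. This is purely illustrative (the function name and its input are not an Azure AD API); it just encodes the stated behavior: an empty sign-on URL means IDP-initiated, a populated one means SP-initiated.

```python
# Hedged sketch of the note's rule; not an actual Azure AD interface.
def sso_mode(sign_on_url):
    """Return how Azure AD treats the app based on the sign-on URL field."""
    if not sign_on_url:
        # Sign-on URL not populated: the app is treated as IDP-initiated.
        return "IDP-initiated"
    # Sign-on URL populated: Azure AD redirects to the app (SP-initiated).
    return "SP-initiated"

print(sso_mode(None))                             # → IDP-initiated
print(sso_mode("https://contoso.sabacloud.com"))  # → SP-initiated
```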
+
+## Test SSO for Saba Cloud (mobile)
+
+1. Open the Saba Cloud mobile application, enter the **Site Name** in the textbox, and click **Enter**.
+
+ ![Screenshot for Site name.](./media/saba-cloud-tutorial/site-name.png)
+
+1. Enter your **email address** and click **Next**.
+
+ ![Screenshot for email address.](./media/saba-cloud-tutorial/email-address.png)
+
+1. Finally, after a successful sign-in, the application page is displayed.
+
+ ![Screenshot for successful sign in.](./media/saba-cloud-tutorial/dashboard.png)
+
+## Next steps
+
+Once you configure Saba Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Two Kubernetes Services are also created:
      app: azure-vote-back
  template:
    metadata:
- name: azure-vote-back
- spec:
- ports:
- - port: 6379
- selector:
+ labels:
app: azure-vote-back
-
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: azure-vote-front
spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
- apiVersion: v1
- kind: Service
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
+ app: azure-vote-back
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
metadata:
- name: azure-vote-front
- spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
+ labels:
app: azure-vote-front
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-front
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
```

1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
aks Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-baseline.md
You can implement a private AKS cluster to ensure network traffic between your A
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.ContainerService**:
Use the AKS built-in roles with Azure RBAC- Resource Policy Contributor and Owne
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.ContainerService**:
Note that the process to keep Windows Server nodes up to date differs from nodes
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.ContainerService**:
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-pod-security-policies.md
Last updated 02/12/2021
> It is highly recommended to begin testing scenarios with Azure Policy for AKS, which offers built-in policies to secure pods and built-in initiatives which map to pod security policies. To migrate from pod security policy, you need to take the following actions on a cluster.
>
> 1. [Disable pod security policy](#clean-up-resources) on the cluster
-> 1. Enable the [Azure Policy Add-on][kubernetes-policy-reference]
+> 1. Enable the [Azure Policy Add-on][azure-policy-add-on]
> 1. Enable the desired Azure policies from [available built-in policies][policy-samples]
> 1. Review [behavior changes between pod security policy and Azure Policy](#behavior-changes-between-pod-security-policy-and-azure-policy)
For more information about limiting pod network traffic, see [Secure traffic bet
[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
[kubernetes-policy-reference]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference

<!-- LINKS - internal -->
[aks-quickstart-cli]: kubernetes-walkthrough.md
[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
For more information about limiting pod network traffic, see [Secure traffic bet
[az-extension-add]: /cli/azure/extension#az-extension-add
[az-extension-update]: /cli/azure/extension#az-extension-update
[policy-samples]: ./policy-reference.md#microsoftcontainerservice
+[azure-policy-add-on]: ../governance/policy/concepts/policy-for-kubernetes.md
api-management Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
The service backup and restore features of API Management provide the necessary
- [How to implement disaster recovery using service backup and restore in Azure API Management](./api-management-howto-disaster-recovery-backup-restore.md#calling-the-backup-and-restore-operations)

-- [How to call the API Management backup operation](/rest/api/apimanagement/2019-01-01/apimanagementservice/backup)
+- [How to call the API Management backup operation](/rest/api/apimanagement/2019-12-01/apimanagementservice/backup)
-- [How to call the API Management restore operation](/rest/api/apimanagement/2019-01-01/apimanagementservice/restore)
+- [How to call the API Management restore operation](/rest/api/apimanagement/2019-12-01/apimanagementservice/restore)
**Responsibility**: Customer
app-service Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-baseline.md
Consider implementing an Azure Firewall to centrally create, enforce, and log ap
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Use Azure Firewall to send traffic and centrally create, enforce, and log applic
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Additionally, review and follow recommendations in the Locking down an App Servi
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
Use service endpoints to restrict access to your web app from an Azure Virtual N
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Use service endpoints to restrict access to your web app from an Azure Virtual N
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Review the referenced links for additional information.
- [How to configure end-to-end TLS by using Application Gateway with the portal](../application-gateway/end-to-end-ssl-portal.md)

-- [Secure the ASE as described in Locking down an App Service](/azure/app-service/environment/firewall-integrationEnvironment:)
+- [Secure the ASE as described in Locking down an App Service](/azure/app-service/environment/firewall-integration)
**Responsibility**: Customer
Review the referenced links for additional information.
- [How to configure end-to-end TLS by using Application Gateway with the portal](../application-gateway/end-to-end-ssl-portal.md)

-- [Secure the ASE as described in Locking down an App Service](/azure/app-service/environment/firewall-integrationEnvironment:)
+- [Secure the ASE as described in Locking down an App Service](/azure/app-service/environment/firewall-integration)
**Responsibility**: Customer
Apply any of the built-in Azure Policy definitions related to tagging effects, s
- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)

-- [Azure App Service Access Restrictions](/azure/app-service/app-service-ip-restriction)
+- [Azure App Service Access Restrictions](/azure/app-service/app-service-ip-restrictions)
**Responsibility**: Customer
Additionally, Azure Key Vault provides centralized secret management with access
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
Microsoft manages the underlying infrastructure for App Service and has implemen
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
Apply Azure Policy [audit], [deny], and [deploy if not exist], effects to automa
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
application-gateway Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
For additional information, see the references below.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
For additional information, see the references below.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
For additional information, see the references below.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
automation Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-baseline.md
When using Hybrid Runbook Workers, the virtual disks on the virtual machines are
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Automation**:
You may also use recommendations from Azure Security Center as a secure configur
- [Understanding Azure Policy Effects](../governance/policy/concepts/effects.md)
-- [Deploy an Automation Account using an Azure Resource Manager template](/azure/automation/quickstart-create-account-template#deploy-the-template)
+- [Deploy an Automation Account using an Azure Resource Manager template](/azure/automation/quickstart-create-automation-account-template)
- [Azure Policy sample built-ins for Azure Automation](policy-reference.md)
automation Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/schedules.md
New-AzAutomationSchedule -AutomationAccountName "TestAzureAuto" -Name "1st, 15th
## Create a schedule with a Resource Manager template
-In this example, we use an Automation Resource Manager (ARM) template that creates a new job schedule. For general information about this template to manage Automation job schedules, see [Microsoft.Automation automationAccounts/jobSchedules template reference](/templates/microsoft.automation/automationaccounts/jobschedules#quickstart-templates).
+In this example, we use an Automation Resource Manager (ARM) template that creates a new job schedule. For general information about this template to manage Automation job schedules, see [Microsoft.Automation automationAccounts/jobSchedules template reference](/azure/templates/microsoft.automation/2015-10-31/automationaccounts/jobschedules#quickstart-templates).
Copy this template file into a text editor:
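The jobSchedules resource in such a template takes roughly the following shape. This is a hedged sketch against the 2015-10-31 API version referenced above; the parameter names are illustrative, not taken from the article's template:

```json
{
  "type": "Microsoft.Automation/automationAccounts/jobSchedules",
  "apiVersion": "2015-10-31",
  "name": "[concat(parameters('automationAccountName'), '/', parameters('jobScheduleName'))]",
  "properties": {
    "schedule": {
      "name": "[parameters('scheduleName')]"
    },
    "runbook": {
      "name": "[parameters('runbookName')]"
    }
  }
}
```

The `name` segment combines the Automation account name and the job schedule identifier, following the parent/child naming convention ARM uses for nested resources.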
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups two or more Enterprise Azure Cache for Redis insta
> >
-1. In the **New Redis Cache** creation UI, click **Configure** to set up **Active geo-replication** in the **Advanced** tab.
+1. In the **Advanced** tab of the **New Redis Cache** creation UI, select **Enterprise** for **Clustering Policy**.
![Configure active geo-replication](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png)
+1. Click **Configure** to set up **Active geo-replication**.
+ 1. Create a new replication group for a first cache instance, or select an existing one from the list.
+
+    ![Link caches](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png)
azure-cache-for-redis Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-baseline.md
Microsoft manages the underlying infrastructure for Azure Cache for Redis and ha
**Responsibility**: Shared
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Cache**:
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-maven-intellij.md
Title: Create a Java function in Azure Functions using IntelliJ
description: Learn how to use IntelliJ to create a simple HTTP-triggered Java function, which you then publish to run in a serverless environment in Azure.
Last updated 07/01/2018
In this section, you use Azure Toolkit for IntelliJ to create a local Azure Func
![Deploy project to Azure](media/functions-create-first-java-intellij/deploy-functions-to-azure.png)
-1. If you don't have any Function App yet, click *No available function, click to create a new one*.
+1. If you don't have any Function App yet, click *+* in the *Function* line. Type in the function app name and choose a proper platform; here we can simply accept the default. Click *OK*, and the new function app you just created will be automatically selected. Click *Run* to deploy your functions.
![Create function app in Azure](media/functions-create-first-java-intellij/deploy-functions-create-app.png)
-1. Type in the function app name and choose proper subscription/platform/resource group/App Service plan, you can also create resource group/App Service plan here. Then, keep app settings unchanged, click *OK* and wait some minutes for the new function app to be created. After *Creating New Function App...* progress bar disappears.
-
- ![Deploy function app to Azure create app wizard](media/functions-create-first-java-intellij/deploy-functions-create-app-wizard.png)
-
-1. Select the function app you want to deploy to, (the new function app you just created will be automatically selected). Click *Run* to deploy your functions.
-
- ![Screenshot shows the Deploy Azure Functions dialog box.](media/functions-create-first-java-intellij/deploy-functions-run.png)
- ![Deploy function app to Azure log](media/functions-create-first-java-intellij/deploy-functions-log.png)

## Manage function apps from IDEA
azure-functions Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
Consider deploying Azure Web Application Firewall (WAF) as part of the networkin
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
If you have built-in custom security/audit logging within your function app, ena
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
You may also use Private Endpoints to perform network isolation. An Azure Privat
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
Additional information is available at the referenced links.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Web**:
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
All configuration options have now been moved towards the end of the script to he
Each configuration option is shown above on a new line. If you don't wish to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page. The available configuration options are:
+
| Name | Type | Description |
|------|------|-------------|
| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one. |
appInsights.trackTrace({message: 'this message will not be sent'}); // Not sent
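A call like the one above is typically dropped by a telemetry initializer that returns `false`. The sketch below imitates that contract with a plain function; the envelope shape and the helper are illustrative, not the SDK's actual types:

```javascript
// Illustrative filter in the style of an Application Insights telemetry
// initializer: returning false drops the item, returning nothing keeps it.
function makeMessageFilter(blockedText) {
  return function (envelope) {
    const message = envelope.baseData && envelope.baseData.message;
    if (message && message.indexOf(blockedText) !== -1) {
      return false; // telemetry item is not sent
    }
    // undefined return value lets the item through
  };
}

const filter = makeMessageFilter("will not be sent");
console.log(filter({ baseData: { message: "this message will not be sent" } })); // false
console.log(filter({ baseData: { message: "hello" } })); // undefined
```

With the real SDK, a function following this contract would be registered via `appInsights.addTelemetryInitializer(...)` so it runs before each item is queued.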
## Configuration

Most configuration fields are named such that they can be defaulted to false. All fields are optional except for `instrumentationKey`.
-| Name | Default | Description |
-|||-|
-| instrumentationKey | null | **Required**<br>Instrumentation key that you obtained from the Azure portal. |
-| accountId | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars |
-| sessionRenewalMs | 1800000 | A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes |
-| sessionExpirationMs | 86400000 | A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours |
-| maxBatchSizeInBytes | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it is immediately sent and a new batch is started |
-| maxBatchInterval | 15000 | How long to batch telemetry for before sending (milliseconds) |
-| disableExceptionTracking | false | If true, exceptions are not autocollected. Default is false. |
-| disableTelemetry | false | If true, telemetry is not collected or sent. Default is false. |
-| enableDebug | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you do not want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. |
-| loggingLevelConsole | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| loggingLevelTelemetry | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| diagnosticLogInterval | 10000 | (internal) Polling interval (in ms) for internal logging queue |
-| samplingPercentage | 100 | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. |
-| autoTrackPageVisitTime | false | If true, on a pageview, the previous instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. Default is false. |
-| disableAjaxTracking | false | If true, Ajax calls are not autocollected. Default is false. |
-| disableFetchTracking | true | If true, Fetch requests are not autocollected. Default is true |
-| overridePageViewDuration | false | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. |
-| maxAjaxCallsPerView | 500 | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. |
-| disableDataLossAnalysis | true | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. |
-| disableCorrelationHeaders | false | If false, the SDK will add two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. |
-| correlationHeaderExcludedDomains | | Disable correlation headers for specific domains |
-| correlationHeaderDomains | | Enable correlation headers for specific domains |
-| disableFlushOnBeforeUnload | false | Default false. If true, flush method will not be called when onBeforeUnload event triggers |
-| enableSessionStorageBuffer | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
-| isCookieUseDisabled | false | Default false. If true, the SDK will not store or read any data from cookies. Note that this disables the User and Session cookies and renders the usage blades and experiences useless. |
-| cookieDomain | null | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. |
-| isRetryDisabled | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) |
-| isStorageUseDisabled | false | If true, the SDK will not store or read any data from local and session storage. Default is false. |
-| isBeaconApiDisabled | true | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
-| onunloadDisableBeacon | false | Default false. when tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
-| sdkExtension | null | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. |
-| isBrowserLinkTrackingEnabled | false | Default is false. If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. |
-| appId | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it cannot be used automatically, but can be set manually in the configuration. Default is null |
-| enableCorsCorrelation | false | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false |
-| namePrefix | undefined | An optional value that will be used as name postfix for localStorage and cookie name.
-| enableAutoRouteTracking | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.
-| enableRequestHeaderTracking | false | If true, AJAX & Fetch request headers is tracked, default is false.
-| enableResponseHeaderTracking | false | If true, AJAX & Fetch request's response headers is tracked, default is false.
-| distributedTracingMode | `DistributedTracingModes.AI` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
-| enableAjaxErrorStatusText | false | Default false. If true, include response error data text in dependency event on failed AJAX requests.
-| enableAjaxPerfTracking | false | Default false. Flag to enable looking up and including additional browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics.
-| maxAjaxPerfLookupAttempts | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available), this is required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.
-| ajaxPerfLookupDelay | 25 | Defaults to 25 ms. The amount of time to wait before re-attempting to find the windows.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout().
-| enableUnhandledPromiseRejectionTracking | false | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections will not be reported.
+| Name | Description | Default |
+|------|-------------|---------|
+| instrumentationKey | **Required**<br>Instrumentation key that you obtained from the Azure portal. | string<br/>null |
+| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null |
+| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
+| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
+| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it is immediately sent and a new batch is started | numeric<br/>10000 |
+| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 |
+| disable&#8203;ExceptionTracking | If true, exceptions are not autocollected. | boolean<br/> false |
+| disableTelemetry | If true, telemetry is not collected or sent. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you do not want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
+| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
+| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
+| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
+| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
+| autoTrackPageVisitTime | If true, on a pageview, the previous instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. | boolean<br/>false |
+| disableAjaxTracking | If true, Ajax calls are not autocollected. | boolean<br/> false |
+| disableFetchTracking | If true, Fetch requests are not autocollected.|boolean<br/>true |
+| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. | boolean<br/>false |
+| maxAjaxCallsPerView | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
+| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
+| disable&#8203;CorrelationHeaders | If false, the SDK will add two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false |
+| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains | string[]<br/>undefined |
+| correlationHeader&#8203;ExcludePatterns | Disable correlation headers using regular expressions | regex[]<br/>undefined |
+| correlationHeader&#8203;Domains | Enable correlation headers for specific domains | string[]<br/>undefined |
+| disableFlush&#8203;OnBeforeUnload | If true, flush method will not be called when onBeforeUnload event triggers | boolean<br/> false |
+| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true |
+| cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK will not store or read any data from cookies. Note that this disables the User and Session cookies and renders the usage blades and experiences useless. `isCookieUseDisabled` is deprecated in favor of `disableCookiesUsage`; when both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) If `cookieCfg.enabled` is also defined, it takes precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| cookieDomain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
+| cookiePath | Custom cookie path. This is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
+| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false |
+| isStorageUseDisabled | If true, the SDK will not store or read any data from local and session storage. | boolean<br/> false |
+| isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
+| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/> false |
+| sdkExtension | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). | string<br/> null |
+| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
+| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it cannot be used automatically, but can be set manually in the configuration. |string<br/> null |
+| enable&#8203;CorsCorrelation | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
+| namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |
+| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false |
+| enableRequest&#8203;HeaderTracking | If true, AJAX & Fetch request headers are tracked. | boolean<br/> false |
+| enableResponse&#8203;HeaderTracking | If true, AJAX & Fetch request's response headers are tracked. | boolean<br/> false |
+| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps). | `DistributedTracingModes` or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` |
+| enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false |
+| enable&#8203;AjaxPerfTracking | Flag to enable looking up and including additional browser window.performance timings in the reported `ajax` (XHR and fetch) metrics. | boolean<br/> false |
+| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This is required because not all browsers populate window.performance before reporting the end of an XHR request, and for fetch requests the timings are added after completion. | numeric<br/> 3 |
+| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request. The time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
+| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections will not be reported. | boolean<br/> false |
+| disable&#8203;InstrumentationKey&#8203;Validation | If true, instrumentation key validation check is bypassed. | boolean<br/>false |
+| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available in the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
+| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent(), this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property is not null or undefined. Since v2.5.7 | boolean<br />false |
+| idLength | Identifies the default length used to generate new random session and user id values. Defaults to 22; the previous default value was 5 (v2.5.8 or earlier). If you need to keep the previous maximum length, set this value to 5. | numeric<br />22 |
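To illustrate how several of these settings fit together, here is a sketch of a configuration object using option names from the table above. The values are assumptions for a typical single-page application, not recommended defaults, and the instrumentation key is a placeholder.

```javascript
// Sketch of an SDK configuration object using option names from the table above.
// Values are illustrative assumptions; the instrumentation key is a placeholder.
const appInsightsConfig = {
  instrumentationKey: "00000000-0000-0000-0000-000000000000",
  enableAutoRouteTracking: true,    // send a new Pageview on each SPA route change
  enableCorsCorrelation: true,      // add Request-Id / Request-Context headers to CORS requests
  enableRequestHeaderTracking: true,
  enableResponseHeaderTracking: true,
  idLength: 22                      // default id length since v2.5.8
};

// With the @microsoft/applicationinsights-web package this object would be passed
// as: new ApplicationInsights({ config: appInsightsConfig }).loadAppInsights()
console.log(appInsightsConfig.enableAutoRouteTracking); // true
```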
+
+## Cookie Handling
+
+From version 2.6.0, cookie management is now available directly from the instance and can be disabled and re-enabled after initialization.
+
+If disabled during initialization via the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can now re-enable via the [ICookieMgr](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts) `setEnabled` function.
+
+The instance-based cookie management also replaces the previous CoreUtils global functions `disableCookies()`, `setCookie(...)`, `getCookie(...)`, and `deleteCookie(...)`. To benefit from the tree-shaking enhancements also introduced as part of version 2.6.0, you should no longer use the global functions.
+
+### ICookieMgrConfig
+
+Cookie configuration for instance-based cookie management, added in version 2.6.0.
+
+| Name | Description | Type and Default |
+||-||
+| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration will not store or read any data from cookies. | boolean<br/> true |
+| domain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. If not provided, the root `cookieDomain` value is used. | string<br/>null |
+| path | Specifies the path to use for the cookie. If not provided, the root `cookiePath` value is used. | string <br/> / |
+| getCookie | Function to fetch the named cookie value. If not provided, the internal cookie parsing / caching is used. | `(name: string) => string` <br/> null |
+| setCookie | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
+| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided, the internal cookie parsing / caching is used. | `(name: string, value: string) => void` <br/> null |
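The `getCookie`/`setCookie`/`delCookie` hooks above can be illustrated with an in-memory store. This is a sketch only (the store, domain, cookie names, and values are invented for the example), not the SDK's internal implementation.

```javascript
// In-memory backing store for the illustrative cookie overrides below.
const memoryStore = {};

const cookieCfg = {
  enabled: true,
  domain: "contoso.com",  // assumption: share cookies across contoso.com subdomains
  path: "/",
  getCookie: (name) => memoryStore[name] || "",
  setCookie: (name, value) => { memoryStore[name] = value; },
  delCookie: (name, value) => { delete memoryStore[name]; }
};

// Exercising the overrides directly:
cookieCfg.setCookie("ai_user", "abc123");
console.log(cookieCfg.getCookie("ai_user")); // "abc123"
cookieCfg.delCookie("ai_user", "");
console.log(cookieCfg.getCookie("ai_user")); // ""
```

In a real setup an object shaped like this would be supplied as the `cookieCfg` section of the SDK configuration at initialization.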
+
+### Simplified usage of the new instance-based cookie manager
+
+- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).setEnabled(true/false);
+- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).set("MyCookie", "the%20encoded%20value");
+- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).get("MyCookie");
+- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).del("MyCookie");
## Enable time-on-page tracking
azure-monitor Status Monitor V2 Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-api-reference.md
PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap
```
+> [!NOTE]
+> The naming of AppFilter in this context can be confusing. `AppFilter` sets the application name regex filter (HostingEnvironment.SiteName in the case of .NET on IIS), and `VirtualPathFilter` sets the virtual path regex filter (HostingEnvironment.ApplicationVirtualPath in the case of .NET on IIS). To instrument a single app, use VirtualPathFilter as follows: `Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap @(@{VirtualPathFilter="^/MyAppName$"; InstrumentationSettings=@{InstrumentationKey='<your ikey>'}})`
### Parameters
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
You can enable monitoring of an AKS cluster that's already deployed using one of
* [From Azure Monitor](#enable-from-azure-monitor-in-the-portal) or [directly from the AKS cluster](#enable-directly-from-aks-cluster-in-the-portal) in the Azure portal
* With the [provided Azure Resource Manager template](#enable-using-an-azure-resource-manager-template) by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with Azure CLI.
+If you're connecting an existing AKS cluster to an Azure Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription in which the Log Analytics workspace was created. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+
## Sign in to the Azure portal
Sign in to the [Azure portal](https://portal.azure.com).
After a few minutes, the command completes and returns JSON-formatted informatio
* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-onboard.md
Kubelet secure port (:10250) should be opened in the cluster's virtual network f
- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role in the Log Analytics workspace, configured with Container insights.
- Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported.
+- An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD tenant. This cannot currently be done with the Azure portal, but can be done with the Azure CLI or a Resource Manager template.
## Supported configurations
To enable Container insights, use one of the methods that's described in the fol
## Next steps
-Now that you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
+Now that you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-prometheus-integration.md
When a URL is specified, Container insights only scrapes the endpoint. When Kube
||--|--|-|-|
| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
| | `urls` | String | Comma-separated array | HTTP endpoint (Either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace:9100/metrics]`.|
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`.|
| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod. `monitor_kubernetes_pods` must be set to `true`. |
| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
Further information on how to monitor data usage and analyze cost is available i
## Next steps
-Learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads [here](container-insights-agent-config.md).
+Learn more about configuring the agent collection settings for stdout, stderr, and environmental variables from container workloads [here](container-insights-agent-config.md).
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/data-platform-metrics.md
Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data f
## What can you do with Azure Monitor Metrics?
The following table lists the different ways that you can use Metrics in Azure Monitor.
-| | |
+| | Description |
|:|:|
| **Analyze** | Use [metrics explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from different resources. |
| **Alert** | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
azure-monitor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/faq.md
Azure Data Explorer is a fast and highly scalable data exploration service for l
### How do I retrieve log data?
All data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). You can write your own queries or use solutions and insights that include log queries for a particular application or service. See [Overview of log queries in Azure Monitor](logs/log-query-overview.md).
- p
+
### Can I delete data from a Log Analytics workspace?
Data is removed from a workspace according to its [retention period](logs/manage-cost-storage.md#change-the-data-retention-period). You can delete specific data for privacy or compliance reasons. See [How to export and delete private data](logs/personal-data-mgmt.md#how-to-export-and-delete-private-data) for more information.
Under this condition, you will be prompted with the **Try Now** option when you
## SQL insights (preview)
### What versions of SQL Server are supported?
-See [Supported versions](insights/sql-insights-overview.md#supported-versions) for supported versions of SQL.
+We support SQL Server 2012 and all newer versions. See [Supported versions](insights/sql-insights-overview.md#supported-versions) for more details.
### What SQL resource types are supported?
+- Azure SQL Database
+- Azure SQL Managed Instance
+- SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the [SQL virtual machine](../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
+- Azure VMs (SQL Server running on virtual machines not registered with the [SQL virtual machine](../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
-- Azure SQL Database. Single database only, not databases in an Elastic Pool.
-- Azure SQL Managed Instance
-- Azure SQL virtual machines ([Windows](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md#get-started-with-sql-server-vms), [Linux](../azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md#create)) and Azure virtual machines that SQL Server is installed on.
-
-### What operating systems for the machine running SQL Server are supported?
-Any OS that supports running supported version of SQL.
+See [Supported versions](insights/sql-insights-overview.md#supported-versions) for more details and for details about scenarios with no support or limited support.
-### What operating system for the remote monitoring server are supported?
+### What operating systems for the virtual machine running SQL Server are supported?
+We support all operating systems specified by the [Windows](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md#get-started-with-sql-server-vms) and [Linux](../azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md#create) documentation for SQL Server on Azure Virtual Machines.
-Ubuntu 18.04 is currently the only operating system supported.
+### What operating systems for the monitoring virtual machine are supported?
+Ubuntu 18.04 is currently the only operating system supported for the monitoring virtual machine.
-### Where will the monitoring data be stored in Log Analytics
-All of the monitoring data is stored in the **InsightsMetrics** table. The **Origin** column has the value *solutions.azm.ms/telegraf/SqlInsights*. The **Namespace** column has values that start with *sqlserver_*.
+### Where will the monitoring data be stored in Log Analytics?
+All of the monitoring data is stored in the **InsightsMetrics** table. The **Origin** column has the value `solutions.azm.ms/telegraf/SqlInsights`. The **Namespace** column has values that start with `sqlserver_`.
### How often is data collected?
-See [Data collected by SQL insights](../insights/../azure-monitor/insights/sql-insights-overview.md#data-collected-by-sql-insights) for details on the frequency that different data is collected.
+The frequency of data collection is customizable. See [Data collected by SQL insights](../insights/../azure-monitor/insights/sql-insights-overview.md#data-collected-by-sql-insights) for details on the default frequencies and see [Create SQL monitoring profile](../insights/../azure-monitor/insights/sql-insights-enable.md#create-sql-monitoring-profile) for instructions on customizing frequencies.
## Next steps
If your question isn't answered here, you can refer to the following forums for additional questions and answers.
If your question isn't answered here, you can refer to the following forums to a
- [Log Analytics](/answers/topics/azure-monitor.html)
- [Application Insights](/answers/topics/azure-monitor.html)
-For general feedback on Azure Monitor please visit the [feedback forum](https://feedback.azure.com/forums/34192--general-feedback).
+For general feedback on Azure Monitor please visit the [feedback forum](https://feedback.azure.com/forums/34192--general-feedback).
azure-monitor Resource Group Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/resource-group-insights.md
By default, the resources are grouped by app layer and resource type. **App laye
The resource group insights page provides several other tools scoped to help you diagnose issues
- | | |
+ | Tool | Description |
| - |:--|
| [**Alerts**](../alerts/alerts-overview.md) | View, create, and manage your alerts. |
| [**Metrics**](../data-platform.md) | Visualize and explore your metric based data. |
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
Last updated 03/15/2021
# Monitor your SQL deployments with SQL insights (preview)
-SQL insights monitors the performance and health of your SQL deployments. It can help deliver predictable performance and availability of vital workloads you have built around a SQL backend by identifying performance bottlenecks and issues. SQL insights stores its data in [Azure Monitor Logs](../logs/data-platform-logs.md), which allows it to deliver powerful aggregation and filtering and to analyze data trends over time. You can view this data from Azure Monitor in the views we ship as part of this offering and you can delve directly into the Log data to run queries and analyze trends.
+SQL insights is a comprehensive solution for monitoring any product in the [Azure SQL family](../../azure-sql/index.yml). SQL insights uses [dynamic management views](../../azure-sql/database/monitoring-with-dmvs.md) to expose the data you need to monitor health, diagnose problems, and tune performance.
-SQL insights does not install anything on your SQL IaaS deployments. Instead, it uses dedicated monitoring virtual machines to remotely collect data for both SQL PaaS and SQL IaaS deployments. The SQL insights monitoring profile allows you to manage the data sets to be collected based upon the type of SQL, including Azure SQL DB, Azure SQL Managed Instance, and SQL server running on an Azure virtual machine.
+SQL insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and remotely gather data. The gathered data is stored in [Azure Monitor Logs](../logs/data-platform-logs.md), enabling easy aggregation, filtering, and trend analysis. You can view the collected data from the SQL insights [workbook template](../visualize/workbooks-overview.md), or you can delve directly into the data using [log queries](../logs/get-started-queries.md).
## Pricing
+There is no direct cost for SQL insights. All costs are incurred by the virtual machines that gather the data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
-There's no direct cost for SQL insights, but you're charged for its activity in the Log Analytics workspace. Based on the pricing that's published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/), SQL insights is billed for:
+**Virtual machines**
-- Data ingested from agents and stored in the workspace.
-- Alert rules based on log data.
-- Notifications sent from alert rules.
+For virtual machines, you're charged based on the pricing published on the [virtual machines pricing page](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/). The number of virtual machines required will vary based on the number of connection strings you want to monitor. We recommend allocating one virtual machine of size Standard_B2s for every 100 connection strings. See [Azure virtual machine requirements](sql-insights-enable.md#azure-virtual-machine-requirements) for more details.
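The sizing guidance above reduces to a ceiling division. As a hypothetical helper (the function name is invented for illustration, not part of any SDK):

```javascript
// One Standard_B2s monitoring VM is recommended per 100 connection strings,
// so round the ratio up to the next whole machine.
function monitoringVmsNeeded(connectionStringCount) {
  return Math.ceil(connectionStringCount / 100);
}

console.log(monitoringVmsNeeded(100)); // 1
console.log(monitoringVmsNeeded(250)); // 3
```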
-The log size varies by the string lengths of the data collected, and it can increase with the amount of database activity.
+**Log Analytics workspaces**
+
+For the Log Analytics workspaces, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). The Log Analytics workspaces used by SQL insights will incur costs for data ingestion, data retention, and (optionally) data export. Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data will subsequently vary based on your database activity and the collection settings defined in your [monitoring profiles](sql-insights-enable.md#create-sql-monitoring-profile).
+
+**Alert rules**
+
+For alert rules in Azure Monitor, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). If you choose to [create alerts with SQL insights](sql-insights-alerts.md), you're charged for any alert rules created and any notifications sent.
## Supported versions
SQL insights supports the following versions of SQL Server:
-
- SQL Server 2012 and newer

SQL insights supports SQL Server running in the following environments:
-
- Azure SQL Database
- Azure SQL Managed Instance
-- Azure SQL VMs
-- Azure VMs
+- SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the [SQL virtual machine](../../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
+- Azure VMs (SQL Server running on virtual machines not registered with the [SQL virtual machine](../../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
SQL insights has no support or limited support for the following:
-
-- SQL Server running on virtual machines outside of Azure are currently not supported.
-- Azure SQL Database Elastic Pools: Limited support during the Public Preview. Will be fully supported at general availability.
-- Azure SQL Serverless Deployments: Like Active Geo-DR, the current monitoring agents will prevent serverless deployment from going to sleep. This could cause higher than expected costs from serverless deployments.
-- Readable Secondaries: Currently only deployment types with a single readable secondary endpoint (Business Critical or Hyperscale) will be supported. When Hyperscale deployments support named replicas, we will be capable of supporting multiple readable secondary endpoints for a single logical database.
-- Azure Active Directories: Currently we only support SQL Logins for the Monitoring Agent. We plan to support Azure Active Directories in an upcoming release and have no current support for SQL VM authentication using Active Directories on a bespoke domain controller.
-
+- **Non-Azure instances**: SQL Server running on virtual machines outside of Azure is not supported.
+- **Azure SQL Database elastic pools**: Metrics cannot be gathered for elastic pools. Metrics cannot be gathered for databases within elastic pools.
+- **Azure SQL Database low service tiers**: Metrics cannot be gathered for databases on Basic, S0, S1, and S2 [service tiers](../../azure-sql/database/resource-limits-dtu-single-databases.md).
+- **Azure SQL Database serverless tier**: Metrics can be gathered for databases using the serverless compute tier. However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
+- **Secondary replicas**: Metrics can only be gathered for a single secondary replica per database. If a database has more than one secondary replica, only one can be monitored.
+- **Authentication with Azure Active Directory**: The only supported method of [authentication](../../azure-sql/database/logins-create-manage.md#authentication-and-authorization) for monitoring is SQL authentication. For SQL Server on Azure VM, authentication using Active Directory on a custom domain controller is not supported.
## Open SQL insights
Open SQL insights by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click on a tile to load the experience for the type of SQL you are monitoring.

:::image type="content" source="media/sql-insights/portal.png" alt-text="SQL insights in Azure portal.":::
-
## Enable SQL insights
-See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to enable SQL insights in addition to steps for troubleshooting.
+See [Enable SQL insights](sql-insights-enable.md) for instructions on enabling SQL insights.
+## Troubleshoot SQL insights
+See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) for instructions on troubleshooting SQL insights.
## Data collected by SQL insights
-
-SQL insights only supports the remote method of monitoring SQL. We do not install any agents on the VMs that are running SQL Server. One or more dedicated monitoring VMs are required which we use to remotely collect data from your SQL resources.
-
-Each of these monitoring VMs will have the [Azure Monitor agent](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview) installed on them along with the Workload insights (WLI) extension.
-
-The WLI extension includes the open source [telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). We use [data collection rules](https://docs.microsoft.com/azure/azure-monitor/agents/data-collection-rule-overview) to configure the [sqlserver input plugin](https://www.influxdata.com/integration/microsoft-sql-server/) to specify the data to collect from Azure SQL DB, Azure SQL Managed Instance, and SQL Server running on an Azure VM.
-
-The following tables summarize the following:
-- Name of the query in the sqlserver telegraf plugin
-- Dynamic managed views the query calls
-- Namespace the data appears under in the *InsighstMetrics* table
-- Whether the data is collected by default
-- How often the data is collected by default
-
-You can modify which queries are run and data collection frequency when you create your monitoring profile.
-
-### Azure SQL DB data
-
-| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
-|:|:|:|:|:|
-| AzureSQLDBWaitStats | sys.dm_db_wait_stats | sqlserver_azuredb_waitstats | No | NA |
-| AzureSQLDBResourceStats | sys.dm_db_resource_stats | sqlserver_azure_db_resource_stats | Yes | 60 seconds |
-| AzureSQLDBResourceGovernance | sys.dm_user_db_resource_governance | sqlserver_db_resource_governance | Yes | 60 seconds |
-| AzureSQLDBDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.database_files<br>tempdb.sys.database_files | sqlserver_database_io | Yes | 60 seconds |
-| AzureSQLDBServerProperties | sys.dm_os_job_object<br>sys.database_files<br>sys.[databases]<br>sys.[database_service_objectives] | sqlserver_server_properties | Yes | 60 seconds |
-| AzureSQLDBOsWaitstats | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
-| AzureSQLDBMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
-| AzureSQLDBPerformanceCounters | sys.dm_os_performance_counters<br>sys.databases | sqlserver_performance | Yes | 60 seconds |
-| AzureSQLDBRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
-| AzureSQLDBSchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
-
-### Azure SQL managed instance data
-
-| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
-|:|:|:|:|:|
-| AzureSQLMIResourceStats | sys.server_resource_stats | sqlserver_azure_db_resource_stats | Yes | 60 seconds |
-| AzureSQLMIResourceGovernance | sys.dm_instance_resource_governance | sqlserver_instance_resource_governance | Yes | 60 seconds |
-| AzureSQLMIDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.master_files | sqlserver_database_io | Yes | 60 seconds |
-| AzureSQLMIServerProperties | sys.server_resource_stats | sqlserver_server_properties | Yes | 60 seconds |
-| AzureSQLMIOsWaitstats | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
-| AzureSQLMIMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
-| AzureSQLMIPerformanceCounters | sys.dm_os_performance_counters<br>sys.databases | sqlserver_performance | Yes | 60 seconds |
-| AzureSQLMIRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
-| AzureSQLMISchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
-
-### SQL Server data
-
-| Query Name | DMV | Namespace | Enabled by Default | Default collection frequency |
-|:|:|:|:|:|
-| SQLServerPerformanceCounters | sys.dm_os_performance_counters | sqlserver_performance | Yes | 60 seconds |
-| SQLServerWaitStatsCategorized | sys.dm_os_wait_stats | sqlserver_waitstats | Yes | 60 seconds |
-| SQLServerDatabaseIO | sys.dm_io_virtual_file_stats<br>sys.master_files | sqlserver_database_io | Yes | 60 seconds |
-| SQLServerProperties | sys.dm_os_sys_info | sqlserver_server_properties | Yes | 60 seconds |
-| SQLServerMemoryClerks | sys.dm_os_memory_clerks | sqlserver_memory_clerks | Yes | 60 seconds |
-| SQLServerSchedulers | sys.dm_os_schedulers | sqlserver_schedulers | No | NA |
-| SQLServerRequests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | sqlserver_requests | No | NA |
-| SQLServerVolumeSpace | sys.master_files | sqlserver_volume_space | Yes | 60 seconds |
-| SQLServerCpu | sys.dm_os_ring_buffers | sqlserver_cpu | Yes | 60 seconds |
-| SQLServerAvailabilityReplicaStates | sys.dm_hadr_availability_replica_states<br>sys.availability_replicas<br>sys.availability_groups<br>sys.dm_hadr_availability_group_states | sqlserver_hadr_replica_states | | 60 seconds |
-| SQLServerDatabaseReplicaStates | sys.dm_hadr_database_replica_states<br>sys.availability_replicas | sqlserver_hadr_dbreplica_states | | 60 seconds |
---
+SQL insights performs all monitoring remotely. We do not install any agents on the virtual machines running SQL Server.
+
+SQL insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual machine will have the [Azure Monitor agent](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview) and the Workload insights (WLI) extension installed. The WLI extension includes the open source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL insights uses [data collection rules](https://docs.microsoft.com/azure/azure-monitor/agents/data-collection-rule-overview) to specify the data collection settings for Telegraf's [SQL Server plugin](https://www.influxdata.com/integration/microsoft-sql-server/).
+
+Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The tables below describe the available data. You can customize which data sets to collect and the frequency of collection when you [create a monitoring profile](sql-insights-enable.md#create-sql-monitoring-profile).
+
+The tables below have the following columns:
+- **Friendly Name**: Name of the query as shown on the Azure portal when creating a monitoring profile
+- **Configuration Name**: Name of the query as shown on the Azure portal when editing a monitoring profile
+- **Namespace**: Name of the query as found in a Log Analytics workspace. This identifier appears in the **InsightsMetrics** table as the `Namespace` property in the `Tags` column
+- **DMVs**: The dynamic management views (DMVs) used to produce the data set
+- **Enabled by Default**: Whether the data is collected by default
+- **Default Collection Frequency**: How often the data is collected by default
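+
+As an illustration of how the **Namespace** identifier can be used, a Log Analytics query along the following lines filters collected rows for one data set. This is a sketch only; the exact shape of the `Tags` payload in your workspace may differ.
+
+```kusto
+// Sketch: pull the last hour of wait stats rows collected by SQL insights.
+// Assumes the Namespace identifier is carried inside the Tags JSON, per the column description above.
+InsightsMetrics
+| where TimeGenerated > ago(1h)
+| extend Tags = todynamic(Tags)
+| where tostring(Tags.Namespace) == "sqlserver_waitstats"
+| project TimeGenerated, Name, Val, Tags
+```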
+
+### Data for Azure SQL Database
+
+| Friendly Name | Configuration Name | Namespace | DMVs | Enabled by Default | Default Collection Frequency |
+|:|:|:|:|:|:|
+| DB wait stats | AzureSQLDBWaitStats | sqlserver_azuredb_waitstats | sys.dm_db_wait_stats | No | NA |
+| DBO wait stats | AzureSQLDBOsWaitstats | sqlserver_waitstats |sys.dm_os_wait_stats | Yes | 60 seconds |
+| Memory clerks | AzureSQLDBMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
+| Database IO | AzureSQLDBDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.database_files<br>tempdb.sys.database_files | Yes | 60 seconds |
+| Server properties | AzureSQLDBServerProperties | sqlserver_server_properties | sys.dm_os_job_object<br>sys.database_files<br>sys.[databases]<br>sys.[database_service_objectives] | Yes | 60 seconds |
+| Performance counters | AzureSQLDBPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters<br>sys.databases | Yes | 60 seconds |
+| Resource stats | AzureSQLDBResourceStats | sqlserver_azure_db_resource_stats | sys.dm_db_resource_stats | Yes | 60 seconds |
+| Resource governance | AzureSQLDBResourceGovernance | sqlserver_db_resource_governance | sys.dm_user_db_resource_governance | Yes | 60 seconds |
+| Requests | AzureSQLDBRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | NA |
+| Schedulers| AzureSQLDBSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | NA |
+
+### Data for Azure SQL Managed Instance
+
+| Friendly Name | Configuration Name | Namespace | DMVs | Enabled by Default | Default Collection Frequency |
+|:|:|:|:|:|:|
+| Wait stats | AzureSQLMIOsWaitstats | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds |
+| Memory clerks | AzureSQLMIMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
+| Database IO | AzureSQLMIDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.master_files | Yes | 60 seconds |
+| Server properties | AzureSQLMIServerProperties | sqlserver_server_properties | sys.server_resource_stats | Yes | 60 seconds |
+| Performance counters | AzureSQLMIPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters<br>sys.databases| Yes | 60 seconds |
+| Resource stats | AzureSQLMIResourceStats | sqlserver_azure_db_resource_stats | sys.server_resource_stats | Yes | 60 seconds |
+| Resource governance | AzureSQLMIResourceGovernance | sqlserver_instance_resource_governance | sys.dm_instance_resource_governance | Yes | 60 seconds |
+| Requests | AzureSQLMIRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | NA |
+| Schedulers | AzureSQLMISchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | NA |
+
+### Data for SQL Server
+
+| Friendly Name | Configuration Name | Namespace | DMVs | Enabled by Default | Default Collection Frequency |
+|:|:|:|:|:|:|
+| Wait stats | SQLServerWaitStatsCategorized | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds |
+| Memory clerks | SQLServerMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
+| Database IO | SQLServerDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.master_files | Yes | 60 seconds |
+| Server properties | SQLServerProperties | sqlserver_server_properties | sys.dm_os_sys_info | Yes | 60 seconds |
+| Performance counters | SQLServerPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters | Yes | 60 seconds |
+| Volume space | SQLServerVolumeSpace | sqlserver_volume_space | sys.master_files | Yes | 60 seconds |
+| SQL Server CPU | SQLServerCpu | sqlserver_cpu | sys.dm_os_ring_buffers | Yes | 60 seconds |
+| Schedulers | SQLServerSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | NA |
+| Requests | SQLServerRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | NA |
+| Availability Replica States | SQLServerAvailabilityReplicaStates | sqlserver_hadr_replica_states | sys.dm_hadr_availability_replica_states<br>sys.availability_replicas<br>sys.availability_groups<br>sys.dm_hadr_availability_group_states | No | 60 seconds |
+| Availability Database Replicas | SQLServerDatabaseReplicaStates | sqlserver_hadr_dbreplica_states | sys.dm_hadr_database_replica_states<br>sys.availability_replicas | No | 60 seconds |
## Next steps
-See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to enable SQL insights.
-See [Frequently asked questions](../faq.md#sql-insights-preview) for frequently asked questions about SQL insights.
+- See [Enable SQL insights](sql-insights-enable.md) for instructions on enabling SQL insights
+- See [Frequently asked questions](../faq.md#sql-insights-preview) for answers to common questions about SQL insights
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-platform-logs.md
Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log
## What can you do with Azure Monitor Logs?

The following table describes some of the ways that you can use Logs in Azure Monitor:
-| | |
+| | Description |
|:|:|
| **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data using a powerful analysis engine |
| **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillabl
For data from nodes hosted in Azure, you can get the **size** of ingested data __per Azure subscription__ by using the `_SubscriptionId` property as follows:

```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
-| summarize BillableDataBytes = sum(BillableDataBytes) by _SubscriptionId | sort by BillableDataBytes nulls last
+| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId | sort by BillableDataBytes nulls last
```

To get data volume by resource group, you can parse `_ResourceId`:
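One possible sketch, assuming `_ResourceId` follows the standard `/subscriptions/<subscription-id>/resourcegroups/<resource-group>/...` layout (the index-based `split` below depends on that layout):

```kusto
find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
// Element 4 of the slash-separated resource ID is the resource group name
| extend resourceGroup = tostring(split(_ResourceId, "/")[4])
| summarize BillableDataBytes = sum(_BilledSize) by resourceGroup
| sort by BillableDataBytes nulls last
```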
azure-monitor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-baseline.md
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - microsoft.insights**:
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - microsoft.insights**:
For the underlying platform which is managed by Microsoft, Microsoft treats all
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - microsoft.insights**:
If using live streaming APM capabilities, make the channel secure with a secret
- [How to create a Key Vault](../key-vault/secrets/quick-create-portal.md) -- [How to provide Key Vault authentication with a managed identity](/azure/key-vault/general/assign-access=policy-portal)
+- [How to provide Key Vault authentication with a managed identity](/azure/key-vault/general/assign-access-policy-portal)
**Responsibility**: Customer
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-workbooks.md
Each section has its own advanced settings, which are accessible via the setting
![Screenshot of the Advanced Settings dialog in the Virtual Machines Workbook section of Azure Monitor. The icon that opens the dialog is highlighted.](media/vminsights-workbooks/007-settings-expanded.png)
-| | |
+| Setting | Description |
| - |:--|
-| **Custom width** | Makes an item an arbitrary size, so you can fit many items on a single line allowing you to better organize your charts and tables into rich interactive reports. |
-| **Conditionally visible** | Specify to hide steps based on a parameter when in reading mode. |
-| **Export a parameter**| Allow a selected row in the grid or chart to cause later steps to change values or become visible. |
-| **Show query when not editing** | Displays the query above the chart or table even when in reading mode.
-| **Show open in analytics button when not editing** | Adds the blue Analytics icon to the right-hand corner of the chart to allow one-click access.|
+| Custom width | Makes an item an arbitrary size, so you can fit many items on a single line allowing you to better organize your charts and tables into rich interactive reports. |
+| Conditionally visible | Specify to hide steps based on a parameter when in reading mode. |
+| Export a parameter| Allow a selected row in the grid or chart to cause later steps to change values or become visible. |
+| Show query when not editing | Displays the query above the chart or table even when in reading mode. |
+| Show open in analytics button when not editing | Adds the blue Analytics icon to the right-hand corner of the chart to allow one-click access.|
Most of these settings are fairly intuitive, but to understand **Export a parameter** it is better to examine a workbook that makes use of this functionality.
Parameters are linear, starting from the top of a workbook and flowing down to l
There are four different types of parameters, which are currently supported:
-| | |
+| Parameter | Description |
| - |:--|
-| **Text** | Allows the user to edit a text box, and you can optionally supply a query to fill in the default value. |
-| **Drop down** | Allows the user to choose from a set of values. |
-| **Time range picker**| Allows the user to choose from a predefined set of time range values, or pick from a custom time range.|
-| **Resource picker** | Allows the user to choose from the resources selected for the workbook.|
+| Text | Allows the user to edit a text box, and you can optionally supply a query to fill in the default value. |
+| Drop down | Allows the user to choose from a set of values. |
+| Time range picker| Allows the user to choose from a predefined set of time range values, or pick from a custom time range.|
+| Resource picker | Allows the user to choose from the resources selected for the workbook.|
### Using a text parameter
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-release-notes.md
+
+ Title: Release Notes for Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs
+description: Provides release notes for the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 03/22/2021+++
+# Release Notes for Azure Application Consistent Snapshot tool (preview)
+
+This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+
+## March-2021
+
+### AzAcSnap v5.0 Preview (Build:20210318.30771)
+
+AzAcSnap v5.0 Preview (Build:20210318.30771) has been released with the following fixes and improvements:
+
+- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with SAP HANA](azacsnap-installation.md#enable-communication-with-sap-hana) section.
+- Fix to allow a [restore](azacsnap-cmd-ref-restore.md) with volumes configured with Manual QOS.
+- Added mutex control to throttle SSH connections for Azure Large Instance.
+- Fix installer for handling path names with spaces and other related issues.
+- In preparation for supporting other database servers, changed the optional parameter '--hanasid' to '--dbsid'.
+
+Download the [latest release](https://aka.ms/azacsnapdownload) of the installer and review how to [get started](azacsnap-get-started.md).
+
+## Next steps
+
+- [Get started with Azure Application Consistent Snapshot tool](azacsnap-get-started.md)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 03/09/2021 Last updated : 03/25/2021 # FAQs About Azure NetApp Files
The volume size reported by the SMB client is the maximum size the Azure NetApp
As a best practice, set the maximum tolerance for computer clock synchronization to five minutes. For more information, see [Maximum tolerance for computer clock synchronization](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj852172(v=ws.11)).
+### How can I obtain the IP address of an SMB volume via the portal?
+
+Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** -> **mountTargets**.
+ ## Capacity management FAQs ### How do I monitor usage for capacity pool and volume of Azure NetApp Files?
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 03/19/2021 Last updated : 03/25/2021 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Cloud Volumes ONTAP and Azure NetApp Files: SAP HANA system migration made easy](https://blog.netapp.com/cloud-volumes-ontap-and-azure-netapp-files-sap-hana-system-migration-made-easy/) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 1](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2078737) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 2](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2117130)
+* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 3](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2215948)
## Azure VMware Solutions
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/concept-security-configuration.md
+
+ Title: Azure Percept firewall configuration and security recommendations
+description: Learn more about Azure Percept firewall configuration and security recommendations
++++ Last updated : 03/25/2021++
+# Azure Percept firewall configuration and security recommendations
+
+Review the guidelines below for information on configuring firewalls and general security best practices with Azure Percept.
+
+## Configuring firewalls for Azure Percept DK
+
+If your networking setup requires that you explicitly permit connections made from Azure Percept DK devices, review the following list of components.
+
+This checklist is a starting point for firewall rules:
+
+|URL (* = wildcard)|Outbound TCP Ports|Usage|
+|-|||
+|*.auth.azureperceptdk.azure.net|443|Azure DK SOM Authentication and Authorization|
+|*.auth.projectsantacruz.azure.net|443|Azure DK SOM Authentication and Authorization|
+
+Additionally, review the list of [connections used by Azure IoT Edge](https://docs.microsoft.com/azure/iot-edge/production-checklist#allow-connections-from-iot-edge-devices).
+
+## Additional recommendations for deployment to production
+
+Azure Percept DK offers a great variety of security capabilities out of the box. In addition to those powerful security features included in the current release, Microsoft also suggests the following guidelines when considering production deployments:
+
+- Strong physical protection of the device itself
+- Ensure data-at-rest encryption is enabled
+- Continuously monitor the device posture and quickly respond to alerts
+- Limit the number of administrators who have access to the device
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-8020-integration.md
Previously updated : 02/18/2021 Last updated : 03/24/2021 # Azure Percept DK 80/20 integration overview
-The Azure Percept DK and Audio Accessory were designed to integrate with the [80/20 railing system](https://8020.net/).
+The Azure Percept DK and Audio Accessory were designed to integrate with the [80/20 T-slot aluminum building system](https://8020.net/).
## 80/20 features
-Each hardware component is built with the notches and protrusions to fit in the 1010 extrusion type. This integration enables customers and solution builders to more easily extend their proof of concepts to production environments.
+The Azure Percept DK carrier board, Azure Percept Vision device, and Azure Percept Audio accessory are manufactured with built-in 80/20 1010 connectors, which allow for endless mounting configurations with 80/20 rails. This integration enables customers and solution builders to more easily extend their proof of concepts to production environments.
Check out this video for more information on how to use Azure Percept DK with 80/20:
Check out this video for more information on how to use Azure Percept DK with 80
## Next steps
-Learn about the [Azure Percept Audio accessory](./overview-azure-percept-audio.md).
+> [!div class="nextstepaction"]
+> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-ai-models.md
Title: Azure Percept AI models description: Learn more about the AI models available for prototyping and deployment--++ Last updated 03/23/2021
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-audio.md
Title: Azure Percept Audio device overview description: Learn more about Azure Percept Audio--++ Last updated 03/23/2021
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-dk.md
Title: Azure Percept DK overview description: Learn more about the Azure Percept DK--++ Last updated 03/23/2021
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-studio.md
Title: Azure Percept Studio overview description: Learn more about Azure Percept Studio--++ Last updated 03/23/2021
The workflows in Azure Percept Studio integrate many Azure AI and IoT services,
Regardless of if you are a beginner or an advanced AI model and solution developer, working on a prototype, or moving to a production solution, Azure Percept Studio offers access to workflows you can use to reduce friction around building edge AI solutions.
+## Video walkthrough
+ </br> > [!VIDEO https://www.youtube.com/embed/rZsUuCytZWY]
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept.md
Title: Azure Percept overview description: Learn more about the Azure Percept platform--++ Last updated 03/23/2021
The integration challenges one faces when attempting to deploy edge AI solutions
The main components of Azure Percept are:
-1. [Azure Percept DK.](./overview-azure-percept-dk.md)
+- [Azure Percept DK.](./overview-azure-percept-dk.md)
- A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders, and customers. > [!div class="nextstepaction"] > [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-1. Services and workflows that accelerate edge AI model and solution development.
+- Services and workflows that accelerate edge AI model and solution development.
- Development workflows and pre-built models accessible from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). - Model development services. - Device management services for scaling. - End-to-end security.
-1. AI hardware reference design and certification programs.
+- AI hardware reference design and certification programs.
- Provides the ecosystem of hardware developers with patterns and best practices for developing edge AI hardware that can be integrated easily with Azure AI and IoT services.
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-percept-security.md
Title: Azure Percept security overview description: Learn more about Azure Percept security--++ Previously updated : 02/18/2021 Last updated : 03/24/2021 # Azure Percept security overview
-Azure Percept DK devices are designed with a hardware root of trust: additional built-in security on every device. It helps protect privacy-sensitive sensors like cameras and microphones, inference data, and enables device authentication and authorization for Azure Percept Studio services.
+Azure Percept devices are designed with a hardware root of trust. This built-in security helps protect inference data and privacy-sensitive sensors like cameras and microphones and enables device authentication and authorization for Azure Percept Studio services.
> [!NOTE] > The Azure Percept DK is licensed for use in development and test environments only.
Azure Percept DK devices are designed with a hardware root of trust: additional
### Azure Percept DK
-Azure Percept DK includes a Trusted Platform Module (TPM) version 2.0 which can be utilized to connect the device to Azure Device Provisioning Services with additional security. TPM is an industry-wide, ISO standard from the Trusted Computing Group, and you can read more about TPM at the [complete TPM 2.0 spec](https://trustedcomputinggroup.org/resource/tpm-library-specification/) or the ISO/IEC 11889 spec. For more information on how DPS can provision devices in a secure manner see [Azure IoT Hub Device Provisioning Service - TPM Attestation](../iot-dps/concepts-tpm-attestation.md).
+Azure Percept DK includes a Trusted Platform Module (TPM) version 2.0, which can be utilized to connect the device to Azure Device Provisioning Services (DPS) with additional security. TPM is an industry-wide, ISO standard from the Trusted Computing Group. Check out the [Trusted Computing Group website](https://trustedcomputinggroup.org/resource/tpm-library-specification/) for more information about the complete TPM 2.0 spec or the ISO/IEC 11889 spec. For more information on how DPS can provision devices in a secure manner, see [Azure IoT Hub Device Provisioning Service - TPM Attestation](../iot-dps/concepts-tpm-attestation.md).
-### Azure Percept system on module (SOM)
+### Azure Percept system-on-modules (SoMs)
-Azure Percept DK vision-enabled system on module (SOM) and the Azure Percept Audio accessory SOM both include a Micro Controller Unit (MCU) for protecting access to the embedded AI sensors. At every boot, the MCU firmware authenticates and authorizes the AI accelerator with Azure Percept Studio services using the Device Identifier Composition Engine (DICE) architecture. DICE works by breaking up boot into layers and creating secrets unique to each layer and configuration based on a Unique Device Secret (UDS). If different code or configuration is booted, at any point in the chain, the secrets will be different. You can read more about DICE at the [DICE workgroup spec](https://trustedcomputinggroup.org/work-groups/dice-architectures/). For configuring access to Azure Percept Studio and required services see the **Configuring firewalls for Azure Percept DK** below.
+The Azure Percept Vision system-on-module (SoM) and the Azure Percept Audio SoM both include a microcontroller unit (MCU) for protecting access to the embedded AI sensors. At every boot, the MCU firmware authenticates and authorizes the AI accelerator with Azure Percept Studio services using the Device Identifier Composition Engine (DICE) architecture. DICE works by breaking up boot into layers and creating Unique Device Secrets (UDS) for each layer and configuration. If different code or configuration is booted at any point in the chain, the secrets will be different. You can read more about DICE at the [DICE workgroup spec](https://trustedcomputinggroup.org/work-groups/dice-architectures/). For configuring access to Azure Percept Studio and required services see the article on [configuring firewalls for Azure Percept DK](concept-security-configuration.md).
-Azure Percept devices use the hardware root trust to secure firmware. The boot ROM ensures integrity of firmware between ROM and operating system (OS) loader which in turn ensures integrity of the other software components creating a chain of trust.
+Azure Percept devices use the hardware root of trust to secure firmware. The boot ROM ensures integrity of firmware between ROM and operating system (OS) loader, which in turn ensures integrity of the other software components, creating a chain of trust.
## Services ### IoT Edge
-Azure Percept DK connects to Azure Percept Studio with additional security and other Azure services utilizing Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge enabled device. IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](../iot-edge/iot-edge-security-manager.md).
+Azure Percept DK connects to Azure Percept Studio with additional security and other Azure services utilizing Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge-enabled device. IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge-enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](../iot-edge/iot-edge-security-manager.md).
### Device Update for IoT Hub

Device Update for IoT Hub enables more secure, scalable, and reliable over-the-air updating that brings renewable security to Azure Percept devices. It provides rich management controls and update compliance through insights. Azure Percept DK includes a pre-integrated device update solution providing resilient update (A/B) from firmware to OS layers.
-<!I think the below topics need to be somewhere else, (i.e. not on the main page)
->
-
-## Configuring firewalls for Azure Percept DK
-
-If your networking setup requires that you explicitly permit connections made from Azure Percept DK devices, review the following list of components.
-
-This checklist is a starting point for firewall rules:
-
-|URL (* = wildcard) |Outbound TCP Ports| Usage|
-|-|||
-|*.auth.azureperceptdk.azure.net| 443| Azure DK SOM Authentication and Authorization|
-|*.auth.projectsantacruz.azure.net| 443| Azure DK SOM Authentication and Authorization|
-
-Additionally, review the list of [connections used by Azure IoT Edge](../iot-edge/production-checklist.md#allow-connections-from-iot-edge-devices).
-
-<!
-## Additional Recommendations for Deployment to Production
-
-Azure Percept DK offers a great variety of security capabilities out of the box. In addition to those powerful security features included in the current release, Microsoft also suggests the following guidelines when considering production deployments:
--- Strong physical protection of the device itself-- Ensuring data at rest encryption is enabled-- Continuously monitoring the device posture and quickly responding to alerts-- Limiting the number of administrators who have access to the device
->
-- ## Next steps
-Learn about the available [Azure Percept AI models](./overview-ai-models.md).
+> [!div class="nextstepaction"]
+> [Learn more about firewall configurations and security recommendations](concept-security-configuration.md)
+
+> [!div class="nextstepaction"]
+> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Update Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-update-experience.md
Previously updated : 02/18/2021 Last updated : 03/24/2021
-# The Azure Percept DK update experience
+# Azure Percept DK update experience overview
-With Azure Percept DK, you have two options to update your dev kit OS and firmware: over-the-air (OTA) or via USB. OTA updating is an easy way keep devices up-to-date, while USB updating is a good option for when OTA is not possible or when you want to factory reset your device. To ensure you are able to take advantage of whichever update method is best for you, we have put together a collection of how-to guides to assist you.
+With Azure Percept DK, you may update your dev kit OS and firmware over-the-air (OTA) or via USB. OTA updating is an easy way to keep devices up-to-date through the [Device Update for IoT Hub](https://docs.microsoft.com/azure/iot-hub-device-update/) service. USB updates are available for users who are unable to use OTA updates or when a factory reset of the device is needed. Check out the following how-to guides to get started with Azure Percept DK device updates:
-- [How to set up Azure IoT Hub to to deploy over the air updates to your Azure Percept DK](./how-to-set-up-over-the-air-updates.md)-- [How to update your Azure Percept DK over the air](./how-to-update-over-the-air.md)
+- [Set up Azure IoT Hub to deploy over-the-air (OTA) updates to your Azure Percept DK](./how-to-set-up-over-the-air-updates.md)
+- [Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md)
- [Update your Azure Percept DK over USB](./how-to-update-via-usb.md)

## Next steps
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/capture-browser-trace.md
Title: Capture a browser trace for troubleshooting description: Capture network information from a browser trace to help troubleshoot issues with the Azure portal. Previously updated : 05/11/2020 Last updated : 03/25/2021
The following steps show how to use the developer tools in Firefox. For more inf
![Screenshot of browser trace results](media/capture-browser-trace/firefox-browser-trace-results.png)
-1. After you have reproduced the unexpected portal behavior, select **HAR Export/Import** then **Save All As HAR**.
+1. After you have reproduced the unexpected portal behavior, select **Save All As HAR**.
![Screenshot of "Export HAR"](media/capture-browser-trace/firefox-network-export-har.png) 1. Stop Steps Recorder on Windows or the screen recording on Mac, and save the recording.
-1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Export Visible Message To**, and save the console output to a text file.
+1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Export Visible Messages To**, and save the console output to a text file.
![Screenshot of console output](media/capture-browser-trace/firefox-console-select.png)
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-powershell.md
Title: Create an Azure portal dashboard with PowerShell
description: Learn how to create a dashboard in the Azure portal using Azure PowerShell. Previously updated : 07/24/2020 Last updated : 03/25/2021 # Quickstart: Create an Azure portal dashboard with PowerShell
azure-portal Recover Shared Deleted Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/recover-shared-deleted-dashboard.md
Title: Recover a deleted dashboard in the Azure portal description: If you delete a published dashboard in the Azure portal, you can recover the dashboard. Previously updated : 01/21/2020 Last updated : 03/25/2021
azure-resource-manager Control Plane And Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/control-plane-and-data-plane.md
The control plane includes two scenarios for handling requests - "green field" a
## Data plane
-Requests for data plane operations are sent to an endpoint that is specific to your instance. For example, the [Detect Language operation](/rest/api/cognitiveservices/textanalytics/detect%20language/detect%20language) in Cognitive Services is a data plane operation because the request URL is:
+Requests for data plane operations are sent to an endpoint that is specific to your instance. For example, the [Detect Language operation](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection) in Cognitive Services is a data plane operation because the request URL is:
```http POST {Endpoint}/text/analytics/v2.0/languages
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 09/15/2020 Last updated : 03/23/2021
Both the source group and the target group are locked during the move operation.
Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource.
+## Changed resource ID
+
+When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path.
+
+If you use the resource ID anywhere, you'll need to change that value. For example, if you have a [custom dashboard](../../azure-portal/quickstart-portal-dashboard-azure-cli.md) in the portal that references a resource ID, you'll need to update that value. Look for any scripts or templates that need to be updated for the new resource ID.
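The path change described above can be sketched as follows (the subscription, group, and resource names are hypothetical): moving a resource to a new resource group rewrites the `resourceGroups` segment of its ID, so any stored ID must be updated.

```python
# Standard resource ID format:
# /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{ns}/{type}/{name}
old_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
          "/resourceGroups/oldGroup"
          "/providers/Microsoft.Storage/storageAccounts/mystorage")

parts = old_id.split("/")
parts[4] = "newGroup"  # segment index 4 holds the resource group name
new_id = "/".join(parts)
```

A move across subscriptions would similarly change the segment after `subscriptions` (index 2), which is why dashboards, scripts, and templates that reference the old ID break until updated.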
+ ## Checklist before moving resources There are some important steps to do before moving a resource. By verifying these conditions, you can avoid errors.
There are some important steps to do before moving a resource. By verifying thes
* [Virtual Machines move guidance](./move-limitations/virtual-machines-move-limitations.md) * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
-1. If you move a resource that has an Azure role assigned directly to the resource (or a child resource), the role assignment is not moved and becomes orphaned. After the move, you must re-create the role assignment. Eventually, the orphaned role assignment will be automatically removed, but it is a best practice to remove the role assignment before moving the resource.
+1. If you move a resource that has an Azure role assigned directly to the resource (or a child resource), the role assignment isn't moved and becomes orphaned. After the move, you must re-create the role assignment. Eventually, the orphaned role assignment is automatically removed, but we recommend removing the role assignment before the move.
For information about how to manage role assignments, see [List Azure role assignments](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope) and [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md).
Moving a resource is a complex operation that has different phases. It can invol
**Question: Why is my resource group locked for four hours during resource move?**
-A move request is allowed a maximum of four hours to complete. To prevent modifications on the resources being moved, both the source and destination resource groups are locked for the duration of the resource move.
+A move request is allowed a maximum of four hours to complete. To prevent modifications on the resources being moved, both the source and destination resource groups are locked during the resource move.
There are two phases in a move request. In the first phase, the resource is moved. In the second phase, notifications are sent to other resource providers that are dependent on the resource being moved. A resource group can be locked for the entire four hours when a resource provider fails either phase. During the allowed time, Resource Manager retries the failed step.
azure-resource-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
Alternatively, you can enable and on-board data to Azure Sentinel or a third-par
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
You can also enable a Just-In-Time access by using Azure Active Directory (Azure
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
You can also enable a Just-In-Time access by using Azure Active Directory (Azure
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
You can also enable a Just-In-Time access by using Azure Active Directory (Azure
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
You can streamline this process by creating diagnostic settings for Azure AD use
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Authorization**:
You can streamline this process by creating diagnostic settings for Azure AD use
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Resources**:
azure-resource-manager Bicep Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-install.md
You can deploy Bicep files by using Azure CLI or Azure PowerShell. For Azure CLI
- [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos) > [!NOTE]
-> Currently, both Azure CLI and Azure PowerShell can only deploy local Bicep files. For more information about deploying Bicep files by using Azure CLI, see [Deploy - CLI](/deploy-cli.md#deploy-remote-template). For more information about deploying Bicep files by using Azure PowerShell, see [Deploy - PowerShell](/deploy-powershell.md#deploy-remote-template).
+> Currently, both Azure CLI and Azure PowerShell can only deploy local Bicep files. For more information about deploying Bicep files by using Azure CLI, see [Deploy - CLI](./deploy-cli.md#deploy-remote-template). For more information about deploying Bicep files by using Azure PowerShell, see [Deploy - PowerShell](./deploy-powershell.md#deploy-remote-template).
After the supported version of Azure PowerShell or Azure CLI is installed, you can deploy a Bicep file with:
azure-resource-manager Bicep Tutorial Add Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-modules.md
Title: Tutorial - add modules to Azure Resource Manager Bicep file description: Use modules to encapsulate complex details of the raw resource declaration. Previously updated : 03/10/2021 Last updated : 03/25/2021
Congratulations, you've finished this introduction to deploying Bicep files to A
The next tutorial series goes into more detail about deploying templates. > [!div class="nextstepaction"]
-> [Add modules](./bicep-tutorial-add-modules.md)
+> [Deploy a local template](./deployment-tutorial-local-template.md)
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/common-deployment-errors.md
If you're looking for information about an error code and that information isn't
| - | - | - | | AccountNameInvalid | Follow naming restrictions for storage accounts. | [Resolve storage account name](error-storage-account-name.md) | | AccountPropertyCannotBeSet | Check available storage account properties. | [storageAccounts](/azure/templates/microsoft.storage/storageaccounts) |
-| AllocationFailed | The cluster or region doesn't have resources available or can't support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](../../virtual-machines/troubleshooting/troubleshoot-deployment-new-vm-linux.md), [Provisioning and allocation issues for Windows](../../virtual-machines/troubleshooting/troubleshoot-deployment-new-vm-windows.md) and [Troubleshoot allocation failures](../../virtual-machines/troubleshooting/allocation-failure.md)|
+| AllocationFailed | The cluster or region doesn't have resources available or can't support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-linux), [Provisioning and allocation issues for Windows](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-windows) and [Troubleshoot allocation failures](/troubleshoot/azure/virtual-machines/allocation-failure)|
| AnotherOperationInProgress | Wait for concurrent operation to complete. | | | AuthorizationFailed | Your account or service principal doesn't have sufficient access to complete the deployment. Check the role your account belongs to, and its access for the deployment scope.<br><br>You might receive this error when a required resource provider isn't registered. | [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)<br><br>[Resolve registration](error-register-resource-provider.md) | | BadRequest | You sent deployment values that don't match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. | [Template reference](/azure/templates/) and [Supported locations](resource-location.md) |
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
Title: Deploy resources with Azure CLI and template description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Resource Manager template or a Bicep file. Previously updated : 03/04/2021 Last updated : 03/25/2021 # Deploy resources with ARM templates and Azure CLI
The deployment can take a few minutes to complete. When it finishes, you see a m
Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization. + If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parenthesis. It can be up to 90 characters. The name can't end in a period. ```azurecli-interactive
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
Title: Deploy resources with PowerShell and template description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Resource Manager template or a Bicep file. Previously updated : 03/04/2021 Last updated : 03/25/2021 # Deploy resources with ARM templates and Azure PowerShell
The deployment can take several minutes to complete.
Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization. + If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parenthesis. It can be up to 90 characters. The name can't end in a period. ```azurepowershell
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
Title: Deploy to Azure button description: Use button to deploy Azure Resource Manager templates from a GitHub repository. Previously updated : 11/10/2020 Last updated : 03/25/2021 # Use a deployment button to deploy templates from GitHub repository
https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.github
You have your full URL for the link.
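Building the link amounts to URL-encoding the raw template URL and appending it to the portal's create path. A minimal sketch (the raw GitHub URL below is a hypothetical example):

```python
from urllib.parse import quote

# Hypothetical raw URL of a template hosted in a public GitHub repo.
raw_url = ("https://raw.githubusercontent.com/contoso/templates"
           "/main/azuredeploy.json")

# safe="" forces ':' and '/' to be percent-encoded too,
# producing the https%3A%2F%2F... form the portal expects.
link = "https://portal.azure.com/#create/Microsoft.Template/uri/" + quote(raw_url, safe="")
```

Note that `quote` with the default `safe="/"` would leave slashes unencoded, so passing `safe=""` is what matters here.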
-Typically, you host the template in a public repo. If you use a private repo, you must include a token to access the raw contents of the template. The token generated by GitHub is valid for only a short time. You would need to update the link often.
If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/linked-templates.md
Title: Link templates for deployment description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs. Previously updated : 01/26/2021 Last updated : 03/25/2021 # Using linked and nested templates when deploying Azure resources
If you're linking to a template in GitHub, use the raw URL. The link has the for
:::image type="content" source="./media/linked-templates/select-raw.png" alt-text="Select raw URL"::: + ### Parameters for linked template You can provide the parameters for your linked template either in an external file or inline. When providing an external parameter file, use the `parametersLink` property:
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of list* are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/databaseaccounts/listconnectionstrings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/databaseaccounts/listkeys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2020-06-01/notebookworkspaces/listconnectioninfo) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listconnectionstrings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listkeys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/notebookworkspaces/listconnectioninfo) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2020-06-01/domains/listsharedaccesskeys) |
The possible uses of list* are shown in the following table.
| Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules | [listkeys](/rest/api/notificationhubs/notificationhubs/listkeys) | | Microsoft.OperationalInsights/workspaces | [list](/rest/api/loganalytics/workspaces/list) | | Microsoft.OperationalInsights/workspaces | listKeys |
-| Microsoft.PolicyInsights/remediations | [listDeployments](/rest/api/policy-insights/remediations/listdeploymentsatresourcegroup) |
+| Microsoft.PolicyInsights/remediations | [listDeployments](/rest/api/policy/remediations/listdeploymentsatresourcegroup) |
| Microsoft.RedHatOpenShift/openShiftClusters | [listCredentials](/rest/api/openshift/openshiftclusters/listcredentials) | | Microsoft.Relay/namespaces/authorizationRules | [listkeys](/rest/api/relay/namespaces/listkeys) | | Microsoft.Relay/namespaces/disasterRecoveryConfigs/authorizationRules | listkeys |
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-string.md
param testArray array = [
  'two'
  'three'
]
-param elementsToSkip int = 2
+param elementsToTake int = 2
param testString string = 'one two three'
-param charactersToSkip int = 2
+param charactersToTake int = 2
-output arrayOutput array = take(testArray, elementsToSkip)
-output stringOutput string = take(testString, charactersToSkip)
+output arrayOutput array = take(testArray, elementsToTake)
+output stringOutput string = take(testString, charactersToTake)
```
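For context on the rename above: Bicep's `take()` returns the first *n* elements of an array or the first *n* characters of a string, so the corrected parameter names describe what is kept rather than skipped. A rough Python analogue (illustrative only, not Bicep):

```python
def take(value, n):
    # Mirrors Bicep's take(): keep the first n elements of a list
    # or the first n characters of a string.
    return value[:n]

# With the corrected names, elementsToTake = 2 and charactersToTake = 2:
print(take(['one', 'two', 'three'], 2))  # → ['one', 'two']
print(take('one two three', 2))          # → 'on'
```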
azure-signalr Signalr Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-concept-disaster-recovery.md
Below is a diagram that illustrates this topology:
![Diagram shows two regions each with an app server and a SignalR service, where each server is associated with the SignalR service in its region as primary and with the service in the other region as secondary.](media/signalr-concept-disaster-recovery/topology.png)
-## Configure app servers with multiple SignalR service instances
+## Configure multiple SignalR service instances
-Once you have SignalR service and app servers created in each region, you can configure your app servers to connect to all SignalR service instances.
+Multiple SignalR service instances are supported on both app servers and Azure Functions.
+Once you have SignalR service and app servers/Azure Functions created in each region, you can configure your app servers/Azure Functions to connect to all SignalR service instances.
+
+### Configure on app servers
There are two ways you can do it:
-### Through config
+#### Through config
You should already know how to set the SignalR service connection string through environment variables/app settings/web.config, in a config entry named `Azure:SignalR:ConnectionString`. If you have multiple endpoints, you can set them in multiple config entries, each in the following format:
`Azure:SignalR:ConnectionString:<name>:<role>`
Here `<name>` is the name of the endpoint and `<role>` is its role (primary or secondary). The name is optional, but is useful if you want to further customize the routing behavior among multiple endpoints.
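As an illustration of this format (the endpoint names `eastus` and `westus` are made up for the example, and the connection-string values are placeholders), two such app-setting entries might look like:

```
Azure:SignalR:ConnectionString:eastus:primary = <connection-string-of-east-instance>
Azure:SignalR:ConnectionString:westus:secondary = <connection-string-of-west-instance>
```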
-### Through code
+#### Through code
If you prefer to store the connection strings somewhere else, you can also read them in your code and use them as parameters when calling `AddAzureSignalR()` (in ASP.NET Core) or `MapAzureSignalR()` (in ASP.NET).
You can configure multiple primary or secondary instances. If there are multiple
1. If there is at least one primary instance online, return a random primary online instance.
2. If all primary instances are down, return a random secondary online instance.
+### Configure on Azure Functions
+See [this article](https://github.com/Azure/azure-functions-signalrservice-extension/blob/dev/docs/sharding.md#configuration-method).
+
## Failover sequence and best practice

Now you have the right system topology set up. Whenever one SignalR service instance is down, online traffic will be routed to other instances.
You'll need to handle such cases at client side to make it transparent to your e
In this article, you have learned how to configure your application to achieve resiliency for SignalR service. To understand more details about server/client connection and connection routing in SignalR service, you can read [this article](signalr-concept-internals.md) for SignalR service internals. For scaling scenarios such as sharding, which use multiple instances together to handle a large number of connections, read [how to scale multiple instances](signalr-howto-scale-multi-instances.md).
+
+For details on how to configure Azure Functions with multiple SignalR service instances, read [multiple Azure SignalR Service instances support in Azure Functions](https://github.com/Azure/azure-functions-signalrservice-extension/blob/dev/docs/sharding.md).
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/advance-notifications.md
The following table shows additional notifications that may be sent while mainte
- [Maintenance window](maintenance-window.md)
- [Maintenance window FAQ](maintenance-window-faq.yml)
-- [Overview of alerts in Microsoft Azure](../../azure-monitor/platform/alerts-overview.md)
-- [Email Azure Resource Manager Role](../../azure-monitor/platform/action-groups.md#email-azure-resource-manager-role)
+- [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md)
+- [Email Azure Resource Manager Role](../../azure-monitor/alerts/action-groups.md#email-azure-resource-manager-role)
azure-sql Authentication Aad Directory Readers Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-directory-readers-role.md
Last updated 08/14/2020
Azure Active Directory (Azure AD) has introduced [using cloud groups to manage role assignments in Azure Active Directory (preview)](../../active-directory/roles/groups-concept.md). This allows for Azure AD roles to be assigned to groups.
-When enabling a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) for Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse Analytics, the Azure AD [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) role must be assigned to the identity to allow read access to the [Azure AD Graph API](../../active-directory/develop/active-directory-graph-api.md). The managed identity of SQL Database and Azure Synapse is referred to as the server identity. The managed identity of SQL Managed Instance is referred to as the managed instance identity, and is automatically assigned when the instance is created. For more information on assigning a server identity to SQL Database or Azure Synapse, see [Enable service principals to create Azure AD users](authentication-aad-service-principal.md#enable-service-principals-to-create-azure-ad-users).
+When enabling a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) for Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse Analytics, the Azure AD [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) role must be assigned to the identity to allow read access to the [Azure AD Graph API](/graph/migrate-azure-ad-graph-planning-checklist). The managed identity of SQL Database and Azure Synapse is referred to as the server identity. The managed identity of SQL Managed Instance is referred to as the managed instance identity, and is automatically assigned when the instance is created. For more information on assigning a server identity to SQL Database or Azure Synapse, see [Enable service principals to create Azure AD users](authentication-aad-service-principal.md#enable-service-principals-to-create-azure-ad-users).
The **Directory Readers** role is necessary to:
azure-sql Automatic Tuning Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-enable.md
In case of error messages that automated recommendation management has been disa
- Query Store stopped running because it used the allocated storage space. The following steps can be considered to rectify this issue:
-- Clean up the Query Store, or modify the data retention period to "auto" by using T-SQL. See how to [configure recommended retention and capture policy for Query Store](/azure/azure-sql/database/query-performance-insight-use#recommended-retention-and-capture-policy).
+- Clean up the Query Store, or modify the data retention period to "auto" by using T-SQL. See how to [configure recommended retention and capture policy for Query Store](./query-performance-insight-use.md#recommended-retention-and-capture-policy).
- Use SQL Server Management Studio (SSMS) and follow these steps:
  - Connect to the Azure SQL Database
  - Right click on the database
To receive automated email notifications on recommendations made by the automati
- Read the [Automatic tuning article](automatic-tuning-overview.md) to learn more about automatic tuning and how it can help you improve your performance.
- See [Performance recommendations](database-advisor-implement-performance-recommendations.md) for an overview of Azure SQL Database performance recommendations.
-- See [Query Performance Insights](query-performance-insight-use.md) to learn about viewing the performance impact of your top queries.
+- See [Query Performance Insights](query-performance-insight-use.md) to learn about viewing the performance impact of your top queries.
azure-sql Database Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-export.md
$exportStatus
```

## Cancel the export request
-Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity), here an example of powershell command.
+Use the [Database Operations - Cancel API](/rest/api/sql/databaseoperations/cancel)
+or the PowerShell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity). Here is an example of the PowerShell command:
```cmd
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
```
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $Se
- To learn about exporting a BACPAC from a SQL Server database, see [Export a Data-tier Application](/sql/relational-databases/data-tier-applications/export-a-data-tier-application)
- To learn about using the Data Migration Service to migrate a database, see [Migrate from SQL Server to Azure SQL Database offline using DMS](../../dms/tutorial-sql-server-to-azure-sql.md).
- If you are exporting from SQL Server as a prelude to migration to Azure SQL Database, see [Migrate a SQL Server database to Azure SQL Database](migrate-to-database-from-sql-server.md).
-- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).
+- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
az sql db import --resource-group "<resourceGroup>" --server "<server>" --name "
## Cancel the import request
-Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity), here an example of powershell command.
+Use the [Database Operations - Cancel API](/rest/api/sql/databaseoperations/cancel)
+or the PowerShell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity). Here is an example of the PowerShell command:
```cmd
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
```
You can also use these wizards.
- To learn how to connect to and query a database in Azure SQL Database, see [Quickstart: Azure SQL Database: Use SQL Server Management Studio to connect to and query data](connect-query-ssms.md).
- For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see [Migrating from SQL Server to Azure SQL Database using BACPAC Files](https://techcommunity.microsoft.com/t5/DataCAT/Migrating-from-SQL-Server-to-Azure-SQL-Database-using-Bacpac/ba-p/305407).
- For a discussion of the entire SQL Server database migration process, including performance recommendations, see [SQL Server database migration to Azure SQL Database](migrate-to-database-from-sql-server.md).
-- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).
+- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Once the maintenance window selection is made and service configuration complete
Configuring and using the maintenance window is free of charge for all eligible [offer types](https://azure.microsoft.com/support/legal/offer-details/): Pay-As-You-Go, Cloud Solution Provider (CSP), Microsoft Enterprise Agreement, or Microsoft Customer Agreement.

> [!Note]
-> An Azure offer is the type of the Azure subscription you have. For example, a subscription with [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer or plan has different terms and benefits. Your offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different offer, see [Change your Azure subscription to a different offer](/azure/cost-management-billing/manage/switch-azure-offer).
+> An Azure offer is the type of the Azure subscription you have. For example, a subscription with [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer or plan has different terms and benefits. Your offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different offer, see [Change your Azure subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
## Advance notifications
For more on the client connection policy in Azure SQL managed instance see [Azur
## Considerations for Azure SQL managed instance
-Azure SQL managed instance consists of service components hosted on a dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These virtual machines form [virtual cluster(s)](/azure/azure-sql/managed-instance/connectivity-architecture-overview#high-level-connectivity-architecture) that can host multiple managed instances. Maintenance window configured on instances of one subnet can influence the number of virtual clusters within the subnet and distribution of instances among virtual clusters. This may require a consideration of few effects.
+Azure SQL managed instance consists of service components hosted on a dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These virtual machines form [virtual cluster(s)](../managed-instance/connectivity-architecture-overview.md#high-level-connectivity-architecture) that can host multiple managed instances. A maintenance window configured on instances of one subnet can influence the number of virtual clusters within the subnet and the distribution of instances among virtual clusters. This may require consideration of a few effects.
### Maintenance window configuration is a long-running operation

All instances hosted in a virtual cluster share the maintenance window. By default, all managed instances are hosted in the virtual cluster with the default maintenance window. Specifying another maintenance window for a managed instance during its creation or afterwards means that it must be placed in a virtual cluster with the corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance. Accommodating an additional instance in an existing virtual cluster may require a cluster resize. Both operations contribute to the duration of configuring the maintenance window for a managed instance.
-Expected duration of configuring maintenance window on managed instance can be calculated using [estimated duration of instance management operations](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+Expected duration of configuring maintenance window on managed instance can be calculated using [estimated duration of instance management operations](../managed-instance/management-operations-overview.md#duration).
> [!Important]
> A short reconfiguration happens at the end of the maintenance operation and typically lasts up to 8 seconds, even in the case of interrupted long-running transactions. To minimize the impact of the reconfiguration, you should schedule the operation outside of peak hours.

### IP address space requirements
-Each new virtual cluster in subnet requires additional IP addresses according to the [virtual cluster IP address allocation](/azure/azure-sql/managed-instance/vnet-subnet-determine-size#determine-subnet-size). Changing maintenance window for existing managed instance also requires [temporary additional IP capacity](/azure/azure-sql/managed-instance/vnet-subnet-determine-size#address-requirements-for-update-scenarios) as in scaling vCores scenario for corresponding service tier.
+Each new virtual cluster in subnet requires additional IP addresses according to the [virtual cluster IP address allocation](../managed-instance/vnet-subnet-determine-size.md#determine-subnet-size). Changing maintenance window for existing managed instance also requires [temporary additional IP capacity](../managed-instance/vnet-subnet-determine-size.md#address-requirements-for-update-scenarios) as in scaling vCores scenario for corresponding service tier.
### IP address change

Configuring and changing the maintenance window causes a change of the IP address of the instance, within the IP address range of the subnet.
Configuring and changing maintenance window causes change of the IP address of t
* [Maintenance window FAQ](maintenance-window-faq.yml)
* [Azure SQL Database](sql-database-paas-overview.md)
* [SQL managed instance](../managed-instance/sql-managed-instance-paas-overview.md)
-* [Plan for Azure maintenance events in Azure SQL Database and Azure SQL managed instance](planned-maintenance.md)
-----
+* [Plan for Azure maintenance events in Azure SQL Database and Azure SQL managed instance](planned-maintenance.md)
azure-sql Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/private-endpoint-overview.md
Clients can connect to the Private endpoint from the same virtual network, peere
![Diagram of connectivity options][1]
-In addition, services that are not running directly in the virtual network but are integrated with it (for example, App Service web apps or Functions) can also achieve private connectivity to the database. For more information on this specific use case, see the [Web app with private connectivity to Azure SQL database](https://docs.microsoft.com/azure/architecture/example-scenario/private-web-app/private-web-app) architecture scenario.
+In addition, services that are not running directly in the virtual network but are integrated with it (for example, App Service web apps or Functions) can also achieve private connectivity to the database. For more information on this specific use case, see the [Web app with private connectivity to Azure SQL database](/azure/architecture/example-scenario/private-web-app/private-web-app) architecture scenario.
## Test connectivity to SQL Database from an Azure VM in same virtual network
PolyBase and the COPY statement are commonly used to load data into Azure Synapse
- For an overview of Azure SQL Database security, see [Securing your database](security-overview.md)
- For an overview of Azure SQL Database connectivity, see [Azure SQL Connectivity Architecture](connectivity-architecture.md)
-- You may also be interested in the [Web app with private connectivity to Azure SQL database](https://docs.microsoft.com/azure/architecture/example-scenario/private-web-app/private-web-app) architecture scenario, which connects a web application outside of the virtual network to the private endpoint of a database.
+- You may also be interested in the [Web app with private connectivity to Azure SQL database](/azure/architecture/example-scenario/private-web-app/private-web-app) architecture scenario, which connects a web application outside of the virtual network to the private endpoint of a database.
<!--Image references-->
[1]: media/quickstart-create-single-database/pe-connect-overview.png
azure-sql Resource Health To Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-health-to-troubleshoot-connectivity.md
Previously updated : 02/26/2019
Last updated : 03/24/2021

# Use Resource Health to troubleshoot connectivity for Azure SQL Database and Azure SQL Managed Instance

[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
Reconfigurations are considered transient conditions and are expected from time
- Learn more about [retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
- [Troubleshoot, diagnose, and prevent SQL connection errors](troubleshoot-common-connectivity-issues.md).
- Learn more about [configuring Resource Health alerts](../../service-health/resource-health-alert-arm-template-guide.md).
-- Get an overview of [Resource Health](../../application-gateway/resource-health-overview.md).
+- Get an overview of [Resource Health](../../service-health/resource-health-overview.md).
- Review [Resource Health FAQ](../../service-health/resource-health-faq.md).
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
Previously updated : 02/02/2021
Last updated : 03/25/2021

# Resource limits for Azure SQL Database and Azure Synapse Analytics servers

[!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
-This article provides an overview of the resource limits for the logical server used by Azure SQL Database and Azure Synapse Analytics. It provides information on what happens when those resource limits are hit or exceeded and describes the resource governance mechanisms used to enforce these limits.
+This article provides an overview of the resource limits for the [logical server](logical-servers.md) used by Azure SQL Database and Azure Synapse Analytics. It provides information on what happens when those resource limits are hit or exceeded and describes the resource governance mechanisms used to enforce these limits.
> [!NOTE]
-> For Azure SQL Managed Instance limits, see [SQL Database resource limits for managed instances](../managed-instance/resource-limits.md).
+> For Azure SQL Managed Instance limits, see [resource limits for managed instances](../managed-instance/resource-limits.md).
## Maximum resource limits

| Resource | Limit |
| :--- | :--- |
-| Databases per server | 5000 |
-| Default number of servers per subscription in any region | 20 |
-| Max number of servers per subscription in any region | 200 |
-| DTU / eDTU quota per server | 54,000 |
-| vCore quota per server/instance | 540 |
-| Max pools per server | Limited by number of DTUs or vCores. For example, if each pool is 1000 DTUs, then a server can support 54 pools.|
+| Databases per logical server | 5000 |
+| Default number of logical servers per subscription in a region | 20 |
+| Max number of logical servers per subscription in a region | 200 |
+| DTU / eDTU quota per logical server | 54,000 |
+| vCore quota per logical server | 540 |
+| Max pools per logical server | Limited by number of DTUs or vCores. For example, if each pool is 1000 DTUs, then a server can support 54 pools.|
|||

> [!IMPORTANT]
-> As the number of databases approaches the limit per server, the following can occur:
+> As the number of databases approaches the limit per logical server, the following can occur:
>
-> - Increasing latency in running queries against the master database. This includes views of resource utilization statistics such as sys.resource_stats.
+> - Increasing latency in running queries against the master database. This includes views of resource utilization statistics such as `sys.resource_stats`.
> - Increasing latency in management operations and rendering portal viewpoints that involve enumerating databases in the server.

> [!NOTE]
-> To obtain more DTU/eDTU quota, vCore quota, or more servers than the default amount, submit a new support request in the Azure portal. For more information, see [Request quota increases for Azure SQL Database](quota-increase-request.md).
+> To obtain more DTU/eDTU quota, vCore quota, or more logical servers than the default amount, submit a new support request in the Azure portal. For more information, see [Request quota increases for Azure SQL Database](quota-increase-request.md).
### Storage size
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|2800|3200|3600|4000|6400|9600|
-|Max log rate per pool (MBps)|42|48|48|48|48|48|
+|Max log rate per pool (MBps)|42|48|54|60|62.5|62.5|
|Max concurrent workers per pool (requests) <sup>3</sup>|1470|1680|1890|2100|3360|5040|
|Max concurrent logins per pool (requests) <sup>3</sup>|1470|1680|1890|2100|3360|5040|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|800|1600|2400|3200|4000|4800|5600|
-|Max log rate per pool (MBps)|12|24|36|48|48|48|48|
+|Max log rate per pool (MBps)|12|24|36|48|60|62.5|62.5|
|Max concurrent workers per pool (requests) <sup>3</sup>|210|420|630|840|1050|1260|1470|
|Max concurrent logins per pool (requests) <sup>3</sup>|210|420|630|840|1050|1260|1470|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup> |6,400|7,200|8,000|9,600|12,800|16,000|16,000|
-|Max log rate per pool (MBps)|48|48|48|48|48|48|48|
+|Max log rate per pool (MBps)|62.5|62.5|62.5|62.5|62.5|62.5|62.5|
|Max concurrent workers per pool (requests) <sup>3</sup>|1680|1890|2100|2520|3360|4200|8400|
|Max concurrent logins per pool (requests) <sup>3</sup>|1680|1890|2100|2520|3360|4200|8400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|2560|3200|3840|4480|5120|
-|Max log rate per pool (MBps)|48|48|48|48|48|
+|Max log rate per pool (MBps)|48|60|62.5|62.5|62.5|
|Max concurrent workers per pool (requests) <sup>3</sup>|400|500|600|700|800|
|Max concurrent logins per pool (requests) <sup>3</sup>|800|1000|1200|1400|1600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
You can set the service tier, compute size (service objective), and storage amou
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|
|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
-
<sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.

<sup>2</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|5760|6400|7680|10240|11520|12800|
-|Max log rate per pool (MBps)|48|48|48|48|48|48|
+|Max log rate per pool (MBps)|62.5|62.5|62.5|62.5|62.5|62.5|
|Max concurrent workers per pool (requests) <sup>3</sup>|900|1000|1200|1600|1800|3600|
|Max concurrent logins per pool (requests) <sup>3</sup>|1800|2000|2400|3200|3600|7200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|
You can set the service tier, compute size (service objective), and storage amou
<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are less than 1 vCore, the number of max concurrent workers is similarly rescaled.
-
## General purpose - provisioned compute - DC-series

|Compute size (service objective)|GP_DC_2|GP_DC_4|GP_DC_6|GP_DC_8|
You can set the service tier, compute size (service objective), and storage amou
|Storage type|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS per pool <sup>2</sup>|800|1600|2400|3200|
-|Max log rate per pool (MBps)|9.4|18.8|28.1|32.8|
+|Max log rate per pool (MBps)|12|24|36|48|
|Max concurrent workers per pool (requests) <sup>3</sup>|168|336|504|672|
|Max concurrent logins per pool (requests) <sup>3</sup>|168|336|504|672|
|Max concurrent sessions|30,000|30,000|30,000|30,000|
If all vCores of an elastic pool are busy, then each database in the pool receiv
|Compute generation|M-series|M-series|M-series|M-series|M-series|
|vCores|20|24|32|64|128|
|Memory (GB)|588.6|706.3|941.8|1883.5|3767.0|
-|Max number DBs per pool <sup>1</sup>|100|100|100|100|100|100|
+|Max number DBs per pool <sup>1</sup>|100|100|100|100|100|
|Columnstore support|Yes|Yes|Yes|Yes|Yes|
|In-memory OLTP storage (GB)|172|216|304|704|1768|
|Max data size (GB)|1280|1536|2048|4096|4096|
|Max log size (GB)|427|512|683|1024|1024|
-|TempDB max data size (GB)|4096|2048|1024|768|640|
+|TempDB max data size (GB)|640|768|1024|2048|4096|
|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
|Max data IOPS per pool <sup>2</sup>|31,248|37,497|49,996|99,993|160,000|
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|3200|3840|4480|5120|
-|Max log rate (MBps)|36|36|36|36|
+|Max log rate (MBps)|45|50|50|50|
|Max concurrent workers (requests)|750|900|1050|1200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|5760|6400|7680|10240|12800|
-|Max log rate (MBps)|36|36|36|36|36|
+|Max log rate (MBps)|50|50|50|50|50|
|Max concurrent workers (requests)|1350|1500|1800|2400|3000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max log size (TB)|Unlimited|Unlimited|Unlimited|Unlimited|
|TempDB max data size (GB)|64|128|192|256|
|Storage type|[Note 1](#notes)|[Note 1](#notes)|[Note 1](#notes)|[Note 1](#notes)|
-|Max local SSD IOPS *|8000 |16000 |24000 |32000 |
+|Max local SSD IOPS *|14000|28000|42000|44800|
|Max log rate (MBps)|100|100|100|100|
|IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
|Max concurrent workers (requests)|160|320|480|640|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|2240|2560|2880|3200|5120|7680|
-|Max log rate (MBps)|31.5|36|36|36|36|36|
+|Max log rate (MBps)|31.5|36|40.5|45|50|50|
|Max concurrent workers (requests)|1400|1600|1800|2000|3200|4800|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|640|1280|1920|2560|3200|3840|4480|
-|Max log rate (MBps)|9|18|27|36|36|36|36|
+|Max log rate (MBps)|9|18|27|36|45|50|50|
|Max concurrent workers (requests)|200|400|600|800|1000|1200|1400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|5120|5760|6400|7680|10240|12800|12800|
-|Max log rate (MBps)|36|36|36|36|36|36|36|
+|Max log rate (MBps)|50|50|50|50|50|50|50|
|Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Number of replicas|1|1|1|1|1|1|1|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|2560|3200|3840|4480|5120|
-|Max log rate (MBps)|36|36|36|36|36|
+|Max log rate (MBps)|36|45|50|50|50|
|Max concurrent workers (requests)|400|500|600|700|800|
|Max concurrent logins|800|1000|1200|1400|1600|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
|Max data IOPS *|5760|6400|7680|10240|11520|12800|
-|Max log rate (MBps)|36|36|36|36|36|36|
+|Max log rate (MBps)|50|50|50|50|50|50|
|Max concurrent workers (requests)|900|1000|1200|1600|1800|3600|
|Max concurrent logins|1800|2000|2400|3200|3600|7200|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|TempDB max data size (GB)|64|128|192|256|
|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
-|Max data IOPS *|14000|28000|42000|56000|
+|Max data IOPS *|14000|28000|42000|44800|
|Max log rate (MBps)|24|48|72|96|
|Max concurrent workers (requests)|200|400|600|800|
|Max concurrent logins|200|400|600|800|
azure-sql Sql Data Sync Sync Data Between Sql Databases Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-sql-databases-rest-api.md
For an overview of SQL Data Sync, see [Sync data across multiple cloud and on-pr
## Create sync group
-Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncgroups/createorupdate) template to create a sync group.
+Use the [create or update](/rest/api/sql/syncgroups/createorupdate) template to create a sync group.
When creating a sync group, do not pass in the sync schema (table\column) and do not pass in masterSyncMemberName, because at this point the sync group does not yet have table\column information.
Status code: 201
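As an illustrative sketch only, the create-or-update call described above amounts to a PUT against the syncGroups ARM endpoint. Every name below is a placeholder, the `api-version` value and the property names are assumptions to be checked against the linked REST reference, and the snippet only builds the request rather than sending it:

```python
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "my-rg"                               # placeholder
SERVER = "my-sql-server"                               # placeholder
DATABASE = "hub-db"                                    # placeholder
SYNC_GROUP = "my-sync-group"                           # placeholder
API_VERSION = "2015-05-01-preview"                     # assumption; check the REST reference

# ARM resource path for the sync group (PUT = create or update).
url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Sql/servers/{SERVER}/databases/{DATABASE}"
    f"/syncGroups/{SYNC_GROUP}?api-version={API_VERSION}"
)

# Per the note above: on first create, send neither the sync schema
# nor masterSyncMemberName, since the group has no table\column info yet.
body = {
    "properties": {
        "interval": -1,  # assumed property: -1 means manual sync only
        "conflictResolutionPolicy": "HubWin",  # assumed property name/value
    }
}

print(url)
```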
## Create sync member
-Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncmembers/createorupdate) template to create a sync member.
+Use the [create or update](/rest/api/sql/syncmembers/createorupdate) template to create a sync member.
Sample request for creating a sync member:
Status code: 201
Once your sync group is created successfully, refresh schema using the following templates.
-Use the [refresh hub schema](https://docs.microsoft.com/rest/api/sql/syncgroups/refreshhubschema) template to refresh the schema for the hub database.
+Use the [refresh hub schema](/rest/api/sql/syncgroups/refreshhubschema) template to refresh the schema for the hub database.
Sample request for refreshing a hub database schema:
Status code: 200
Status code: 202
-Use the [list hub schemas](https://docs.microsoft.com/rest/api/sql/syncgroups/listhubschemas) template to list the hub database schema.
+Use the [list hub schemas](/rest/api/sql/syncgroups/listhubschemas) template to list the hub database schema.
-Use the [refresh member schema](https://docs.microsoft.com/rest/api/sql/syncmembers/refreshmemberschema) template to refresh the member database schema.
+Use the [refresh member schema](/rest/api/sql/syncmembers/refreshmemberschema) template to refresh the member database schema.
-Use the [list member schema](https://docs.microsoft.com/rest/api/sql/syncmembers/listmemberschemas) template to list member database schema.
+Use the [list member schema](/rest/api/sql/syncmembers/listmemberschemas) template to list member database schema.
Only proceed to the next step once your schema refreshes successfully.

## Update sync group
-Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncgroups/createorupdate) template to update your sync group.
+Use the [create or update](/rest/api/sql/syncgroups/createorupdate) template to update your sync group.
Update sync group by specifying the sync schema. Include your schema and masterSyncMemberName, which is the name that holds the schema you want to use.
Sample response for updating sync group:
```

## Update sync member
-Use the [create or update](https://docs.microsoft.com/rest/api/sql/syncmembers/createorupdate) template to update your sync member.
+Use the [create or update](/rest/api/sql/syncmembers/createorupdate) template to update your sync member.
Sample request for updating a sync member:
Status code: 201
## Trigger sync
-Use the [trigger sync](https://docs.microsoft.com/rest/api/sql/syncgroups/triggersync) template to trigger a sync operation.
+Use the [trigger sync](/rest/api/sql/syncgroups/triggersync) template to trigger a sync operation.
Sample request for triggering sync operation:
For more information about SQL Data Sync, see:
For more information about SQL Database, see:
- [SQL Database overview](../sql-database-paas-overview.md)
-- [Database Lifecycle Management](/previous-versions/sql/sql-server-guides/jj907294(v=sql.110))
+- [Database Lifecycle Management](/previous-versions/sql/sql-server-guides/jj907294(v=sql.110))
azure-sql Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-baseline.md
To allow traffic to reach Azure SQL Database, use the SQL service tags to allow
Virtual network rules enable Azure SQL Database to only accept communications that are sent from selected subnets inside a virtual network.

-- [How to set up Private Link for Azure SQL Database](/azure/sql-database/sql-database-private-endpoint-overview#how-to-set-up-private-link-for-azure-sql-database)
+- [How to set up Private Link for Azure SQL Database](./private-endpoint-overview.md#how-to-set-up-private-link-for-azure-sql-database)
-- [How to use virtual network service endpoints and rules for database servers](/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview)
+- [How to use virtual network service endpoints and rules for database servers](./vnet-service-endpoint-rule-overview.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
You may also send NSG flow logs to a Log Analytics workspace and use Traffic Ana
**Guidance**: Enable DDoS Protection Standard on the Virtual Networks associated with your SQL Server instances for protections from distributed denial-of-service attacks. Use Azure Security Center Integrated Threat Intelligence to deny communications with known malicious or unused Internet IP addresses.

-- [How to configure DDoS protection](/azure/virtual-network/manage-ddos-protection)
+- [How to configure DDoS protection](../../ddos-protection/manage-ddos-protection.md)
-- [Understand Azure Security Center Integrated Threat Intelligence](/azure/security-center/security-center-alerts-data-services)
+- [Understand Azure Security Center Integrated Threat Intelligence](../../security-center/azure-defender.md)
**Responsibility**: Customer
You may also send NSG flow logs to a Log Analytics workspace and use Traffic Ana
**Guidance**: Enable Advanced Threat Protection (ATP) for Azure SQL Database. Users receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access and query patterns. Advanced Threat Protection also integrates alerts with Azure Security Center.

-- [Understand and using Advanced Threat Protection for Azure SQL Database](/azure/sql-database/sql-database-threat-detection-overview)
+- [Understand and using Advanced Threat Protection for Azure SQL Database](./threat-detection-overview.md)
**Responsibility**: Customer
You may also send NSG flow logs to a Log Analytics workspace and use Traffic Ana
When using service endpoints for Azure SQL Database, outbound to Azure SQL Database Public IP addresses is required: Network Security Groups (NSGs) must be opened to Azure SQL Database IPs to allow connectivity. You can do this by using NSG service tags for Azure SQL Database.

-- [Understand Service Tags with Service Endpoints for Azure SQL Database](/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview#limitations)
+- [Understand Service Tags with Service Endpoints for Azure SQL Database](./vnet-service-endpoint-rule-overview.md#limitations)
- [Understand and using Service Tags](../../virtual-network/service-tags-overview.md)
Use any of the built-in Azure Policy definitions related to tagging, such as "Re
You may use Azure PowerShell or Azure CLI to look up or perform actions on resources based on their tags.

-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../../azure-resource-manager/management/tag-resources.md)
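The tag-based lookup mentioned above amounts to filtering resources on key/value pairs. A minimal, self-contained sketch of that logic, with plain dicts standing in for real Azure resources (the names and tags are hypothetical):

```python
# Hypothetical inventory; in practice this would come from Azure
# PowerShell or Azure CLI output rather than hard-coded dicts.
resources = [
    {"name": "sql-prod-01", "tags": {"env": "production", "owner": "dba"}},
    {"name": "sql-dev-01", "tags": {"env": "development"}},
    {"name": "vm-prod-02", "tags": {"env": "production"}},
]

def find_by_tag(resources, key, value):
    """Return the names of resources whose tags contain key=value."""
    return [r["name"] for r in resources if r.get("tags", {}).get(key) == value]

print(find_by_tag(resources, "env", "production"))  # ['sql-prod-01', 'vm-prod-02']
```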
**Responsibility**: Customer
You may use Azure PowerShell or Azure CLI to look up or perform actions on resou
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to your Azure SQL Database server instances. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place.

-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
You may use Azure PowerShell or Azure CLI to look up or perform actions on resou
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analytics, a cloud solution that monitors the performance of Azure SQL Databases and Azure SQL Managed Instances at scale and across multiple subscriptions. It can help you collect and visualize Azure SQL Database performance metrics, and it has built-in intelligence for performance troubleshooting.

-- [How to setup auditing for your Azure SQL Database](/azure/sql-database/sql-database-auditing)
+- [How to setup auditing for your Azure SQL Database](./auditing-overview.md)
-- [How to collect platform logs and metrics with Azure Monitor](/azure/sql-database/sql-database-metrics-diag-logging)
+- [How to collect platform logs and metrics with Azure Monitor](./metrics-diagnostic-telemetry-logging-streaming-export-configure.md)
-- [How to stream diagnostics into Azure SQL Analytics](/azure/sql-database/sql-database-metrics-diag-logging#stream-into-azure-sql-analytics)
+- [How to stream diagnostics into Azure SQL Analytics](./metrics-diagnostic-telemetry-logging-streaming-export-configure.md#stream-into-sql-analytics)
**Responsibility**: Customer
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Enable auditing on your Azure SQL Database server instance and choose a storage location for the audit logs (Azure Storage, Log Analytics, or Event Hub).

-- [How to enable auditing for Azure SQL Server](/azure/sql-database/sql-database-auditing)
+- [How to enable auditing for Azure SQL Server](./auditing-overview.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: When storing your Azure SQL Database logs in a Log Analytics workspace, set the log retention period according to your organization's compliance regulations.

-- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Analyze and monitor logs for anomalous behaviors and regularly review results. Use Azure Security Center's Advanced Threat Protection to alert on unusual activity related to your Azure SQL Database instance. Alternatively, configure alerts based on Metric Values or Azure Activity Log entries related to your Azure SQL Database instances.

-- [Understand Advanced Threat Protection and alerting for Azure SQL Server](/azure/sql-database/sql-database-threat-detection-overview)
+- [Understand Advanced Threat Protection and alerting for Azure SQL Server](./threat-detection-overview.md)
- [How to configure custom alerts for Azure SQL Database](alerts-insights-configure-portal.md)
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Use Azure Security Center Advanced Threat Protection for Azure SQL Databases for monitoring and alerting on anomalous activity. Enable Azure Defender for SQL for your SQL Databases. Azure Defender for SQL includes functionality for discovering and classifying sensitive data, surfacing and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your database.

-- [Understand Advanced Threat Protection and alerting for Azure SQL Database](/azure/sql-database/sql-database-threat-detection-overview)
+- [Understand Advanced Threat Protection and alerting for Azure SQL Database](./threat-detection-overview.md)
- [How to enable Azure Defender for SQL for Azure SQL Database](azure-defender-for-sql.md)
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad-hoc queries to discover accounts that are members of administrative groups.

-- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?amp;preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?amp;preserve-view=true&view=azureadps-2.0)
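The same ad-hoc membership query can also be expressed against Microsoft Graph, which the Azure AD PowerShell cmdlets wrap. A hedged sketch that only builds the request URLs (token acquisition and the actual HTTP call are omitted; `role_id` is a placeholder):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def directory_roles_url() -> str:
    """URL listing the directory roles activated in the tenant."""
    return f"{GRAPH}/directoryRoles"

def role_members_url(role_id: str) -> str:
    """URL listing the members of one directory role."""
    return f"{GRAPH}/directoryRoles/{role_id}/members"

print(directory_roles_url())
print(role_members_url("<role-object-id>"))
```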
**Responsibility**: Customer
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
**Guidance**: Azure Active Directory (Azure AD) does not have the concept of default passwords. When provisioning an Azure SQL Database instance, it is recommended that you choose to integrate authentication with Azure AD.

-- [How to configure and manage Azure AD authentication with Azure SQL](/azure/sql-database/azure-sql/database/authentication-aad-configure)
+- [How to configure and manage Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)
**Responsibility**: Customer
In addition, you can stream Azure SQL diagnostics telemetry into Azure SQL Analy
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.

-- [How to identify Azure AD users flagged for risky activity](/azure/active-directory/reports-monitoring/concept-user-at-risk)
+- [How to identify Azure AD users flagged for risky activity](../../active-directory/identity-protection/overview-identity-protection.md)
- [How to monitor users identity and access activity in Azure Security Center](../../security-center/security-center-identity-access.md)

-- [Review Advanced Threat Protection and potential alerts](https://docs.microsoft.com/azure/azure-sql/database/threat-detection-overview#alerts)
+- [Review Advanced Threat Protection and potential alerts](./threat-detection-overview.md#alerts)
**Responsibility**: Customer
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activi
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activi
**Guidance**: Use Azure Active Directory (Azure AD) Identity Protection and risk detections to configure automated responses to detected suspicious actions related to user identities. Additionally, you can ingest data into Azure Sentinel for further investigation.

-- [How to view Azure AD risk sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risk sign-ins](../../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activi
**Guidance**: Use tags to assist in tracking Azure resources that store or process sensitive information.

-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activi
**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Resources should be separated by Vnet/Subnet, tagged appropriately, and secured within an NSG or Azure Firewall. Resources storing or processing sensitive data should be isolated. Use Private Link; deploy Azure SQL Server inside your Vnet and connect privately using Private Endpoints.

-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../../cost-management-billing/manage/create-subscription.md)
-- [How to create Management Groups](/azure/governance/management-groups/create)
+- [How to create Management Groups](../../governance/management-groups/create-management-group-portal.md)
-- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../../azure-resource-manager/management/tag-resources.md)
-- [How to set up Private Link for Azure SQL Database](/azure/sql-database/sql-database-private-endpoint-overview#how-to-set-up-private-link-for-azure-sql-database)
+- [How to set up Private Link for Azure SQL Database](./private-endpoint-overview.md#how-to-set-up-private-link-for-azure-sql-database)
**Responsibility**: Customer
Use Advanced Threat Protection for Azure SQL Database to detect anomalous activi
For the underlying platform which is managed by Microsoft, Microsoft treats all customer content as sensitive and goes to great lengths to guard against customer data loss and exposure. To ensure customer data within Azure remains secure, Microsoft has implemented and maintains a suite of robust data protection controls and capabilities.

-- [How to configure Private Link and NSGs to prevent data exfiltration on your Azure SQL Database instances](/azure/sql-database/sql-database-private-endpoint-overview)
+- [How to configure Private Link and NSGs to prevent data exfiltration on your Azure SQL Database instances](./private-endpoint-overview.md)
- [Understand customer data protection in Azure](../../security/fundamentals/protection-customer-data.md)
**Guidance**: Use the Azure SQL Database data discovery and classification feature. Data discovery and classification provides advanced capabilities built into Azure SQL Database for discovering, classifying, labeling & protecting the sensitive data in your databases.
-- [How to use data discovery and classification for Azure SQL Server](/azure/sql-database/sql-database-data-discovery-and-classification)
+- [How to use data discovery and classification for Azure SQL Server](./data-discovery-and-classification-overview.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Guidance**: Use Azure Active Directory (Azure AD) for authenticating and controlling access to Azure SQL Database instances.
-- [How to integrate Azure SQL Server with Azure AD for authentication](/azure/sql-database/sql-database-aad-authentication)
+- [How to integrate Azure SQL Server with Azure AD for authentication](./authentication-aad-overview.md)
-- [How to control access in Azure SQL Server](/azure/sql-database/sql-database-control-access)
+- [How to control access in Azure SQL Server](./logins-create-manage.md)
**Responsibility**: Customer
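The Azure AD integration above can be sketched with the Azure CLI; the server name, display name, and object ID below are hypothetical:

```azurecli
# Set an Azure AD admin group for a logical SQL server
az sql server ad-admin create \
  --resource-group my-rg \
  --server-name my-sqlserver \
  --display-name "DBA Group" \
  --object-id 00000000-0000-0000-0000-000000000000
```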
**Guidance**: Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL managed instance, and Azure Data Warehouse against the threat of malicious offline activity by encrypting data at rest. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application. By default, TDE is enabled for all newly deployed databases in SQL Database and SQL Managed Instance. The TDE encryption key can be managed by either Microsoft or the customer.
-- [How to manage transparent data encryption and use your own encryption keys](https://docs.microsoft.com/azure/sql-database/transparent-data-encryption-azure-sql?tabs=azure-portal#manage-transparent-data-encryption)
+- [How to manage transparent data encryption and use your own encryption keys](./transparent-data-encryption-tde-overview.md?tabs=azure-portal#manage-transparent-data-encryption)
**Responsibility**: Customer
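As a sketch of the TDE key management described above (placeholder resource names; the key URI and version are hypothetical):

```azurecli
# Check TDE status for a database, then point the server at a customer-managed key
az sql db tde show -g my-rg -s my-sqlserver -d my-db
az sql server tde-key set -g my-rg -s my-sqlserver \
  --server-key-type AzureKeyVault \
  --kid "https://my-kv.vault.azure.net/keys/tde-key/<key-version>"
```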
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Guidance**: Use Azure Monitor with the Azure Activity Log to create alerts for when changes take place to production instances of Azure SQL Database and other critical or related resources.
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
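A hedged Azure CLI sketch of the alerting guidance (the scope, names, and subscription ID are placeholders):

```azurecli
# Alert whenever a SQL server in the resource group is created or modified
az monitor activity-log alert create \
  --name sql-server-change-alert \
  --resource-group my-rg \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg" \
  --condition category=Administrative and operationName=Microsoft.Sql/servers/write
```

Attach an action group to the alert to route notifications to email, webhook, or ITSM targets.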
**Guidance**: Enable Azure Defender for SQL for Azure SQL Database and follow recommendations from Azure Security Center on performing vulnerability assessments on your Azure SQL Servers.
-- [How to run vulnerability assessments on Azure SQL Database](/azure/sql-database/sql-vulnerability-assessment)
+- [How to run vulnerability assessments on Azure SQL Database](./sql-vulnerability-assessment.md)
- [How to enable Azure Defender for SQL](azure-defender-for-sql.md)
-- [How to implement Azure Security Center vulnerability assessment recommendations](/azure/security-center/security-center-vulnerability-assessment-recommendations)
+- [How to implement Azure Security Center vulnerability assessment recommendations](../../security-center/deploy-vulnerability-assessment-vm.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Guidance**: Enable periodic recurring scans for your Azure SQL Database instances; this will configure a vulnerability assessment to automatically run a scan on your database once per week. A scan result summary will be sent to the email address(es) you provide. Compare the results to verify that vulnerabilities have been remediated.
-- [How to export a vulnerability assessment report in Azure Security Center](/azure/sql-database/sql-vulnerability-assessment#implementing-vulnerability-assessment)
+- [How to export a vulnerability assessment report in Azure Security Center](./sql-vulnerability-assessment.md#export-an-assessment-report)
**Responsibility**: Customer
**Guidance**: Use the default risk ratings (Secure Score) provided by Azure Security Center.
-- [Understand Azure Security Center Secure Score](/azure/security-center/security-center-secure-score)
+- [Understand Azure Security Center Secure Score](../../security-center/secure-score-security-controls.md)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
Although classic Azure resources may be discovered via Resource Graph, it is highly recommended to create and use Azure Resource Manager resources going forward.
- [How to create queries with Azure Resource Graph](../../governance/resource-graph/first-query-portal.md)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
- [Understand Azure RBAC](../../role-based-access-control/overview.md)
**Guidance**: Apply tags to Azure resources giving metadata to logically organize them into a taxonomy.
-- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
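As a sketch of the tagging guidance (hypothetical resource names and tag values):

```azurecli
# Apply taxonomy tags to an existing logical SQL server
az resource tag \
  --resource-group my-rg \
  --name my-sqlserver \
  --resource-type "Microsoft.Sql/servers" \
  --tags environment=production dataOwner=dbTeam
```

Note that `az resource tag` replaces the resource's existing tags with the supplied set unless an incremental option is used.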
**Guidance**: Use tagging, management groups, and separate subscriptions, where appropriate, to organize and track assets. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner.
-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../../cost-management-billing/manage/create-subscription.md)
-- [How to create Management Groups](/azure/governance/management-groups/create)
+- [How to create Management Groups](../../governance/management-groups/create-management-group-portal.md)
-- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within your subscription(s)
- [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
**Guidance**: If using custom Azure Policy definitions, use Azure DevOps or Azure Repos to securely store and manage your code.
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?view=azure-devops&preserve-view=true)
**Responsibility**: Customer
**Guidance**: Leverage Azure Security Center to perform baseline scans for your Azure SQL Servers and Databases.
-- [How to remediate recommendations in Azure Security Center](/azure/security-center/security-center-sql-service-recommendations)
+- [How to remediate recommendations in Azure Security Center](../../security-center/security-center-remediate-recommendations.md)
**Responsibility**: Customer
**Guidance**: Use Azure Key Vault to store encryption keys for Azure SQL Database Transparent Data Encryption (TDE).
-- [How to protect sensitive data being stored in Azure SQL Server and store the encryption keys in Azure Key Vault](/azure/sql-database/sql-database-always-encrypted-azure-key-vault)
+- [How to protect sensitive data being stored in Azure SQL Server and store the encryption keys in Azure Key Vault](./always-encrypted-azure-key-vault-configure.md)
**Responsibility**: Customer
Pre-scan any content being uploaded to non-compute Azure resources, such as App Service, Data Lake Storage, Blob Storage, etc.
To meet different compliance requirements, you can select different retention periods for weekly, monthly and/or yearly backups. The storage consumption depends on the selected frequency of backups and the retention period(s).
-- [Understand backups and business continuity with Azure SQL Server](/azure/sql-database/sql-database-business-continuity)
+- [Understand backups and business continuity with Azure SQL Server](./business-continuity-high-availability-disaster-recover-hadr-overview.md)
**Responsibility**: Shared
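The retention guidance above can be sketched with the Azure CLI long-term retention policy command (resource names and periods are placeholders; periods use ISO 8601 durations):

```azurecli
# Keep weekly backups 4 weeks, monthly backups 12 months, and the week-1 backup 5 years
az sql db ltr-policy set \
  --resource-group my-rg \
  --server my-sqlserver \
  --name my-db \
  --weekly-retention P4W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1
```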
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
If using customer-managed keys for Transparent Data Encryption, ensure your keys are being backed up.
-- [Understand backups in Azure SQL Server](https://docs.microsoft.com/azure/sql-database/sql-database-automated-backups?tabs=single-database)
+- [Understand backups in Azure SQL Server](./automated-backups-overview.md?tabs=single-database)
-- [How to backup key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
+- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Guidance**: Ensure ability to periodically perform data restoration of content within Azure Backup. If necessary, test restore content to an isolated VLAN. Test restoration of backed up customer-managed keys.
-- [How to restore key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
+- [How to restore key vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
-- [How to recover Azure SQL Database backups using point-in-time restore](/azure/sql-database/sql-database-recovery-using-backups#point-in-time-restore)
+- [How to recover Azure SQL Database backups using point-in-time restore](./recovery-using-backups.md#point-in-time-restore)
**Responsibility**: Customer
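A sketch of point-in-time restore as described above (placeholder names and timestamp):

```azurecli
# Restore a database to a new database as of a specific UTC point in time
az sql db restore \
  --resource-group my-rg \
  --server my-sqlserver \
  --name my-db \
  --dest-name my-db-restored \
  --time "2021-03-01T13:10:00Z"
```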
**Guidance**: Enable soft delete in Azure Key Vault to protect keys against accidental or malicious deletion.
-- [How to enable soft delete in Key Vault](https://docs.microsoft.com/azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal)
+- [How to enable soft delete in Key Vault](../../storage/blobs/soft-delete-blob-overview.md?tabs=azure-portal)
**Responsibility**: Customer
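A sketch of hardening the vault that holds TDE keys (the vault name is hypothetical; on newer vaults soft delete is enabled by default and the flag may no longer be available in current CLI versions):

```azurecli
# Turn on soft delete and purge protection for the key vault
az keyvault update \
  --name my-kv \
  --resource-group my-rg \
  --enable-soft-delete true \
  --enable-purge-protection true
```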
## Next steps
-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../../security/benchmarks/security-baselines-overview.md)
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Last updated 03/01/2021
This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a cloud service that's enabled for SQL Managed Instance and is based on SQL Server log-shipping technology.
-[Azure Database Migration Service](/azure/dms/tutorial-sql-server-to-managed-instance) and LRS use the same underlying migration technology and the same APIs. By releasing LRS, we're further enabling complex custom migrations and hybrid architecture between on-premises SQL Server and SQL Managed Instance.
+[Azure Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md) and LRS use the same underlying migration technology and the same APIs. By releasing LRS, we're further enabling complex custom migrations and hybrid architecture between on-premises SQL Server and SQL Managed Instance.
## When to use Log Replay Service
After LRS is stopped, either automatically through autocomplete or manually thro
| Operation | Details |
| :-- | :- |
-| **1. Copy database backups from SQL Server to Blob Storage**. | Copy full, differential, and log backups from SQL Server to a Blob Storage container by using [Azcopy](/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br /><br />Use any file names. LRS doesn't require a specific file-naming convention.<br /><br />In migrating several databases, you need a separate folder for each database. |
+| **1. Copy database backups from SQL Server to Blob Storage**. | Copy full, differential, and log backups from SQL Server to a Blob Storage container by using [Azcopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br /><br />Use any file names. LRS doesn't require a specific file-naming convention.<br /><br />In migrating several databases, you need a separate folder for each database. |
| **2. Start LRS in the cloud**. | You can start the service with a choice of cmdlets: PowerShell ([start-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_start cmdlets](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_start)). <br /><br /> Start LRS separately for each database that points to a backup folder on Blob Storage. <br /><br /> After you start the service, it will take backups from the Blob Storage container and start restoring them on SQL Managed Instance.<br /><br /> If you started LRS in continuous mode, after all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder. The service will continuously apply logs based on the log sequence number (LSN) chain until it's stopped. |
| **2.1. Monitor the operation's progress**. | You can monitor progress of the restore operation with a choice of cmdlets: PowerShell ([get-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_show cmdlets](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_show)). |
| **2.2. Stop the operation if needed**. | If you need to stop the migration process, you have a choice of cmdlets: PowerShell ([stop-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_stop](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_stop)). <br /><br /> Stopping the operation will delete the database that you're restoring on SQL Managed Instance. After you stop an operation, you can't resume LRS for a database. You need to restart the migration process from scratch. |
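Steps 2 and 2.1 above can be sketched with the Azure CLI (the instance, database, and storage values are placeholders; the SAS token is elided, and flag spellings may vary by CLI version):

```azurecli
# Start LRS for one database pointing at its backup folder, then check progress
az sql midb log-replay start \
  --resource-group my-rg \
  --managed-instance my-mi \
  --name my-db \
  --storage-uri "https://myaccount.blob.core.windows.net/backups/my-db" \
  --storage-sas "<SAS-token>"
az sql midb log-replay show \
  --resource-group my-rg \
  --managed-instance my-mi \
  --name my-db
```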
Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance.
In migrating databases to a managed instance by using LRS, you can use the following approaches to upload backups to Blob Storage:
- Using SQL Server native [BACKUP TO URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) functionality
-- Using [Azcopy](/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer) to upload backups to a blob container
+- Using [Azcopy](../../storage/common/storage-use-azcopy-v10.md) or [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer) to upload backups to a blob container
- Using Storage Explorer in the Azure portal

### Make backups from SQL Server directly to Blob Storage
After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogrep
## Next steps
- Learn more about [migrating SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
- Learn more about [differences between SQL Server and SQL Managed Instance](transact-sql-tsql-differences-sql-server.md).
-- Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
+- Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-sql Migrate To Instance From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-to-instance-from-sql-server.md
SELECT * FROM sys.table_types WHERE is_memory_optimized=1
SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1
```
-To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/in-memory-oltp-overview)
+To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](../in-memory-oltp-overview.md)
### Create a performance baseline
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group Engineering Team.
- Be sure to check out the [Azure Total Cost of Ownership (TCO) Calculator](https://aka.ms/azure-tco) to help estimate the cost savings you can realize by migrating your workloads to Azure.
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
- For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).

For videos, see:
-- [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+- [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
To publish your schema and migrate your data, follow these steps:
Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
-- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
+- [Getting Started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)
- [SQL Server Integration
## Next steps
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
- To learn more about Azure SQL Database, see:
  - [An overview of Azure SQL Database](../../database/sql-database-paas-overview.md)
- [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
- For video content, see:
- - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+ - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
When using migration options that continuously replicate / sync data changes from the source to the target:
After you verify that the data is the same on both the source and the target, you can cut over from the source to the target environment. It is important to plan the cutover process with business/application teams so that the brief interruption during cutover does not affect business continuity.

> [!IMPORTANT]
-> For details on the specific steps associated with performing a cutover as part of migrations using DMS, see [Performing migration cutover](../../../dms/tutorial-sql-server-azure-sql-online.md#perform-migration-cutover).
+> For details on the specific steps associated with performing a cutover as part of migrations using DMS, see [Performing migration cutover](../../../dms/tutorial-sql-server-to-azure-sql.md).
## Migration recommendations
To learn more, see [managing Azure SQL Database after migration](../../database/
- [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
The following table lists the recommended migration tools:
|Technology | Description|
|||
-| [Azure Migrate](/azure/migrate/how-to-create-azure-sql-assessment) | Azure Migrate for Azure SQL allows you to discover and assess your SQL data estate at scale when on VMware, providing Azure SQL deployment recommendations, target sizing, and monthly estimates. |
+| [Azure Migrate](../../../migrate/how-to-create-azure-sql-assessment.md) | Azure Migrate for Azure SQL allows you to discover and assess your SQL data estate at scale when on VMware, providing Azure SQL deployment recommendations, target sizing, and monthly estimates. |
|[Data Migration Assistant (DMA)](/sql/dma/dma-migrateonpremsqltosqldb)|The Data Migration Assistant is a desktop tool that provides seamless assessments of SQL Server and migrations to Azure SQL Database (both schema and data). The tool can be installed on a server on-premises or on your local machine that has connectivity to your source databases. The migration process is a logical data movement between objects in the source and target database. </br> - Migrate single databases (both schema and data)|
|[Azure Database Migration Service (DMS)](../../../dms/tutorial-sql-server-to-azure-sql.md)|A first-party Azure service that can migrate your SQL Server databases to Azure SQL Database using the Azure portal, or automated with PowerShell. Azure DMS requires you to select a preferred Azure Virtual Network (VNet) during provisioning to ensure there is connectivity to your source SQL Server databases. </br> - Migrate single databases or at scale. |
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
To publish your schema and migrate your data, follow these steps:
Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
-- [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services)
+- [Getting Started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)
- [SQL Server Integration
-
-
-
## Post-migration

After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
These resources were developed as part of the Data SQL Ninja Program, which is s
## Next steps

-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
- To learn more about Azure SQL Managed Instance, see:
  - [An overview of Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md)
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
The following table lists the recommended migration tools:
|Technology | Description|
|---|---|
-| [Azure Migrate](/azure/migrate/how-to-create-azure-sql-assessment) | Azure Migrate for Azure SQL allows you to discover and assess your SQL data estate at scale when on VMware, providing Azure SQL deployment recommendations, target sizing, and monthly estimates. |
+| [Azure Migrate](../../../migrate/how-to-create-azure-sql-assessment.md) | Azure Migrate for Azure SQL allows you to discover and assess your SQL data estate at scale when on VMware, providing Azure SQL deployment recommendations, target sizing, and monthly estimates. |
|[Azure Database Migration Service (DMS)](../../../dms/tutorial-sql-server-to-managed-instance.md) | First-party Azure service that supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. |
|[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports RESTORE of native SQL Server database backups (.bak files), making it the easiest migration option for customers who can provide full database backups to Azure storage. Full and differential backups are also supported and documented in the [migration assets section](#migration-assets) later in this article.|
|[Log Replay Service (LRS)](../../managed-instance/log-replay-service-migrate.md) | This is a cloud service enabled for Managed Instance based on the SQL Server log shipping technology, making it a migration option for customers who can provide full, differential, and log database backups to Azure storage. LRS is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
SELECT * FROM sys.table_types WHERE is_memory_optimized=1
SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1
```
-To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/in-memory-oltp-overview)
+To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](../../in-memory-oltp-overview.md)
## Leverage advanced features
To start migrating your SQL Server to Azure SQL Managed Instance, see the [SQL S
- To assess the application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
-- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
To publish your schema and migrate the data, follow these steps:
In addition to using SSMA, you can also use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see:
-- The article [Getting Started with SQL Server Integration Services](https://docs.microsoft.com//sql/integration-services/sql-server-integration-services).
+- The article [Getting Started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services).
- The white paper [SQL Server Integration
These resources were developed as part of the Data SQL Ninja Program, which is s
- To assess the application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
-- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
-
+- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Dh2i High Availability Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/dh2i-high-availability-tutorial.md
In this tutorial, we are going to set up a DxEnterprise cluster using [DxAdmin C
## Prerequisites

-- Create four VMs in Azure. Follow the [Quickstart: Create Linux virtual machine in Azure portal](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal) article to create Linux based virtual machines. Similarly, for creating the Windows based virtual machine, follow the [Quickstart: Create a Windows virtual machine in the Azure portal](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal) article.
-- Install .NET 3.1 on all the Linux-based VMs that are going to be part of the cluster. Follow the instructions documented [here](https://docs.microsoft.com/dotnet/core/install/linux) based on the Linux operating system that you choose.
+- Create four VMs in Azure. Follow the [Quickstart: Create Linux virtual machine in Azure portal](../../../virtual-machines/linux/quick-create-portal.md) article to create Linux based virtual machines. Similarly, for creating the Windows based virtual machine, follow the [Quickstart: Create a Windows virtual machine in the Azure portal](../../../virtual-machines/windows/quick-create-portal.md) article.
+- Install .NET 3.1 on all the Linux-based VMs that are going to be part of the cluster. Follow the instructions documented [here](/dotnet/core/install/linux) based on the Linux operating system that you choose.
- A valid DxEnterprise license with availability group management features enabled is required. For more information, see [DxEnterprise Free Trial](https://dh2i.com/trial/) to learn how you can obtain a free trial.

## Install SQL Server on all the Azure VMs that will be part of the availability group
-In this tutorial, we are setting up a three node Linux-based cluster running the availability group. Follow the documentation for [SQL Server installation on Linux](https://docs.microsoft.com/sql/linux/sql-server-linux-overview#install) based on the choice of your Linux platform. We also recommend you install the [SQL Server tools](https://docs.microsoft.com/sql/linux/sql-server-linux-setup-tools) for this tutorial.
+In this tutorial, we are setting up a three node Linux-based cluster running the availability group. Follow the documentation for [SQL Server installation on Linux](/sql/linux/sql-server-linux-overview#install) based on the choice of your Linux platform. We also recommend you install the [SQL Server tools](/sql/linux/sql-server-linux-setup-tools) for this tutorial.
  > [!NOTE]
-> Ensure that the Linux OS that you choose is a common distribution that is supported by both [DH2i DxEnterprise ( See Minimal System Requirements Section)](https://dh2i.com/wp-content/uploads/DxEnterprise-v20-Admin-Guide.pdf) and [Microsoft SQL Server](https://docs.microsoft.com/sql/linux/sql-server-linux-release-notes-2019#supported-platforms).
+> Ensure that the Linux OS that you choose is a common distribution that is supported by both [DH2i DxEnterprise ( See Minimal System Requirements Section)](https://dh2i.com/wp-content/uploads/DxEnterprise-v20-Admin-Guide.pdf) and [Microsoft SQL Server](/sql/linux/sql-server-linux-release-notes-2019#supported-platforms).
>
> In this example, we use Ubuntu 18.04, which is supported by both DH2i DxEnterprise and Microsoft SQL Server.

For this tutorial, we are not going to install SQL Server on the Windows VM, as this node is not going to be part of the cluster, and is used only to manage the cluster using DxAdmin.
-After you complete this step, you should have SQL Server and [SQL Server tools](https://docs.microsoft.com/sql/linux/sql-server-linux-setup-tools) (optionally) installed on all three Linux-based VMs that will participate in the availability group.
+After you complete this step, you should have SQL Server and [SQL Server tools](/sql/linux/sql-server-linux-setup-tools) (optionally) installed on all three Linux-based VMs that will participate in the availability group.
## Install DxEnterprise on all the VMs and configure the cluster
To install just the DxAdmin client tool on the Windows VM, follow [DxAdmin Clien
After this step, you should have the DxEnterprise cluster created on the Linux VMs, and the DxAdmin client installed on the Windows client machine.

> [!NOTE]
-> You can also create a three node cluster where one of the node is added as *configuration-only mode*, as described [here](https://docs.microsoft.com/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups#SupportedAvModes) to enable automatic failover.
+> You can also create a three-node cluster where one of the nodes is added in *configuration-only mode*, as described [here](/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups#SupportedAvModes), to enable automatic failover.
## Create the virtual hosts to provide failover support and high availability
Connect to the Windows client machine running DxAdmin to connect to the cluster
## Create the Internal Azure Load balancer for Listener (optional)
-In this optional step, you can create and configure the Azure Load balancer that holds the IP addresses for the availability group listeners. For more information on Azure Load Balancer, refer [Azure Load Balancer](https://docs.microsoft.com/azure/load-balancer/load-balancer-overview). To configure the Azure load balancer and availability group listener using DxAdmin, follow the DxEnterprise [Azure Load Balancer Quick Start Guide](https://dh2i.com/docs/20-0/dxenterprise/dh2i-dxenterprise-20-0-software-azure-load-balancer-quick-start-guide/).
+In this optional step, you can create and configure the Azure load balancer that holds the IP addresses for the availability group listeners. For more information on Azure Load Balancer, see [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md). To configure the Azure load balancer and availability group listener using DxAdmin, follow the DxEnterprise [Azure Load Balancer Quick Start Guide](https://dh2i.com/docs/20-0/dxenterprise/dh2i-dxenterprise-20-0-software-azure-load-balancer-quick-start-guide/).
After this step, you should have an availability group listener created and mapped to the Internal Azure load balancer.
For more information on more operations within DxEnterprise, access the [DxEnter
## Next Steps

-- Learn more about [Availability Groups on Linux](https://docs.microsoft.com/sql/linux/sql-server-linux-availability-group-overview)
-- [Quickstart: Create Linux virtual machine in Azure portal](https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal)
-- [Quickstart: Create a Windows virtual machine in the Azure portal](https://docs.microsoft.com/azure/virtual-machines/windows/quick-create-portal)
-- [Supported platforms for SQL Server 2019 on Linux](https://docs.microsoft.com/sql/linux/sql-server-linux-release-notes-2019#supported-platforms)
+- Learn more about [Availability Groups on Linux](/sql/linux/sql-server-linux-availability-group-overview)
+- [Quickstart: Create Linux virtual machine in Azure portal](../../../virtual-machines/linux/quick-create-portal.md)
+- [Quickstart: Create a Windows virtual machine in the Azure portal](../../../virtual-machines/windows/quick-create-portal.md)
+- [Supported platforms for SQL Server 2019 on Linux](/sql/linux/sql-server-linux-release-notes-2019#supported-platforms)
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
+
+ Title: "Checklist: Performance best practices & guidelines"
+description: Provides a quick checklist to review your best practices and guidelines to optimize the performance of your SQL Server on Azure Virtual Machine (VM).
+
+documentationcenter: na
+
+editor: ''
+tags: azure-service-management
+
+ms.devlang: na
+
+ vm-windows-sql-server
+ Last updated : 03/25/2021++++
+# Checklist: Performance best practices for SQL Server on Azure VMs
+
+This article provides a quick checklist as a series of best practices and guidelines to optimize performance of your SQL Server on Azure Virtual Machines (VMs).
+
+For comprehensive details, see the other articles in this series: [VM size](performance-guidelines-best-practices-vm-size.md), [Storage](performance-guidelines-best-practices-storage.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
++
+## Overview
+
+While running SQL Server on Azure Virtual Machines, continue using the same database performance tuning options that are applicable to SQL Server in on-premises server environments. However, the performance of a relational database in a public cloud depends on many factors, such as the size of a virtual machine, and the configuration of the data disks.
+
+There is typically a trade-off between optimizing for costs and optimizing for performance. This performance best practices series is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every recommended optimization. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
+
+## VM Size
+
+The following is a quick checklist of VM size best practices for running your SQL Server on Azure VM:
+
+- Use VM sizes with 4 or more vCPUs, like the [Standard_M8-4ms](../../../virtual-machines/m-series.md), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
+- Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads.
+- The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. Both M series VMs offer the highest memory-to-vCore ratio required for mission critical workloads and are also ideal for data warehouse workloads.
+- Consider a higher memory-to-vCore ratio for mission critical and data warehouse workloads.
+- Use the Azure Marketplace virtual machine images, as the SQL Server settings and storage options are configured for optimal SQL Server performance.
+- Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.
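The memory-to-vCore ratio mentioned in the checklist above is a simple quotient that can be used to compare candidate VM sizes. As a minimal illustrative sketch (the 32 GiB / 4 vCore figures below are an assumption for an E4ds_v4-style size, not authoritative Azure specifications):

```python
def memory_to_vcore_ratio(memory_gib: float, vcores: int) -> float:
    """Ratio used to compare VM sizes for OLTP and data warehouse workloads."""
    if vcores < 1:
        raise ValueError("vCore count must be at least 1")
    return memory_gib / vcores

# Illustrative only: a size with 32 GiB of memory and 4 vCores has a ratio of 8.
print(memory_to_vcore_ratio(32, 4))  # 8.0
```

A higher ratio favors memory-hungry, mission-critical workloads, which is why the checklist points at the memory-optimized M and Mv2 series.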
+
+To learn more, see the comprehensive [VM size best practices](performance-guidelines-best-practices-vm-size.md).
+
+## Storage
+
+The following is a quick checklist of storage configuration best practices for running your SQL Server on Azure VM:
+
+- Monitor the application and [determine storage bandwidth and latency requirements](../../../virtual-machines/premium-storage-performance.md#counters-to-measure-application-performance-requirements) for SQL Server data, log, and tempdb files before choosing the disk type.
+- To optimize storage performance, plan for highest uncached IOPS available and use data caching as a performance feature for data reads while avoiding [virtual machine and disks capping/throttling](../../../virtual-machines/premium-storage-performance.md#throttling).
+- Place data, log, and tempdb files on separate drives.
+ - For the data drive, only use [premium P30 and P40 disks](../../../virtual-machines/disks-types.md#premium-ssd) to ensure the availability of cache support
+ - For the log drive plan for capacity and test performance versus cost while evaluating the [premium P30 - P80 disks](../../../virtual-machines/disks-types.md#premium-ssd).
+ - If submillisecond storage latency is required, use [Azure ultra disks](../../../virtual-machines/disks-types.md#ultra-disk) for the transaction log.
+ - For M-series virtual machine deployments consider [Write Accelerator](../../../virtual-machines/how-to-enable-write-accelerator.md) over using Azure ultra disks.
+ - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD `D:\` drive for most SQL Server workloads after choosing the optimal VM size.
+ - If the capacity of the local drive is not enough for tempdb, consider sizing up the VM. See [Data file caching policies](performance-guidelines-best-practices-storage.md#data-file-caching-policies) for more information.
+- Stripe multiple Azure data disks using [Storage Spaces](/windows-server/storage/storage-spaces/overview) to increase I/O bandwidth up to the target virtual machine's IOPS and throughput limits.
+- Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to read-only for data file disks.
+- Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to none for log file disks.
+ - Do not enable read/write caching on disks that contain SQL Server files.
+ - Always stop the SQL Server service before changing the cache settings of your disk.
+- For development and test workloads, consider using standard storage. It is not recommended to use Standard HDD/SSD for production workloads.
+- [Credit-based Disk Bursting](../../../virtual-machines/disk-bursting.md#credit-based-bursting) (P1-P20) should only be considered for smaller dev/test workloads and departmental systems.
+- Provision the storage account in the same region as the SQL Server VM.
+- Disable Azure geo-redundant storage (geo-replication) and use LRS (local redundant storage) on the storage account.
+- Format your data disk to use 64 KB allocation unit size for all data files placed on a drive other than the temporary `D:\` drive (which has a default of 4 KB). SQL Server VMs deployed through Azure Marketplace come with data disks formatted with allocation unit size and interleave for the storage pool set to 64 KB.
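The striping guidance in the checklist above aggregates per-disk limits. A minimal sketch of the arithmetic, assuming the published premium SSD P30 figures of 5,000 IOPS and 200 MB/s (confirm current limits in the Azure disk types documentation), and ignoring the VM-level caps that ultimately bound the total:

```python
# Illustrative P30 limits; verify against current Azure documentation.
P30 = {"iops": 5000, "mbps": 200}

def striped_limits(disk: dict, count: int) -> dict:
    """Striping disks with Storage Spaces sums IOPS and throughput.
    The VM's own uncached limits (not modeled here) still cap the result."""
    return {"iops": disk["iops"] * count, "mbps": disk["mbps"] * count}

# Two striped P30 disks provide roughly 10,000 IOPS and 400 MB/s.
print(striped_limits(P30, 2))  # {'iops': 10000, 'mbps': 400}
```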
+
+To learn more, see the comprehensive [Storage best practices](performance-guidelines-best-practices-storage.md).
++
+## Azure & SQL feature specific
+
+The following is a quick checklist of best practices for SQL Server and Azure-specific configurations when running your SQL Server on Azure VM:
+
+- Register with the [SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md) to unlock a number of [feature benefits](sql-server-iaas-agent-extension-automate-management.md#feature-benefits).
+- Enable database page compression.
+- Enable instant file initialization for data files.
+- Limit autogrowth of the database.
+- Disable autoshrink of the database.
+- Move all databases to data disks, including system databases.
+- Move SQL Server error log and trace file directories to data disks.
+- Configure default backup and database file locations.
+- [Enable locked pages in memory](/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows).
+- Evaluate and apply the [latest cumulative updates](/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server) for the installed version of SQL Server.
+- Back up directly to Azure Blob storage.
+- Use multiple [tempdb](/sql/relational-databases/databases/tempdb-database#optimizing-tempdb-performance-in-sql-server) files, 1 file per core, up to 8 files.
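The tempdb guidance in the last bullet above (one data file per core, up to eight files) can be sketched as a small helper; this is an illustration of the rule, not an official tool:

```python
def suggested_tempdb_file_count(vcores: int) -> int:
    """One tempdb data file per vCore, capped at 8, per the checklist above."""
    if vcores < 1:
        raise ValueError("vCore count must be at least 1")
    return min(vcores, 8)

# A 4-vCore VM gets 4 files; a 16-vCore VM is capped at 8.
print(suggested_tempdb_file_count(4))   # 4
print(suggested_tempdb_file_count(16))  # 8
```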
+++
+## Next steps
+
+To learn more, see the other articles in this series:
+- [VM size](performance-guidelines-best-practices-vm-size.md)
+- [Storage](performance-guidelines-best-practices-storage.md)
+- [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
+
+For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
+
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
azure-sql Performance Guidelines Best Practices Collect Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-collect-baseline.md
+
+ Title: "Collect baseline: Performance best practices & guidelines"
+description: Provides steps to collect a performance baseline as guidelines to optimize the performance of your SQL Server on Azure Virtual Machine (VM).
+
+documentationcenter: na
+
+editor: ''
+tags: azure-service-management
+ms.assetid: a0c85092-2113-4982-b73a-4e80160bac36
+
+ms.devlang: na
+
+ vm-windows-sql-server
+ Last updated : 03/25/2021+++
+# Collect baseline: Performance best practices for SQL Server on Azure VM
+
+This article provides information to collect a performance baseline as a series of best practices and guidelines to optimize performance for your SQL Server on Azure Virtual Machines (VMs).
+
+There is typically a trade-off between optimizing for costs and optimizing for performance. This performance best practices series is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every recommended optimization. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
+
+## Overview
+
+For a prescriptive approach, gather performance counters using PerfMon/LogMan and capture SQL Server wait statistics to better understand general pressures and potential bottlenecks of the source environment.
+
+Start by collecting the CPU, memory, [IOPS](../../../virtual-machines/premium-storage-performance.md#iops), [throughput](../../../virtual-machines/premium-storage-performance.md#throughput), and [latency](../../../virtual-machines/premium-storage-performance.md#latency) of the source workload at peak times following the [application performance checklist](../../../virtual-machines/premium-storage-performance.md#application-performance-requirements-checklist).
+
+Gather data during peak hours, such as workloads during your typical business day, but also during other high-load processes such as end-of-day processing and weekend ETL workloads. Consider scaling up your resources for atypically heavy workloads, such as end-of-quarter processing, and then scale down once the workload completes.
+
+Use the performance analysis to select the [VM Size](../../../virtual-machines/sizes-memory.md) that can scale to your workload's performance requirements.
++
+## Storage
+
+SQL Server performance depends heavily on the I/O subsystem, and storage performance is measured by IOPS and throughput. Unless your database fits into physical memory, SQL Server constantly brings database pages in and out of the buffer pool. The log and data files should be treated differently: access to the log file is sequential, except when a transaction needs to be rolled back, whereas data files, including tempdb, are accessed randomly. If you have a slow I/O subsystem, your users may experience performance issues such as slow response times and tasks that do not complete due to time-outs.
+
+The Azure Marketplace virtual machines have log files on a physical disk that is separate from the data files by default. The tempdb data files count and size meet best practices and are targeted to the ephemeral `D:\` drive.
+
+The following PerfMon counters can help validate the IO throughput required by your SQL Server:
+* **\LogicalDisk\Disk Reads/Sec** (read IOPS)
+* **\LogicalDisk\Disk Writes/Sec** (write IOPS)
+* **\LogicalDisk\Disk Read Bytes/Sec** (read throughput requirements for the data, log, and tempdb files)
+* **\LogicalDisk\Disk Write Bytes/Sec** (write throughput requirements for the data, log, and tempdb files)
+
+Using IOPS and throughput requirements at peak levels, evaluate VM sizes that match the capacity from your measurements.
+
+If your workload requires 20K read IOPS and 10K write IOPS, you can either choose E16s_v3 (with up to 32K cached and 25600 uncached IOPS) or M16_s (with up to 20K cached and 10K uncached IOPS) with 2 P30 disks striped using Storage Spaces.
+
+Make sure to understand both throughput and IOPS requirements of the workload as VMs have different scale limits for IOPS and throughput.
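The matching step described above can be sketched as a filter over candidate VM sizes. The size names and uncached limits below are illustrative placeholders only (not authoritative Azure figures); the point is that a candidate must satisfy *both* the IOPS and the throughput dimensions:

```python
# Hypothetical uncached limits per VM size — confirm real values in Azure docs.
vm_sizes = {
    "E16s_v3": {"uncached_iops": 25600, "uncached_mbps": 384},
    "E8s_v3":  {"uncached_iops": 12800, "uncached_mbps": 192},
    "M16ms":   {"uncached_iops": 10000, "uncached_mbps": 250},
}

def candidates(required_iops: int, required_mbps: int) -> list:
    """Return VM sizes whose uncached limits cover both requirements."""
    return [
        name for name, lim in vm_sizes.items()
        if lim["uncached_iops"] >= required_iops
        and lim["uncached_mbps"] >= required_mbps
    ]

# A workload measured at 20K IOPS and 300 MB/s at peak:
print(candidates(20000, 300))  # ['E16s_v3']
```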
+
+## Memory
+
+Track both the external memory used by the OS and the memory used internally by SQL Server. Identifying pressure on either component helps size virtual machines and identify opportunities for tuning.
+
+The following PerfMon counters can help validate the memory health of a SQL Server virtual machine:
+* [\Memory\Available MBytes](/azure/monitoring/infrastructure-health/vmhealth-windows/winserver-memory-availmbytes)
+* [\SQLServer:Memory Manager\Target Server Memory (KB)](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
+* [\SQLServer:Memory Manager\Total Server Memory (KB)](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
+* [\SQLServer:Buffer Manager\Lazy writes/sec](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
+* [\SQLServer:Buffer Manager\Page life expectancy](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
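When interpreting the Page life expectancy counter listed above, a common community heuristic (not an official Microsoft figure, and not from this article) scales the acceptable threshold with buffer pool size: roughly 300 seconds of page life expectancy per 4 GB of buffer pool. A sketch under that assumption:

```python
def ple_threshold_seconds(buffer_pool_gb: float) -> float:
    """Common community heuristic: ~300 seconds of page life expectancy
    per 4 GB of buffer pool. Treat the result as a rough guide only."""
    return (buffer_pool_gb / 4) * 300

# A VM with a 32 GB buffer pool would target a PLE of roughly 2400 seconds.
print(ple_threshold_seconds(32))  # 2400.0
```

Sustained PLE well below the computed figure, together with elevated lazy writes/sec, suggests memory pressure and a case for a larger memory-optimized size.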
+
+## Compute
+
+Compute in Azure is managed differently than on-premises. On-premises servers are built to last several years without an upgrade due to the management overhead and cost of acquiring new hardware. Virtualization mitigates some of these issues but applications are optimized to take the most advantage of the underlying hardware, meaning any significant change to resource consumption requires rebalancing the entire physical environment.
+
+This is not a challenge in Azure where a new virtual machine on a different series of hardware, and even in a different region, is easy to achieve.
+
+In Azure, you want to take advantage of as much of the virtual machine's resources as possible. Therefore, configure Azure virtual machines to keep the average CPU as high as possible without impacting the workload.
+
+The following PerfMon counters can help validate the compute health of a SQL Server virtual machine:
+* **\Processor Information(_Total)\% Processor Time**
+* **\Process(sqlservr)\% Processor Time**
+
+> [!NOTE]
+> Ideally, try to aim for using 80% of your compute, with peaks above 90% but not reaching 100% for any sustained period of time. Fundamentally, you only want to provision the compute the application needs and then plan to scale up or down as the business requires.
++
+## Next steps
+
+To learn more, see the other articles in this series:
+- [Quick checklist](performance-guidelines-best-practices-checklist.md)
+- [VM size](performance-guidelines-best-practices-vm-size.md)
+- [Storage](performance-guidelines-best-practices-storage.md)
++
+For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
+
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
+
+ Title: "Storage: Performance best practices & guidelines"
+description: Provides storage best practices and guidelines to optimize the performance of your SQL Server on Azure Virtual Machine (VM).
+
+documentationcenter: na
+
+editor: ''
+tags: azure-service-management
+ms.assetid: a0c85092-2113-4982-b73a-4e80160bac36
+
+ms.devlang: na
+
+ vm-windows-sql-server
+ Last updated : 03/25/2021+++
+# Storage: Performance best practices for SQL Server on Azure VMs
+
+This article provides storage best practices and guidelines to optimize performance for your SQL Server on Azure Virtual Machines (VMs).
+
+There is typically a trade-off between optimizing for costs and optimizing for performance. This performance best practices series is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every recommended optimization. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
+
+To learn more, see the other articles in this series: [Performance Checklist](performance-guidelines-best-practices-checklist.md), [VM size](performance-guidelines-best-practices-vm-size.md), and [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
+
+## Checklist
+
+Review the following checklist for a brief overview of the storage best practices that the rest of the article covers in greater detail:
+
+- Monitor the application and [determine storage bandwidth and latency requirements](../../../virtual-machines/premium-storage-performance.md#counters-to-measure-application-performance-requirements) for SQL Server data, log, and tempdb files before choosing the disk type.
+- To optimize storage performance, plan for highest uncached IOPS available and use data caching as a performance feature for data reads while avoiding [virtual machine and disks capping](../../../virtual-machines/premium-storage-performance.md#throttling).
+- Place data, log, and tempdb files on separate drives.
+ - For the data drive, only use [premium P30 and P40 disks](../../../virtual-machines/disks-types.md#premium-ssd) to ensure the availability of cache support
+ - For the log drive plan for capacity and test performance versus cost while evaluating the [premium P30 - P80 disks](../../../virtual-machines/disks-types.md#premium-ssd)
+ - If submillisecond storage latency is required, use [Azure ultra disks](../../../virtual-machines/disks-types.md#ultra-disk) for the transaction log.
+ - For M-series virtual machine deployments consider [write accelerator](../../../virtual-machines/how-to-enable-write-accelerator.md) over using Azure ultra disks.
+ - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD `D:\` drive for most SQL Server workloads after choosing the optimal VM size.
+ - If the capacity of the local drive is not enough for tempdb, consider sizing up the VM. See [Data file caching policies](#data-file-caching-policies) for more information.
+- Stripe multiple Azure data disks using [Storage Spaces](/windows-server/storage/storage-spaces/overview) to increase I/O bandwidth up to the target virtual machine's IOPS and throughput limits.
+- Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to read-only for data file disks.
+- Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to none for log file disks.
+ - Do not enable read/write caching on disks that contain SQL Server files.
+ - Always stop the SQL Server service before changing the cache settings of your disk.
+- For development and test workloads, and for long-term backup archival, consider using standard storage. It is not recommended to use Standard HDD/SSD for production workloads.
+- [Credit-based Disk Bursting](../../../virtual-machines/disk-bursting.md#credit-based-bursting) (P1-P20) should only be considered for smaller dev/test workloads and departmental systems.
+- Provision the storage account in the same region as the SQL Server VM.
+- Disable Azure geo-redundant storage (geo-replication) and use locally redundant storage (LRS) on the storage account.
+- Format your data disk to use 64 KB allocation unit size for all data files placed on a drive other than the temporary `D:\` drive (which has a default of 4 KB). SQL Server VMs deployed through Azure Marketplace come with data disks formatted with allocation unit size and interleave for the storage pool set to 64 KB.
+
+To compare the storage checklist with the others, see the comprehensive [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
+
+## Overview
+
+To find the most effective configuration for SQL Server workloads on an Azure VM, start by [measuring the storage performance of your business application](performance-guidelines-best-practices-collect-baseline.md#storage). Once storage requirements are known, select a virtual machine that supports the necessary IOPS and throughput with the appropriate memory-to-vCore ratio.
+
+Choose a VM size with enough storage scalability for your workload and a mixture of disks (usually in a storage pool) that meet the capacity and performance requirements of your business.
+
+The type of disk depends on both the file type that's hosted on the disk and your peak performance requirements.
+
+> [!TIP]
+> Provisioning a SQL Server VM through the Azure portal helps guide you through the storage configuration process and implements most storage best practices such as creating separate storage pools for your data and log files, targeting tempdb to the `D:\` drive, and enabling the optimal caching policy. For more information about provisioning and configuring storage, see [SQL VM storage configuration](storage-configuration.md).
+
+## VM disk types
+
+You have a choice in the performance level for your disks. The types of managed disks available as underlying storage (listed by increasing performance capabilities) are standard hard disk drives (HDD), standard SSDs, premium solid-state drives (SSD), and ultra disks.
+
+The performance of the disk increases with capacity, grouped by [premium disk labels](../../../virtual-machines/disks-types.md#premium-ssd) ranging from the P1 with 4 GiB of space and 120 IOPS to the P80 with 32 TiB of storage and 20,000 IOPS. Premium storage supports a storage cache that helps improve read and write performance for some workloads. For more information, see [Managed disks overview](../../../virtual-machines/managed-disks-overview.md).
+
+There are also three main [disk types](../../../virtual-machines/managed-disks-overview.md#disk-roles) to consider for your SQL Server on Azure VM - an OS disk, a temporary disk, and your data disks. Carefully choose what is stored on the operating system drive `(C:\)` and the ephemeral temporary drive `(D:\)`.
+
+### Operating system disk
+
+An operating system disk is a VHD that can be booted and mounted as a running version of an operating system and is labeled as the `C:\` drive. When you create an Azure virtual machine, the platform will attach at least one disk to the VM for the operating system disk. The `C:\` drive is the default location for application installs and file configuration.
+
+For production SQL Server environments, do not use the operating system disk for data files, log files, or error logs.
+
+### Temporary disk
+
+Many Azure virtual machines contain another disk type called the temporary disk (labeled as the `D:\` drive). Depending on the virtual machine series and size, the capacity of this disk will vary. The temporary disk is ephemeral, which means the disk storage is recreated (that is, it is deallocated and allocated again) when the virtual machine is restarted or moved to a different host (for [service healing](/troubleshoot/azure/virtual-machines/understand-vm-reboot), for example).
+
+The temporary storage drive is not persisted to remote storage and therefore should not store user database files, transaction log files, or anything that must be preserved.
+
+Place tempdb on the local temporary SSD `D:\` drive for SQL Server workloads unless consumption of local cache is a concern. If you are using a virtual machine that [does not have a temporary disk](../../../virtual-machines/azure-vms-no-temp-disk.md) then it is recommended to place tempdb on its own isolated disk or storage pool with caching set to read-only. To learn more, see [tempdb data caching policies](performance-guidelines-best-practices-storage.md#data-file-caching-policies).
+
+### Data disks
+
+Data disks are remote storage disks that are often created in [storage pools](/windows-server/storage/storage-spaces/overview) in order to exceed the capacity and performance that any single disk could offer to the virtual machine.
+
+Attach the minimum number of disks that satisfies the IOPS, throughput, and capacity requirements of your workload. Do not exceed the maximum number of data disks of the smallest virtual machine you plan to resize to.
+
+Place data and log files on data disks provisioned to best suit performance requirements.
+
+Format your data disk to use 64 KB allocation unit size for all data files placed on a drive other than the temporary `D:\` drive (which has a default of 4 KB). SQL Server VMs deployed through Azure Marketplace come with data disks formatted with allocation unit size and interleave for the storage pool set to 64 KB.
+
+## Premium disks
+
+Use premium SSD disks for data and log files for production SQL Server workloads. Premium SSD IOPS and bandwidth varies based on the [disk size and type](../../../virtual-machines/disks-types.md).
+
+For production workloads, use the P30 and/or P40 disks for SQL Server data files to ensure caching support and use the P30 up to P80 for SQL Server transaction log files. For the best total cost of ownership, start with P30s (5000 IOPS/200 MBps) for data and log files and only choose higher capacities when you need to control the virtual machine disk count.
+
+For OLTP workloads, match the target IOPS per disk (or storage pool) with your performance requirements using workloads at peak times and the `Disk Reads/sec` + `Disk Writes/sec` performance counters. For data warehouse and reporting workloads, match the target throughput using workloads at peak times and the `Disk Read Bytes/sec` + `Disk Write Bytes/sec`.
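
To make the counter arithmetic concrete, here is a minimal sketch (the counter names are the ones listed above; the peak values, the `storage_targets` helper, and the 20% headroom factor are hypothetical assumptions) that turns peak readings into IOPS and MB/s targets:

```python
# Sketch: turn peak Performance Monitor readings into per-pool storage targets.
# Counter names are from this article; the peak values here are hypothetical.

def storage_targets(reads_per_sec, writes_per_sec,
                    read_bytes_per_sec, write_bytes_per_sec,
                    headroom=1.2):
    """Return required (IOPS, MB/s), with a safety margin over the observed peak."""
    iops = (reads_per_sec + writes_per_sec) * headroom
    mbps = (read_bytes_per_sec + write_bytes_per_sec) / (1024 * 1024) * headroom
    return round(iops), round(mbps, 1)

# Hypothetical peak: 7,000 reads/s + 3,000 writes/s, 150 MiB/s combined.
print(storage_targets(7000, 3000, 100 * 1024**2, 50 * 1024**2))  # (12000, 180.0)
```

With 20% headroom, this hypothetical peak lands on the 12,000 IOPS / 180 MB/s workload used as the running example later in this article.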
+
+To achieve optimal performance with Storage Spaces, configure two pools: one for the log file(s) and the other for the data files. If you are not using disk striping, use two premium SSD disks mapped to separate drives, where one drive contains the log file and the other contains the data.
+
+Each disk in your storage pool contributes its [provisioned IOPS and throughput](../../../virtual-machines/disks-types.md#premium-ssd) to the pool. The combined IOPS and throughput capabilities of the disks set the maximum capability of the pool, up to the throughput limits of the virtual machine.
+
+The best practice is to use the fewest disks possible while meeting the minimum requirements for IOPS (and throughput) and capacity. However, the balance of price and performance tends to be better with a larger number of small disks rather than a smaller number of large disks.
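
As a rough sketch of that sizing exercise (the P30 figures of 5,000 IOPS and 200 MB/s come from this article; the 1 TiB P30 capacity, the `disks_needed` helper, and the workload numbers are illustrative assumptions):

```python
# Sketch: minimum number of identical premium disks to meet a requirement.
# Each dimension (IOPS, throughput, capacity) is sized independently and
# the largest count wins.
import math

def disks_needed(req_iops, req_mbps, req_gib,
                 disk_iops=5000, disk_mbps=200, disk_gib=1024):  # P30-like specs
    return max(
        math.ceil(req_iops / disk_iops),
        math.ceil(req_mbps / disk_mbps),
        math.ceil(req_gib / disk_gib),
    )

# 12,000 IOPS, 180 MB/s, 2 TiB of data -> IOPS is the binding constraint.
print(disks_needed(12000, 180, 2048))  # 3
```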
+
+### Scaling premium disks
+
+When an Azure Managed Disk is first deployed, the performance tier for that disk is based on the provisioned disk size. Designate the performance tier at deployment or change it afterwards, without changing the size of the disk. If demand increases, you can increase the performance level to meet your business needs.
+
+Changing the performance tier allows administrators to prepare for and meet higher demand without relying on [disk bursting](../../../virtual-machines/disk-bursting.md#credit-based-bursting).
+
+Use the higher performance for as long as needed where billing is designed to meet the storage performance tier. Upgrade the tier to match the performance requirements without increasing the capacity. Return to the original tier when the extra performance is no longer required.
+
+This cost-effective and temporary expansion of performance is a strong use case for targeted events such as shopping, performance testing, training events and other brief windows where greater performance is needed only for a short term.
+
+For more information, see [Performance tiers for managed disks](../../../virtual-machines/disks-change-performance.md).
+
+## Azure ultra disk
+
+If you need submillisecond storage response times, consider using [Azure ultra disk](../../../virtual-machines/disks-types.md#ultra-disk) for the SQL Server log drive, or even for the data drive in applications that are extremely sensitive to I/O latency.
+
+With ultra disks, capacity and IOPS scale independently, so administrators can provision a disk with the capacity, IOPS, and throughput that the application requires.
+
+Ultra disk is not supported on all VM series and has other limitations such as region availability, redundancy, and support for Azure Backup. To learn more, see [Using Azure ultra disks](../../../virtual-machines/disks-enable-ultra-ssd.md) for a full list of limitations.
+
+## Standard HDDs and SSDs
+
+[Standard HDDs](../../../virtual-machines/disks-types.md#standard-hdd) and SSDs have varying latencies and bandwidth and are only recommended for dev/test workloads. Production workloads should use premium SSDs. If you are using Standard SSD (dev/test scenarios), the recommendation is to add the maximum number of data disks supported by your [VM size](../../../virtual-machines/sizes.md?toc=/azure/virtual-machines/windows/toc.json) and use disk striping with Storage Spaces for the best performance.
+
+## Caching
+
+Virtual machines that support premium storage caching can take advantage of an additional feature called the Azure BlobCache or host caching to extend the IOPS and throughput capabilities of a virtual machine. Virtual machines enabled for both premium storage and premium storage caching have these two different storage bandwidth limits that can be used together to improve storage performance.
+
+The IOPS and MBps throughput without caching counts against a virtual machine's uncached disk throughput limits. The maximum cached limits provide an additional buffer for reads that helps address growth and unexpected peaks.
+
+Enable premium caching whenever the option is supported to significantly improve performance for reads against the data drive without additional cost.
+
+Reads and writes to the Azure BlobCache (cached IOPS and throughput) do not count against the uncached IOPS and throughput limits of the virtual machine.
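
A simplified model of how the two budgets combine for reads (the Standard_M8ms limits of 5,000 uncached and 10,000 cached IOPS are cited later in this article; the `effective_read_iops` helper and the cache hit ratio are hypothetical assumptions, and real behavior depends on the workload):

```python
# Sketch: how the cache hit ratio affects the read IOPS a VM can deliver.
# Reads served from the BlobCache consume the cached budget; misses
# consume the uncached (remote storage) budget.

def effective_read_iops(demand_iops, hit_ratio, cached_limit, uncached_limit):
    """Split read demand between the cached and uncached budgets."""
    cached = min(demand_iops * hit_ratio, cached_limit)
    uncached = min(demand_iops - cached, uncached_limit)
    return cached + uncached

# Standard_M8ms: 5,000 uncached IOPS, 10,000 cached IOPS. An 80% cache
# hit ratio lets 9,000 IOPS of read demand be served in full, well beyond
# the 5,000 uncached limit alone.
print(effective_read_iops(9000, 0.8, cached_limit=10000, uncached_limit=5000))
```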
+
+> [!NOTE]
+> Disk Caching is not supported for disks 4 TiB and larger (P50 and larger). If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching. For more information, see [Disk caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
+
+### Uncached throughput
+
+The max uncached disk IOPS and throughput figure is the remote storage limit that the virtual machine can handle. This limit is defined at the virtual machine level and is not a limit of the underlying disk storage. It applies only to I/O against data drives remotely attached to the VM, not to local I/O against the temp drive (`D:\` drive) or the OS drive.
+
+The amount of uncached IOPS and throughput that is available for a VM can be verified in the documentation for your virtual machine.
+
+For example, the [M-series](../../../virtual-machines/m-series.md) documentation shows that the max uncached throughput for the Standard_M8ms VM is 5000 IOPS and 125 MBps of uncached disk throughput.
+
+![Screenshot showing M-series uncached disk throughput documentation.](./media/performance-guidelines-best-practices/m-series-table.png)
+
+Likewise, you can see that the Standard_M32ts supports 20,000 uncached disk IOPS and 500 MBps uncached disk throughput. This limit is governed at the virtual machine level regardless of the underlying premium disk storage.
+
+For more information, see [uncached and cached limits](../../../virtual-machines/linux/disk-performance-linux.md#virtual-machine-uncached-vs-cached-limits).
++
+### Cached and temp storage throughput
+
+The max cached and temp storage throughput limit is a separate limit from the uncached throughput limit on the virtual machine. The Azure BlobCache consists of a combination of the virtual machine host's random-access memory and locally attached SSD. The temp drive (`D:\` drive) within the virtual machine is also hosted on this local SSD.
+
+The max cached and temp storage throughput limit governs the I/O against the local temp drive (`D:\` drive) and the Azure BlobCache **only if** host caching is enabled.
+
+When caching is enabled on premium storage, virtual machines can scale beyond the limitations of the remote storage uncached VM IOPS and throughput limits.
+
+Only certain virtual machines support both premium storage and premium storage caching (which needs to be verified in the virtual machine documentation). For example, the [M-series](../../../virtual-machines/m-series.md) documentation indicates that both premium storage, and premium storage caching is supported:
+
+![Screenshot showing M-Series Premium Storage support.](./media/performance-guidelines-best-practices/m-series-table-premium-support.png)
+
+The limits of the cache vary based on the virtual machine size. For example, the Standard_M8ms VM supports 10,000 cached disk IOPS and 1000 MBps cached disk throughput with a total cache size of 793 GiB. Similarly, the Standard_M32ts VM supports 40,000 cached disk IOPS and 400 MBps cached disk throughput with a total cache size of 3174 GiB.
+
+![Screenshot showing M-series cached disk throughput documentation.](./media/performance-guidelines-best-practices/m-series-table-cached-temp.png)
+
+You can manually enable host caching on an existing VM. Stop all application workloads and the SQL Server services before any changes are made to your virtual machine's caching policy. Changing any of the virtual machine cache settings results in the target disk being detached and reattached after the settings are applied.
+
+### Data file caching policies
+
+Your storage caching policy varies depending on the type of SQL Server data files that are hosted on the drive.
+
+The following table provides a summary of the recommended caching policies based on the type of SQL Server data:
+
+|SQL Server disk |Recommendation |
+|||
+| **Data disk** | Enable `Read-only` caching for the disks hosting SQL Server data files. <br/> Reads from cache will be faster than the uncached reads from the data disk. <br/> Uncached IOPS and throughput plus Cached IOPS and throughput will yield the total possible performance available from the virtual machine within the VMs limits, but actual performance will vary based on the workload's ability to use the cache (cache hit ratio). <br/>|
+|**Transaction log disk**|Set the caching policy to `None` for disks hosting the transaction log. There is no performance benefit to enabling caching for the Transaction log disk, and in fact having either `Read-only` or `Read/Write` caching enabled on the log drive can degrade performance of the writes against the drive and decrease the amount of cache available for reads on the data drive. |
+|**Operating OS disk** | The default caching policy could be `Read-only` or `Read/write` for the OS drive. <br/> It is not recommended to change the caching level of the OS drive. |
+| **tempdb**| If tempdb cannot be placed on the ephemeral drive `D:\` due to capacity reasons, either resize the virtual machine to get a larger ephemeral drive or place tempdb on a separate data drive with `Read-only` caching configured. <br/> The virtual machine cache and ephemeral drive both use the local SSD, so keep this in mind when sizing as tempdb I/O will count against the cached IOPS and throughput virtual machine limits when hosted on the ephemeral drive.|
+| | |
++
+To learn more, see [Disk caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
++
+## Disk striping
+
+Analyze the throughput and bandwidth required for your SQL data files, including the log file and tempdb, to determine the number of data disks. Throughput and bandwidth limits vary by VM size. To learn more, see [VM Size](../../../virtual-machines/sizes.md).
+
+Add additional data disks and use disk striping for more throughput. For example, an application that needs 12,000 IOPS and 180 MB/s throughput can use three striped P30 disks to deliver 15,000 IOPS and 600 MB/s throughput.
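
The striping arithmetic above can be sketched as follows (the disk and VM figures are taken from this article's examples; the `pool_capability` helper is an illustrative assumption, not an Azure API):

```python
# Sketch: aggregate capability of a striped pool, capped by the VM's
# uncached limits. Striped disks add up, but the VM-level limit is
# the ceiling.

def pool_capability(n_disks, disk_iops, disk_mbps, vm_iops, vm_mbps):
    return min(n_disks * disk_iops, vm_iops), min(n_disks * disk_mbps, vm_mbps)

# Three P30 disks (5,000 IOPS / 200 MB/s each) on a Standard_M32ms
# (20,000 uncached IOPS / 500 MB/s): throughput is VM-capped at 500 MB/s
# even though the stripe could deliver 600 MB/s.
print(pool_capability(3, 5000, 200, vm_iops=20000, vm_mbps=500))  # (15000, 500)
```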
+
+To configure disk striping, see [disk striping](storage-configuration.md#disk-striping).
+
+## Disk capping
+
+There are throughput limits at both the disk and virtual machine level. The maximum IOPS limits per VM and per disk differ and are independent of each other.
+
+Applications that consume resources beyond these limits will be throttled (also known as capped). Select a virtual machine and disk size in a disk stripe that meets application requirements and will not face capping limitations. To address capping, use caching, or tune the application so that less throughput is required.
+
+For example, an application that needs 12,000 IOPS and 180 MB/s can:
+- Use the [Standard_M32ms](../../../virtual-machines/m-series.md) which has a max uncached disk throughput of 20,000 IOPS and 500 MBps.
+- Stripe three P30 disks to deliver 15,000 IOPS and 600 MB/s throughput.
+- Use a [Standard_M16ms](../../../virtual-machines/m-series.md) virtual machine and use host caching to utilize local cache over consuming throughput.
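
A quick way to reason about whether a given configuration would throttle (the values come from the example above; the `is_capped` helper is an illustrative assumption):

```python
# Sketch: a workload is capped if demand exceeds either the pool's
# aggregate limits or the VM-level limits, whichever is lower.

def is_capped(demand_iops, demand_mbps, pool_iops, pool_mbps, vm_iops, vm_mbps):
    limit_iops = min(pool_iops, vm_iops)
    limit_mbps = min(pool_mbps, vm_mbps)
    return demand_iops > limit_iops or demand_mbps > limit_mbps

# Three striped P30s (15,000 IOPS / 600 MB/s) on a Standard_M32ms
# (20,000 IOPS / 500 MB/s) comfortably serve 12,000 IOPS and 180 MB/s.
print(is_capped(12000, 180, 15000, 600, 20000, 500))  # False
```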
+
+Virtual machines configured to scale up during times of high utilization should provision storage with enough IOPS and throughput to support the maximum VM size while keeping the overall number of disks less than or equal to the maximum number supported by the smallest VM SKU targeted to be used.
+
+For more information on disk capping limitations and using caching to avoid capping, see [Disk IO capping](../../../virtual-machines/disks-performance.md).
+
+> [!NOTE]
+> Some disk capping may still result in satisfactory performance for users; tune and maintain workloads rather than resizing to a larger VM to balance cost and performance for the business.
++
+## Write Acceleration
+
+Write Acceleration is a disk feature that is only available for the [M-Series](https://docs.microsoft.com/azure/virtual-machines/m-series) virtual machines (VMs). The purpose of Write Acceleration is to improve the I/O latency of writes against Azure Premium Storage when you need single-digit I/O latency for high-volume, mission-critical OLTP workloads or data warehouse environments.
+
+Use Write Acceleration to improve write latency to the drive hosting the log files. Do not use Write Acceleration for SQL Server data files.
+
+Write Accelerator disks share the same IOPS limit as the virtual machine. Attached disks cannot exceed the Write Accelerator IOPS limit for a VM.
+
+The following table outlines the number of Write Accelerator data disks and IOPS supported per virtual machine:
+
+| VM SKU | # Write Accelerator disks | Write Accelerator disk IOPS per VM |
+||||
+| M416ms_v2, M416s_v2 | 16 | 20000 |
+| M128ms, M128s | 16 | 20000 |
+| M208ms_v2, M208s_v2 | 8 | 10000 |
+| M64ms, M64ls, M64s | 8 | 10000 |
+| M32ms, M32ls, M32ts, M32s | 4 | 5000 |
+| M16ms, M16s | 2 | 2500 |
+| M8ms, M8s | 1 | 1250 |
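
For illustration, the table can be expressed as a small lookup (the limits reproduce the table above; the `wa_supported` helper and the SKU-prefix matching are assumptions made for this sketch):

```python
# Sketch: validate a Write Accelerator configuration against the table above.
# Longer SKU prefixes are listed first so "M416..." is not matched by "M4...".

WA_LIMITS = {  # SKU prefix: (max WA disks, WA IOPS per VM)
    "M416": (16, 20000), "M128": (16, 20000),
    "M208": (8, 10000), "M64": (8, 10000),
    "M32": (4, 5000), "M16": (2, 2500), "M8": (1, 1250),
}

def wa_supported(sku, n_log_disks):
    """Return (fits within the disk limit, WA IOPS per VM) for a SKU."""
    for prefix, (max_disks, iops) in WA_LIMITS.items():
        if sku.startswith(prefix):
            return n_log_disks <= max_disks, iops
    return False, 0  # SKU does not support Write Accelerator

print(wa_supported("M32ms", 2))  # (True, 5000)
```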
+
+There are a number of restrictions to using Write Acceleration. To learn more, see [Restrictions when using Write Accelerator](../../../virtual-machines/how-to-enable-write-accelerator.md#restrictions-when-using-write-accelerator).
++
+### Comparing to Azure ultra disk
+
+The biggest difference between Write Acceleration and Azure ultra disks is that Write Acceleration is a virtual machine feature only available for the M-Series and Azure ultra disks is a storage option. Write Acceleration is a write-optimized cache with its own limitations based on the virtual machine size. Azure ultra disks are a low latency disk storage option for Azure Virtual Machines.
+
+If possible, use Write Acceleration over ultra disks for the transaction log disk. For virtual machines that do not support Write Acceleration but require low latency to the transaction log, use Azure ultra disks.
+
+## Monitor storage performance
+
+To assess storage needs, and determine how well storage is performing, you need to understand what to measure, and what those indicators mean.
+
+[IOPS (input/output operations per second)](../../../virtual-machines/premium-storage-performance.md#iops) is the number of requests the application makes to storage per second. Measure IOPS using the Performance Monitor counters `Disk Reads/sec` and `Disk Writes/sec`. [OLTP (online transaction processing)](/azure/architecture/data-guide/relational-data/online-transaction-processing) applications need to drive higher IOPS in order to achieve optimal performance. Applications such as payment processing systems, online shopping, and retail point-of-sale systems are all examples of OLTP applications.
+
+[Throughput](../../../virtual-machines/premium-storage-performance.md#throughput) is the volume of data that is being sent to the underlying storage, often measured by megabytes per second. Measure throughput with the Performance Monitor counters `Disk Read Bytes/sec` and `Disk Write Bytes/sec`. [Data warehousing](/azure/architecture/data-guide/relational-data/data-warehousing) is optimized around maximizing throughput over IOPS. Applications such as data stores for analysis, reporting, ETL workstreams, and other business intelligence targets are all examples of data warehousing applications.
+
+I/O unit sizes influence IOPS and throughput capabilities: smaller I/O sizes yield higher IOPS, and larger I/O sizes yield higher throughput. SQL Server chooses the optimal I/O size automatically. For more information, see [Optimize IOPS, throughput, and latency for your applications](../../../virtual-machines/premium-storage-performance.md#optimize-iops-throughput-and-latency-at-a-glance).
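
The relationship is simple arithmetic: throughput = IOPS × I/O size. A tiny sketch (the counter values are hypothetical; the helper is illustrative):

```python
# Sketch: derive the average I/O size from throughput and IOPS counters.
# The same throughput can come from few large I/Os or many small ones.

def avg_io_size_kb(bytes_per_sec, ops_per_sec):
    """I/O size = throughput / IOPS, expressed in KiB."""
    return bytes_per_sec / ops_per_sec / 1024

# 125 MiB/s at 2,000 IOPS implies 64 KiB I/Os; the same throughput at
# 16,000 IOPS implies 8 KiB I/Os.
print(avg_io_size_kb(125 * 1024**2, 2000))   # 64.0
print(avg_io_size_kb(125 * 1024**2, 16000))  # 8.0
```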
+
+There are specific Azure Monitor metrics that are invaluable for discovering capping at the virtual machine and disk level, as well as the consumption and health of the Azure BlobCache. To identify key counters to add to your monitoring solution and Azure portal dashboard, see [Storage utilization metrics](../../../virtual-machines/disks-metrics.md#storage-io-utilization-metrics).
+
+> [!NOTE]
+> Azure Monitor does not currently offer disk-level metrics for the ephemeral temp drive `(D:\)`. VM Cached IOPS Consumed Percentage and VM Cached Bandwidth Consumed Percentage will reflect IOPS and throughput from both the ephemeral temp drive `(D:\)` and host caching together.
++
+## Next steps
+
+To learn more about performance best practices, see the other articles in this series:
+- [Quick checklist](performance-guidelines-best-practices-checklist.md)
+- [VM size](performance-guidelines-best-practices-vm-size.md)
+- [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
+
+For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
+
+For detailed testing of SQL Server performance on Azure VMs with TPC-E and TPC_C benchmarks, refer to the blog [Optimize OLTP performance](https://techcommunity.microsoft.com/t5/sql-server/optimize-oltp-performance-with-sql-server-on-azure-vm/ba-p/916794).
+
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
azure-sql Performance Guidelines Best Practices Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-vm-size.md
+
+ Title: "VM size: Performance best practices & guidelines"
+description: Provides VM size guidelines and best practices to optimize the performance of your SQL Server on Azure Virtual Machine (VM).
+
+documentationcenter: na
+
+editor: ''
+tags: azure-service-management
+
+ms.devlang: na
+
+ vm-windows-sql-server
+ Last updated : 03/25/2021+++
+# VM size: Performance best practices for SQL Server on Azure VMs
+
+This article provides VM size guidance and a series of best practices to optimize performance for your SQL Server on Azure Virtual Machines (VMs).
+
+There is typically a trade-off between optimizing for costs and optimizing for performance. This performance best practices series is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every recommended optimization. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
++
+## Checklist
+
+Review the following checklist for a brief overview of the VM size best practices that the rest of the article covers in greater detail:
+
+- Use VM sizes with 4 or more vCPUs, like the [Standard_M8-4ms](../../../virtual-machines/m-series.md), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher.
+- Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads.
+- The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md), [M-](../../../virtual-machines/m-series.md), and [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. The M and Mv2 series offer the highest memory-to-vCore ratios, which are required for mission critical workloads, and are also ideal for data warehouse workloads.
+- Consider a higher memory-to-vCore ratio for mission critical and data warehouse workloads.
+- Leverage the Azure Virtual Machine marketplace images as the SQL Server settings and storage options are configured for optimal SQL Server performance.
+- Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.
+
+To compare the VM size checklist with the others, see the comprehensive [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
+
+## Overview
+
+When you are creating a SQL Server on Azure VM, carefully consider the type of workload necessary. If you are migrating an existing environment, [collect a performance baseline](performance-guidelines-best-practices-collect-baseline.md) to determine your SQL Server on Azure VM requirements. If this is a new VM, then create your new SQL Server VM based on your vendor requirements.
+
+If you are creating a new SQL Server VM with a new application built for the cloud, you can easily size your SQL Server VM as your data and usage requirements evolve.
+Start the development environments with the lower-tier D-Series, B-Series, or Av2-series and grow your environment over time.
+
+Use the SQL Server VM marketplace images with the storage configuration in the portal. This will make it easier to properly create the storage pools necessary to get the size, IOPS, and throughput necessary for your workloads. It is important to choose SQL Server VMs that support premium storage and premium storage caching. See the [storage](performance-guidelines-best-practices-storage.md) article to learn more.
+
+The recommended minimum for a production OLTP environment is 4 vCore, 32 GB of memory, and a memory-to-vCore ratio of 8. For new environments, start with 4 vCore machines and scale to 8, 16, 32 vCores or more when your data and compute requirements change. For OLTP throughput, target SQL Server VMs that have 5000 IOPS for every vCore.
+
+SQL Server data warehouse and mission critical environments will often need to scale beyond the 8 memory-to-vCore ratio. For medium environments, you may want to choose a 16 memory-to-vCore ratio, and a 32 memory-to-vCore ratio for larger data warehouse environments.
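
As a worked example of these ratios (8 for OLTP, 16 for medium data warehouse, and 32 for large data warehouse come from the guidance above; the vCore counts and the `required_memory_gb` helper are hypothetical):

```python
# Sketch: memory implied by the memory-to-vCore ratios discussed above.

def required_memory_gb(vcores, ratio):
    """Memory (GB) = vCores x memory-to-vCore ratio."""
    return vcores * ratio

for workload, ratio in [("OLTP", 8), ("medium DW", 16), ("large DW", 32)]:
    print(f"{workload}: 16 vCores -> {required_memory_gb(16, ratio)} GB")
```

Note that the recommended production OLTP minimum of 4 vCores and 32 GB of memory is exactly the 8:1 ratio.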
+
+SQL Server data warehouse environments often benefit from the parallel processing of larger machines. For this reason, the M-series and the Mv2-series are strong options for larger data warehouse environments.
+
+Use the vCPU and memory configuration from your source machine as a baseline for migrating a current on-premises SQL Server database to SQL Server on Azure VMs. Bring your core license to Azure to take advantage of the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) and save on SQL Server licensing costs.
+
+## Memory optimized
+
+The [memory optimized virtual machine sizes](../../../virtual-machines/sizes-memory.md) are a primary target for SQL Server VMs and the recommended choice by Microsoft. The memory optimized virtual machines offer stronger memory-to-CPU ratios and medium-to-large cache options.
+
+### M, Mv2, and Mdsv2 series
+
+The [M-series](../../../virtual-machines/m-series.md) offers vCore counts and memory for some of the largest SQL Server workloads.
+
+The [Mv2-series](../../../virtual-machines/mv2-series.md) has the highest vCore counts and memory and is recommended for mission critical and data warehouse workloads. Mv2-series instances are memory optimized VM sizes that provide unparalleled computational performance for large in-memory databases and workloads with a high memory-to-CPU ratio, making them well suited to relational database servers, large caches, and in-memory analytics.
+
+For example, the [Standard_M64ms](../../../virtual-machines/m-series.md) has a memory-to-vCore ratio of 28.
+
+The [Mdsv2 Medium Memory series](../../../virtual-machines/msv2-mdsv2-series.md) is a new M-series offering, currently in [preview](https://aka.ms/Mv2MedMemoryPreview), that provides M-series-level Azure virtual machines with a mid-tier memory offering. These machines are well suited for SQL Server workloads, supporting memory-to-vCore ratios from a minimum of 10 up to 30.
+
+Some of the features of the M and Mv2-series attractive for SQL Server performance include [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching) support, [ultra-disk](../../../virtual-machines/disks-enable-ultra-ssd.md) support, and [write acceleration](../../../virtual-machines/how-to-enable-write-accelerator.md).
+
+### Edsv4-series
+
+The [Edsv4-series](../../../virtual-machines/edv4-edsv4-series.md) is designed for memory-intensive applications. These VMs have a large local SSD storage capacity, strong local disk IOPS, and up to 504 GiB of RAM. Most of these virtual machines provide a nearly consistent 8 GiB of memory per vCore, which is ideal for standard SQL Server workloads.
+
+A new virtual machine in this group, the [Standard_E80ids_v4](../../../virtual-machines/edv4-edsv4-series.md), offers 80 vCores and 504 GiB of memory, for a memory-to-vCore ratio of about 6. This virtual machine is notable because it is [isolated](../../../virtual-machines/isolation.md), which means it is guaranteed to be the only virtual machine running on the host and is therefore isolated from other customer workloads. Because its memory-to-vCore ratio is lower than what is recommended for SQL Server, use it only when isolation is required.
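A quick calculation, using the vCore and memory figures quoted in this article (verify them against the current VM size documentation), shows how sizes compare against the recommended memory-to-vCore ratio of 8:

```python
# vCore counts and memory (GB) as quoted in this article.
SIZES = {
    "Standard_M64ms": (64, 1792),     # memory optimized M-series
    "Standard_E80ids_v4": (80, 504),  # isolated Edsv4
}

for name, (vcores, memory_gb) in SIZES.items():
    ratio = memory_gb / vcores
    verdict = "meets" if ratio >= 8 else "is below"
    print(f"{name}: {ratio:.1f} memory-to-vCore, {verdict} the recommended ratio of 8")
```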
+
+The Edsv4-series virtual machines support [premium storage](../../../virtual-machines/premium-storage-performance.md), and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
+
+### DSv2-series 11-15
+
+The [DSv2-series 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) has the same memory and disk configurations as the previous D-series. This series has a consistent memory-to-CPU ratio of 7 across all virtual machines. This is the smallest of the memory-optimized series and is a good low-cost option for entry-level SQL Server workloads.
+
+The [DSv2-series 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) supports [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching), which is strongly recommended for optimal performance.
+
+## General purpose
+
+The [general purpose virtual machine sizes](../../../virtual-machines/sizes-general.md) are designed to provide balanced memory-to-vCore ratios for smaller entry level workloads such as development and test, web servers, and smaller database servers.
+
+Because of the smaller memory-to-vCore ratios with the general purpose virtual machines, it is important to carefully monitor memory-based performance counters to ensure SQL Server is able to get the buffer cache memory it needs. See [memory performance baseline](performance-guidelines-best-practices-collect-baseline.md#memory) for more information.
+
+Since the starting recommendation for production workloads is a memory-to-vCore ratio of 8, the minimum recommended configuration for a general purpose VM running SQL Server is 4 vCPU and 32 GB of memory.
+
+### Ddsv4 series
+
+The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) offers a fair combination of vCPU, memory, and temporary disk, but with a smaller memory-to-vCore ratio.
+
+The Ddsv4 VMs include lower latency and higher-speed local storage.
+
+These machines are ideal for side-by-side SQL and app deployments that require fast access to temp storage and departmental relational databases. There is a standard memory-to-vCore ratio of 4 across all of the virtual machines in this series.
+
+For this reason, it is recommended to use the D8ds_v4, which has 8 vCores and 32 GB of memory, as the starting virtual machine in this series. The largest machine is the D64ds_v4, which has 64 vCores and 256 GB of memory.
+
+The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) virtual machines support [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
+
+> [!NOTE]
+> The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) does not have the memory-to-vCore ratio of 8 that is recommended for SQL Server workloads. As such, consider using these virtual machines for smaller application and development workloads only.
+
+### B-series
+
+The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes are ideal for workloads that do not need consistent performance such as proof of concept and very small application and development servers.
+
+Most of the [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes have a memory-to-vCore ratio of 4. The largest of these machines is the [Standard_B20ms](../../../virtual-machines/sizes-b-series-burstable.md) with 20 vCores and 80 GB of memory.
+
+This series is unique in that these VMs can **burst** during business hours, with burstable credits that vary based on machine size.
+
+When the credits are exhausted, the VM returns to the baseline machine performance.
+
+The benefit of the B-series is the compute savings you can achieve compared to VM sizes in other series, especially if you need the processing power only sparingly throughout the day.
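The credit behavior described above can be illustrated with a deliberately simplified model. The baseline percentage and usage pattern here are hypothetical; real B-series accrual rates are size-specific and published in the B-series documentation:

```python
# Simplified burstable-credit model: below the baseline a VM banks credits,
# above it the banked credits are spent; when they run out, delivered
# performance falls back to the baseline. Rates are illustrative only.

def simulate_burst(baseline_pct, hourly_usage_pct):
    credits = 0.0
    delivered = []
    for demand in hourly_usage_pct:
        if demand <= baseline_pct:
            credits += baseline_pct - demand        # bank the unused baseline
            delivered.append(demand)
        else:
            spent = min(credits, demand - baseline_pct)
            credits -= spent
            delivered.append(baseline_pct + spent)  # baseline once credits run out
    return delivered, credits

# Two quiet hours bank enough credit for one full-burst hour; the next
# high-demand hour falls back to the 40% baseline.
print(simulate_burst(40, [10, 10, 100, 100]))
```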
+
+This series supports [premium storage](../../../virtual-machines/premium-storage-performance.md), but **does not support** [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
+
+> [!NOTE]
+> The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) does not have the memory-to-vCore ratio of 8 that is recommended for SQL Server workloads. As such, consider using these virtual machines for smaller applications, web servers, and development workloads only.
+
+### Av2-series
+
+The [Av2-series](../../../virtual-machines/av2-series.md) VMs are best suited for entry-level workloads like development and test, low traffic web servers, small to medium app databases, and proof-of-concepts.
+
+Of this series, only the top three virtual machines have a good memory-to-vCore ratio of 8: the [Standard_A2m_v2](../../../virtual-machines/av2-series.md) (2 vCores and 16 GB of memory), the [Standard_A4m_v2](../../../virtual-machines/av2-series.md) (4 vCores and 32 GB of memory), and the [Standard_A8m_v2](../../../virtual-machines/av2-series.md) (8 vCores and 64 GB of memory).
+
+These virtual machines are good options for smaller development and test SQL Server machines.
+
+The 8 vCore [Standard_A8m_v2](../../../virtual-machines/av2-series.md) may also be a good option for small application and web servers.
+
+> [!NOTE]
+> The Av2 series does not support premium storage and as such, is not recommended for production SQL Server workloads even with the virtual machines that have a memory-to-vCore ratio of 8.
+
+## Storage optimized
+
+The [storage optimized VM sizes](../../../virtual-machines/sizes-storage.md) are for specific use cases. These virtual machines are specifically designed with optimized disk throughput and IO.
+
+### Lsv2-series
+
+The [Lsv2-series](../../../virtual-machines/lsv2-series.md) features high throughput, low latency, and local NVMe storage. The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable data disks.
+
+These virtual machines are strong options for big data, data warehouse, reporting, and ETL workloads. The high throughput and IOPS of the local NVMe storage make it a good fit for processing files that will be loaded into your database, and for other scenarios where the data can be recreated from the source system or other repositories such as Azure Blob storage or Azure Data Lake. [Lsv2-series](../../../virtual-machines/lsv2-series.md) VMs can also burst their disk performance for up to 30 minutes at a time.
+
+These virtual machine sizes range from 8 to 80 vCPUs, with 8 GiB of memory per vCPU and 1.92 TB of NVMe SSD for every 8 vCPUs. This means that the largest VM in this series, the [L80s_v2](../../../virtual-machines/lsv2-series.md), has 80 vCPUs and 640 GiB of memory with 10 x 1.92 TB of NVMe storage. There is a consistent memory-to-vCore ratio of 8 across all of these virtual machines.
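The scaling rule just described (8 GiB of memory per vCPU, one 1.92 TB NVMe SSD per 8 vCPUs) can be sketched as a small helper; note it is a sketch of the article's rule, and not every multiple of 8 vCPUs is necessarily an offered size:

```python
# Sketch of the Lsv2 scaling rule described above: 8 GiB of memory per vCPU,
# and one 1.92 TB (1920 GB) NVMe SSD for every 8 vCPUs.

def lsv2_specs(vcpus: int):
    """Return (memory_gib, nvme_disk_count, total_nvme_gb) for an Lsv2 size."""
    if vcpus % 8 != 0 or not 8 <= vcpus <= 80:
        raise ValueError("Lsv2 sizes scale in multiples of 8 vCPUs, from 8 to 80")
    disks = vcpus // 8
    return vcpus * 8, disks, disks * 1920

# The largest size, L80s_v2: 640 GiB of memory and 10 x 1.92 TB of NVMe.
print(lsv2_specs(80))
```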
+
+The NVMe storage is ephemeral meaning that data will be lost on these disks if you deallocate your virtual machine, or if it's moved to a different host for service healing.
+
+The Lsv2 and Ls series support [premium storage](../../../virtual-machines/premium-storage-performance.md), but not premium storage caching. The creation of a local cache to increase IOPS is not supported.
+
+> [!WARNING]
+> Storing your data files on the ephemeral NVMe storage could result in data loss when the VM is deallocated.
+
+## Constrained vCores
+
+High performing SQL Server workloads often need larger amounts of memory, I/O, and throughput without the higher vCore counts.
+
+Most OLTP workloads are application databases driven by large numbers of smaller transactions. With OLTP workloads, only a small amount of the data is read or modified, but the volumes of transactions driven by user counts are much higher. It is important to have the SQL Server memory available to cache plans, store recently accessed data for performance, and ensure physical reads can be read into memory quickly.
+
+These OLTP environments need higher amounts of memory, fast storage, and the I/O bandwidth necessary to perform optimally.
+
+In order to maintain this level of performance without the higher SQL Server licensing costs, Azure offers VM sizes with [constrained vCPU counts](../../../virtual-machines/constrained-vcpu.md).
+
+This helps control licensing costs by reducing the available vCores while maintaining the same memory, storage, and I/O bandwidth of the parent virtual machine.
+
+The vCPU count can be constrained to one-half or one-quarter of the original VM size. Reducing the vCores available to the virtual machine achieves higher memory-to-vCore ratios, but the compute cost remains the same.
+
+These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier to identify.
+
+For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 32 SQL Server vCores while providing the memory, I/O, and throughput of the [M64ms](../../../virtual-machines/m-series.md), and the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 16 vCores. While the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) has a quarter of the SQL Server licensing cost of the M64ms, the compute cost of the virtual machine remains the same.
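The licensing arithmetic above reduces to a simple fraction, shown here as an illustrative sketch (actual pricing varies by region and agreement):

```python
# Constrained-vCore arithmetic described above: compute cost tracks the
# parent VM, while SQL Server licensing covers only the active vCores.

def sql_license_fraction(active_vcores: int, parent_vcores: int) -> float:
    """Fraction of the parent VM's SQL Server licensing cost that remains."""
    return active_vcores / parent_vcores

print(sql_license_fraction(32, 64))  # M64-32ms: half the licensing of the M64ms
print(sql_license_fraction(16, 64))  # M64-16ms: a quarter of the licensing
```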
+
+> [!NOTE]
+> - Medium to large data warehouse workloads may still benefit from [constrained vCore VMs](../../../virtual-machines/constrained-vcpu.md), but data warehouse workloads are commonly characterized by fewer users and processes addressing larger amounts of data through query plans that run in parallel.
+> - The compute cost, which includes operating system licensing, will remain the same as the parent virtual machine.
+
+## Next steps
+
+To learn more, see the other articles in this series:
+- [Quick checklist](performance-guidelines-best-practices-checklist.md)
+- [Storage](performance-guidelines-best-practices-storage.md)
+- [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
+
+For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
+
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
azure-sql Performance Guidelines Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices.md
- Title: Performance guidelines for SQL Server in Azure | Microsoft Docs
-description: Provides guidelines for optimizing SQL Server performance in Microsoft Azure Virtual Machines.
--
-tags: azure-service-management
---- Previously updated : 11/09/2020---
-# Performance guidelines for SQL Server on Azure Virtual Machines
-
-This article provides guidance for optimizing SQL Server performance in Microsoft Azure Virtual Machines.
-
-## Overview
-
-While running SQL Server on Azure Virtual Machines, we recommend that you continue using the same database performance tuning options that are applicable to SQL Server in on-premises server environments. However, the performance of a relational database in a public cloud depends on many factors such as the size of a virtual machine, and the configuration of the data disks.
-
-[SQL Server images provisioned in the Azure portal](sql-vm-create-portal-quickstart.md) follow general storage [configuration best practices](storage-configuration.md). After provisioning, consider applying other optimizations discussed in this article. Base your choices on your workload and verify through testing.
-
-> [!TIP]
-> There is typically a trade-off between optimizing for costs and optimizing for performance. This article is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every optimization listed below. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
-
-## Quick checklist
-
-The following is a quick checklist for optimal performance of SQL Server on Azure Virtual Machines:
-
-| Area | Optimizations |
-| | |
-| [VM size](#vm-size-guidance) | - Use VM sizes with 4 or more vCPU like the [Standard_M8-4ms](../../../virtual-machines/m-series.md), the [E4ds_v4](../../../virtual-machines/edv4-edsv4-series.md#edv4-series), or the [DS12_v2](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) or higher. <br/><br/> - Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads. <br/><br/> - The [DSv2 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md), [Edsv4](../../../virtual-machines/edv4-edsv4-series.md) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. Both M series VMs offer the highest memory-to-vCore ratio required for mission critical workloads and is also ideal for data warehouse workloads. <br/><br/> - A higher memory-to-vCore ratio may be required for mission critical and data warehouse workloads. <br/><br/> - Leverage the Azure Virtual Machine marketplace images as the SQL Server settings and storage options are configured for optimal SQL Server performance. <br/><br/> - Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.|
-| [Storage](#storage-guidance) | - For detailed testing of SQL Server performance on Azure Virtual Machines with TPC-E and TPC_C benchmarks, refer to the blog [Optimize OLTP performance](https://techcommunity.microsoft.com/t5/SQL-Server/Optimize-OLTP-Performance-with-SQL-Server-on-Azure-VM/ba-p/916794). <br/><br/> - Use [premium SSDs](https://techcommunity.microsoft.com/t5/SQL-Server/Optimize-OLTP-Performance-with-SQL-Server-on-Azure-VM/ba-p/916794) for the best price/performance advantages. Configure [Read only cache](../../../virtual-machines/premium-storage-performance.md#disk-caching) for data files and no cache for the log file. <br/><br/> - Use [Ultra Disks](../../../virtual-machines/disks-types.md#ultra-disk) if less than 1-ms storage latencies are required by the workload. See [migrate to ultra disk](storage-migrate-to-ultradisk.md) to learn more. <br/><br/> - Collect the storage latency requirements for SQL Server data, log, and Temp DB files by [monitoring the application](../../../virtual-machines/premium-storage-performance.md#application-performance-requirements-checklist) before choosing the disk type. If < 1-ms storage latencies are required, then use Ultra Disks, otherwise use premium SSD. If low latencies are only required for the log file and not for data files, then [provision the Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) at required IOPS and throughput levels only for the log File. <br/><br/> - Standard storage is only recommended for development and test purposes or for backup files and should not be used for production workloads. <br/><br/> - Keep the [storage account](../../../storage/common/storage-account-create.md) and SQL Server VM in the same region.<br/><br/> - Disable Azure [geo-redundant storage](../../../storage/common/storage-redundancy.md) (geo-replication) on the storage account. |
-| [Disks](#disks-guidance) | - Use a minimum of 2 [premium SSD disks](../../../virtual-machines/disks-types.md#premium-ssd) (1 for log file and 1 for data files). <br/><br/> - For workloads requiring < 1-ms IO latencies, enable write accelerator for M series and consider using Ultra SSD disks for Es and DS series. <br/><br/> - Enable [read only caching](../../../virtual-machines/premium-storage-performance.md#disk-caching) on the disk(s) hosting the data files.<br/><br/> - Add an additional 20% premium IOPS/throughput capacity than your workload requires when [configuring storage for SQL Server data, log, and TempDB files](storage-configuration.md) <br/><br/> - Avoid using operating system or temporary disks for database storage or logging.<br/><br/> - Do not enable caching on disk(s) hosting the log file. **Important**: Stop the SQL Server service when changing the cache settings for an Azure Virtual Machines disk.<br/><br/> - Stripe multiple Azure data disks to get increased storage throughput.<br/><br/> - Format with documented allocation sizes. <br/><br/> - Place TempDB on the local SSD `D:\` drive for mission critical SQL Server workloads (after choosing correct VM size). If you create the VM from the Azure portal or Azure quickstart templates and [place Temp DB on the Local Disk](https://techcommunity.microsoft.com/t5/SQL-Server/Announcing-Performance-Optimized-Storage-Configuration-for-SQL/ba-p/891583), then you do not need any further action; for all other cases follow the steps in the blog for [Using SSDs to store TempDB](https://cloudblogs.microsoft.com/sqlserver/2014/09/25/using-ssds-in-azure-vms-to-store-sql-server-TempDB-and-buffer-pool-extensions/) to prevent failures after restarts. 
-If the capacity of the local drive is not enough for your Temp DB size, then place Temp DB on a storage pool [striped](../../../virtual-machines/premium-storage-performance.md) on premium SSD disks with [read-only caching](../../../virtual-machines/premium-storage-performance.md#disk-caching). |
-| [I/O](#io-guidance) |- Enable database page compression.<br/><br/> - Enable instant file initialization for data files.<br/><br/> - Limit autogrowth of the database.<br/><br/> - Disable autoshrink of the database.<br/><br/> - Move all databases to data disks, including system databases.<br/><br/> - Move SQL Server error log and trace file directories to data disks.<br/><br/> - Configure default backup and database file locations.<br/><br/> - [Enable locked pages in memory](/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows).<br/><br/> - Evaluate and apply the [latest cumulative updates](/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server) for the installed version of SQL Server. |
-| [Feature-specific](#feature-specific-guidance) | - Back up directly to Azure Blob storage.<br/><br/>- Use [file snapshot backups](/sql/relational-databases/backup-restore/file-snapshot-backups-for-database-files-in-azure) for databases larger than 12 TB. <br/><br/>- Use multiple Temp DB files, 1 file per core, up to 8 files.<br/><br/>- Set max server memory at 90% or up to 50 GB left for the Operating System. <br/><br/>- Enable soft NUMA. |
--
-<br/>
-For more information on *how* and *why* to make these optimizations, please review the details and guidance provided in the following sections.
-<br/><br/>
-
-## Getting started
-
-If you are creating a new SQL Server on Azure VM and are not migrating a current source system, create your new SQL Server VM based on your vendor requirements. The vendor requirements for a SQL Server VM are the same as what you would deploy on-premises.
-
-If you are creating a new SQL Server VM with a new application built for the cloud, you can easily size your SQL Server VM as your data and usage requirements evolve.
-Start the development environments with the lower-tier D-Series, B-Series, or Av2-series and grow your environment over time.
-
-The recommended minimum for a production OLTP environment is 4 vCore, 32 GB of memory, and a memory-to-vCore ratio of 8. For new environments, start with 4 vCore machines and scale to 8, 16, 32 vCores or more when your data and compute requirements change. For OLTP throughput, target SQL Server VMs that have 5000 IOPS for every vCore.
-
-Use the SQL Server VM marketplace images with the storage configuration in the portal. This will make it easier to properly create the storage pools necessary to get the size, IOPS, and throughput necessary for your workloads. It is important to choose SQL Server VMs that support premium storage and premium storage caching. See the [storage](#storage-guidance) section to learn more.
-
-SQL Server data warehouse and mission critical environments will often need to scale beyond the 8 memory-to-vCore ratio. For medium environments, you may want to choose a 16 core-to-memory ratio, and a 32 core-to-memory ratio for larger data warehouse environments.
-
-SQL Server data warehouse environments often benefit from the parallel processing of larger machines. For this reason, the M-series and the Mv2-series are strong options for larger data warehouse environments.
-
-## VM size guidance
-
-Use the vCPU and memory configuration from your source machine as a baseline for migrating a current on-premises SQL Server database to SQL Server on Azure VMs. Bring your core license to Azure to take advantage of the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) and save on SQL Server licensing costs.
-
-**Microsoft recommends a memory-to-vCore ratio of 8 as a starting point for production SQL Server workloads.** Smaller ratios are acceptable for non-production workloads.
-
-Choose a [memory optimized](../../../virtual-machines/sizes-memory.md), [general purpose](../../../virtual-machines/sizes-general.md), [storage optimized](../../../virtual-machines/sizes-storage.md), or [constrained vCore](../../../virtual-machines/constrained-vcpu.md) virtual machine size that is most optimal for SQL Server performance based on your workload (OLTP or data warehouse).
-
-### Memory optimized
-
-The [memory optimized virtual machine sizes](../../../virtual-machines/sizes-memory.md) are a primary target for SQL Server VMs and the recommended choice by Microsoft. The memory optimized virtual machines offer stronger memory-to-CPU ratios and medium-to-large cache options.
-
-#### M and Mv2 series
-
-The [M-series](../../../virtual-machines/m-series.md) offers vCore counts and memory for some of the largest SQL Server workloads.
-
-The [Mv2-series](../../../virtual-machines/mv2-series.md) has the highest vCore counts and memory and is recommended for mission critical and data warehouse workloads. Mv2-series instances are memory optimized VM sizes providing unparalleled computational performance to support large in-memory databases and workloads with a high memory-to-CPU ratio that is perfect for relational database servers, large caches, and in-memory analytics.
-
-The [Standard_M64ms](../../../virtual-machines/m-series.md) has a 28 memory-to-vCore ratio for example.
-
-Some of the features of the M and Mv2-series attractive for SQL Server performance include [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching) support, [ultra-disk](../../../virtual-machines/disks-enable-ultra-ssd.md) support, and [write acceleration](../../../virtual-machines/how-to-enable-write-accelerator.md).
-
-#### Edsv4-series
-
-The [Edsv4-series](../../../virtual-machines/edv4-edsv4-series.md) is designed for memory-intensive applications. These VMs have a large local storage SSD capacity, strong local disk IOPS, up to 504 GiB of RAM, and improved compute compared to the previous Ev3/Esv3 sizes with Gen2 VMs. There is a nearly consistent memory-to-vCore ratio of 8 across these virtual machines, which is ideal for standard SQL Server workloads.
-
-This VM series is ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
-
-The Edsv4-series virtual machines support [premium storage](../../../virtual-machines/premium-storage-performance.md), and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
-
-#### DSv2-series 11-15
-
-The [DSv2-series 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) has the same memory and disk configurations as the previous D-series. This series has a consistent memory-to-CPU ratio of 7 across all virtual machines.
-
-The [DSv2-series 11-15](../../../virtual-machines/dv2-dsv2-series-memory.md#dsv2-series-11-15) supports [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching), which is strongly recommended for optimal performance.
-
-### General Purpose
-
-The [general purpose virtual machine sizes](../../../virtual-machines/sizes-general.md) are designed to provide balanced memory-to-vCore ratios for smaller entry level workloads such as development and test, web servers, and smaller database servers.
-
-Because of the smaller memory-to-vCore ratios with the general purpose virtual machines, it is important to carefully monitor memory-based performance counters to ensure SQL Server is able to get the buffer cache memory it needs. See [memory performance baseline](#memory) for more information.
-
-Since the starting recommendation for production workloads is a memory-to-vCore ratio of 8, the minimum recommended configuration for a general purpose VM running SQL Server is 4 vCPU and 32 GB of memory.
-
-#### Ddsv4 series
-
-The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) offers a fair combination of vCPU, memory, and temporary disk but with smaller memory-to-vCore support.
-
-The Ddsv4 VMs include lower latency and higher-speed local storage.
-
-These machines are ideal for side-by-side SQL and app deployments that require fast access to temp storage and departmental relational databases. There is a standard memory-to-vCore ratio of 4 across all of the virtual machines in this series.
-
-For this reason, it is recommended to leverage the D8ds_v4 as the starter virtual machine in this series, which has 8 vCores and 32 GBs of memory. The largest machine is the D64ds_v4, which has 64 vCores and 256 GBs of memory.
-
-The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) virtual machines support [premium storage](../../../virtual-machines/premium-storage-performance.md) and [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
-
-> [!NOTE]
-> The [Ddsv4-series](../../../virtual-machines/ddv4-ddsv4-series.md) does not have the memory-to-vCore ratio of 8 that is recommended for SQL Server workloads. As such, considering using these virtual machines for smaller application and development workloads only.
-
-#### B-series
-
-The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes are ideal for workloads that do not need consistent performance such as proof of concept and very small application and development servers.
-
-Most of the [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes have a memory-to-vCore ratio of 4. The largest of these machines is the [Standard_B20ms](../../../virtual-machines/sizes-b-series-burstable.md) with 20 vCores and 80 GB of memory.
-
-This series is unique as the apps have the ability to **burst** during business hours with burstable credits varying based on machine size.
-
-When the credits are exhausted, the VM returns to the baseline machine performance.
-
-The benefit of the B-series is the compute savings you could achieve compared to the other VM sizes in other series especially if you need the processing power sparingly throughout the day.
-
-This series supports [premium storage](../../../virtual-machines/premium-storage-performance.md), but **does not support** [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
-
-> [!NOTE]
-> The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) does not have the memory-to-vCore ratio of 8 that is recommended for SQL Server workloads. As such, consider using these virtual machines for smaller applications, web servers, and development workloads only.
-
-#### Av2-series
-
-The [Av2-series](../../../virtual-machines/av2-series.md) VMs are best suited for entry-level workloads like development and test, low traffic web servers, small to medium app databases, and proof-of-concepts.
-
-Only the [Standard_A2m_v2](../../../virtual-machines/av2-series.md) (2 vCores and 16GBs of memory), [Standard_A4m_v2](../../../virtual-machines/av2-series.md) (4 vCores and 32GBs of memory), and the [Standard_A8m_v2](../../../virtual-machines/av2-series.md) (8 vCores and 64GBs of memory) have a good memory-to-vCore ratio of 8 for these top three virtual machines.
-
-These virtual machines are both good options for smaller development and test SQL Server machines.
-
-The 8 vCore [Standard_A8m_v2](../../../virtual-machines/av2-series.md) may also be a good option for small application and web servers.
-
-> [!NOTE]
-> The Av2 series does not support premium storage and as such, is not recommended for production SQL Server workloads even with the virtual machines that have a memory-to-vCore ratio of 8.
-
-### Storage optimized
-
-The [storage optimized VM sizes](../../../virtual-machines/sizes-storage.md) are for specific use cases. These virtual machines are specifically designed with optimized disk throughput and IO. This virtual machine series is intended for big data scenarios, data warehousing, and large transactional databases.
-
-#### Lsv2-series
-
-The [Lsv2-series](../../../virtual-machines/lsv2-series.md) features high throughput, low latency, and local NVMe storage. The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable data disks.
-
-These virtual machines are strong options for big data, data warehouse, reporting, and ETL workloads. The high throughput and IOPS of the local NVMe storage make them a good fit for processing files that will be loaded into your database, and for other scenarios where the source data can be recreated from the source system or other repositories such as Azure Blob storage or Azure Data Lake. [Lsv2-series](../../../virtual-machines/lsv2-series.md) VMs can also burst their disk performance for up to 30 minutes at a time.
-
-These virtual machine sizes range from 8 to 80 vCPUs with 8 GiB of memory per vCPU, and for every 8 vCPUs there is 1.92 TB of NVMe SSD. This means the largest VM of this series, the [L80s_v2](../../../virtual-machines/lsv2-series.md), has 80 vCPUs and 640 GiB of memory with 10x1.92TB of NVMe storage. There is a consistent memory-to-vCore ratio of 8 across all of these virtual machines.
-
-The NVMe storage is ephemeral meaning that data will be lost on these disks if you restart your virtual machine.
-
-The Lsv2 and Ls series support [premium storage](../../../virtual-machines/premium-storage-performance.md), but not premium storage caching. The creation of a local cache to increase IOPs is not supported.
-
-> [!WARNING]
-> Storing your data files on the ephemeral NVMe storage could result in data loss when the VM is deallocated.
-
-### Constrained vCores
-
-High performing SQL Server workloads often need larger amounts of memory, IO, and throughput without the higher vCore counts.
-
-Most OLTP workloads are application databases driven by large numbers of smaller transactions. With OLTP workloads, only a small amount of the data is read or modified, but the volumes of transactions driven by user counts are much higher. It is important to have the SQL Server memory available to cache plans, store recently accessed data for performance, and ensure physical reads can be read into memory quickly.
-
-These OLTP environments need higher amounts of memory, fast storage, and the I/O bandwidth necessary to perform optimally.
-
-In order to maintain this level of performance without the higher SQL Server licensing costs, Azure offers VM sizes with [constrained vCPU counts](../../../virtual-machines/constrained-vcpu.md).
-
-This helps control licensing costs by reducing the available vCores while maintaining the same memory, storage, and I/O bandwidth of the parent virtual machine.
-
-The vCPU count can be constrained to one-half or one-quarter of the original VM size. Reducing the vCores available to the virtual machine achieves higher memory-to-vCore ratios.
-
-These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier to identify.
-
-For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 32 SQL Server vCores with the memory, IO, and throughput of the [M64ms](../../../virtual-machines/m-series.md), and the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 16 vCores. While the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) has a quarter of the SQL Server licensing cost of the M64ms, the compute cost of the virtual machine is the same.
-
-> [!NOTE]
-> - Medium to large data warehouse workloads may still benefit from [constrained vCore VMs](../../../virtual-machines/constrained-vcpu.md), but data warehouse workloads are commonly characterized by fewer users and processes addressing larger amounts of data through query plans that run in parallel.
-> - The compute cost, which includes operating system licensing, will remain the same as the parent virtual machine.
-
-## Storage guidance
-
-For detailed testing of SQL Server performance on Azure Virtual Machines with TPC-E and TPC-C benchmarks, refer to the blog [Optimize OLTP performance](https://techcommunity.microsoft.com/t5/SQL-Server/Optimize-OLTP-Performance-with-SQL-Server-on-Azure-VM/ba-p/916794).
-
-Azure blob cache with premium SSDs is recommended for all production workloads.
-
-> [!WARNING]
-> Standard HDDs and SSDs have varying latencies and bandwidth and are only recommended for dev/test workloads. Production workloads should use premium SSDs.
-
-In addition, we recommend that you create your Azure storage account in the same data center as your SQL Server virtual machines to reduce transfer delays. When creating a storage account, disable geo-replication as consistent write order across multiple disks is not guaranteed. Instead, consider configuring a SQL Server disaster recovery technology between two Azure data centers. For more information, see [High Availability and Disaster Recovery for SQL Server on Azure Virtual Machines](business-continuity-high-availability-disaster-recovery-hadr-overview.md).
-
-## Disks guidance
-
-There are three main disk types on Azure virtual machines:
-
-* **OS disk**: When you create an Azure virtual machine, the platform will attach at least one disk (labeled as the **C** drive) to the VM for your operating system disk. This disk is a VHD stored as a page blob in storage.
-* **Temporary disk**: Azure virtual machines contain another disk called the temporary disk (labeled as the **D**: drive). This is a disk on the node that can be used for scratch space.
-* **Data disks**: You can also attach additional disks to your virtual machine as data disks, and these will be stored in storage as page blobs.
-
-The following sections describe recommendations for using these different disks.
-
-### Operating system disk
-
-An operating system disk is a VHD that you can boot and mount as a running version of an operating system and is labeled as the **C** drive.
-
-Default caching policy on the operating system disk is **Read/Write**. For performance sensitive applications, we recommend that you use data disks instead of the operating system disk. See the section on Data Disks below.
-
-### Temporary disk
-
-The temporary storage drive, labeled as the **D** drive, is not persisted to Azure Blob storage. Do not store your user database files or user transaction log files on the **D**: drive.
-
-Place TempDB on the local SSD `D:\` drive for mission critical SQL Server workloads (after choosing the correct VM size). If you create the VM from the Azure portal or Azure quickstart templates and [place TempDB on the local disk](https://techcommunity.microsoft.com/t5/SQL-Server/Announcing-Performance-Optimized-Storage-Configuration-for-SQL/ba-p/891583), then you do not need any further action; for all other cases follow the steps in the blog for [Using SSDs to store TempDB](https://cloudblogs.microsoft.com/sqlserver/2014/09/25/using-ssds-in-azure-vms-to-store-sql-server-TempDB-and-buffer-pool-extensions/) to prevent failures after restarts. If the capacity of the local drive is not enough for your TempDB size, then place TempDB on a storage pool [striped](../../../virtual-machines/premium-storage-performance.md) across premium SSD disks with [read-only caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
-
-For VMs that support premium SSDs, you can also store TempDB on a disk that supports premium SSDs with read caching enabled.
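-
-The TempDB file move can be sketched in T-SQL driven from PowerShell. This is an illustrative sketch, not the exact script from the blog post: the `Invoke-Sqlcmd` cmdlet (from the SqlServer module) and the `D:\SQLTemp` folder are assumptions, and the folder must be re-created at startup because the `D:` drive is wiped when the VM is deallocated.
-
-```powershell
-# Point the TempDB files at the local SSD; the new paths take effect after a restart.
-Invoke-Sqlcmd -ServerInstance "localhost" -Query "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTemp\tempdb.mdf');"
-Invoke-Sqlcmd -ServerInstance "localhost" -Query "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTemp\templog.ldf');"
-# Restart SQL Server so the new file locations are used.
-Restart-Service -Name 'MSSQLSERVER' -Force
-```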
--
-### Data disks
-
-* **Use premium SSD disks for data and log files**: If you are not using disk striping, use two premium SSD disks where one disk contains the log file and the other contains the data. Each premium SSD provides a number of IOPS and bandwidth (MB/s) depending on its size, as depicted in the article, [Select a disk type](../../../virtual-machines/disks-types.md). If you are using a disk striping technique, such as Storage Spaces, you achieve optimal performance by having two pools, one for the log file(s) and the other for the data files. However, if you plan to use SQL Server failover cluster instances (FCI), you must configure one pool, or utilize [premium file shares](failover-cluster-instance-premium-file-share-manually-configure.md) instead.
-
- > [!TIP]
- > - For test results on various disk and workload configurations, see the following blog post: [Storage Configuration Guidelines for SQL Server on Azure Virtual Machines](/archive/blogs/sqlserverstorageengine/storage-configuration-guidelines-for-sql-server-on-azure-vm).
- > - For mission critical performance for SQL Servers that need ~ 50,000 IOPS, consider replacing 10 -P30 disks with an Ultra SSD. For more information, see the following blog post: [Mission critical performance with Ultra SSD](https://azure.microsoft.com/blog/mission-critical-performance-with-ultra-ssd-for-sql-server-on-azure-vm/).
-
- > [!NOTE]
- > When you provision a SQL Server VM in the portal, you have the option of editing your storage configuration. Depending on your configuration, Azure configures one or more disks. Multiple disks are combined into a single storage pool with striping. Both the data and log files reside together in this configuration. For more information, see [Storage configuration for SQL Server VMs](storage-configuration.md).
-
-* **Disk striping**: For more throughput, you can add additional data disks and use disk striping. To determine the number of data disks, you need to analyze the number of IOPS and bandwidth required for your log file(s), and for your data and TempDB file(s). Notice that different VM sizes have different limits on the number of IOPs and bandwidth supported, see the tables on IOPS per [VM size](../../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). Use the following guidelines:
-
- * For Windows 8/Windows Server 2012 or later, use [Storage Spaces](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831739(v=ws.11)) with the following guidelines:
-
- 1. Set the interleave (stripe size) to 64 KB (65,536 bytes) for OLTP workloads and 256 KB (262,144 bytes) for data warehousing workloads to avoid performance impact due to partition misalignment. This must be set with PowerShell.
- 2. Set column count = number of physical disks. Use PowerShell when configuring more than 8 disks (not Server Manager UI).
-
-    For example, the following PowerShell creates a new storage pool with an interleave size of 64 KB and the number of columns equal to the number of physical disks in the storage pool:
-
- ```powershell
- $PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}
-
-    New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces*" `
-        -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
-        -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
-        -UseMaximumSize | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter `
-        -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
-        -AllocationUnitSize 65536 -Confirm:$false
- ```
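-
-    After the pool is created, the stripe settings can be verified against the guidelines above (a quick check, assuming the friendly name used in the script):
-
-    ```powershell
-    # Interleave should be 65536 (64 KB) and NumberOfColumns should equal the disk count
-    Get-VirtualDisk -FriendlyName "DataFiles" |
-        Select-Object FriendlyName, Interleave, NumberOfColumns
-    ```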
-
- * For Windows 2008 R2 or earlier, you can use dynamic disks (OS striped volumes) and the stripe size is always 64 KB. This option is deprecated as of Windows 8/Windows Server 2012. For information, see the support statement at [Virtual Disk Service is transitioning to Windows Storage Management API](/windows/win32/w8cookbook/vds-is-transitioning-to-wmiv2-based-windows-storage-management-api).
-
- * If you are using [Storage Spaces Direct (S2D)](/windows-server/storage/storage-spaces/storage-spaces-direct-in-vm) with [SQL Server Failover Cluster Instances](failover-cluster-instance-storage-spaces-direct-manually-configure.md), you must configure a single pool. Although different volumes can be created on that single pool, they will all share the same characteristics, such as the same caching policy.
-
- * Determine the number of disks associated with your storage pool based on your load expectations. Keep in mind that different VM sizes allow different numbers of attached data disks. For more information, see [Sizes for virtual machines](../../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
-
- * If you are not using premium SSDs (dev/test scenarios), the recommendation is to add the maximum number of data disks supported by your [VM size](../../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and use disk striping.
-
-* **Caching policy**: Note the following recommendations for caching policy depending on your storage configuration.
-
- * If you are using separate disks for data and log files, enable read caching on the data disks hosting your data files and TempDB data files. This can result in a significant performance benefit. Do not enable caching on the disk holding the log file as this causes a minor decrease in performance.
-
- * If you are using disk striping in a single storage pool, most workloads will benefit from read caching. If you have separate storage pools for the log and data files, enable read caching only on the storage pool for the data files. In certain heavy write workloads, better performance might be achieved with no caching. This can only be determined through testing.
-
- * The previous recommendations apply to premium SSDs. If you are not using premium SSDs, do not enable any caching on any data disks.
-
- * For instructions on configuring disk caching, see the following articles. For the classic (ASM) deployment model see: [Set-AzureOSDisk](/previous-versions/azure/jj152847(v=azure.100)) and [Set-AzureDataDisk](/previous-versions/azure/jj152851(v=azure.100)). For the Azure Resource Manager deployment model, see: [Set-AzOSDisk](/powershell/module/az.compute/set-azvmosdisk) and [Set-AzVMDataDisk](/powershell/module/az.compute/set-azvmdatadisk).
-
- > [!WARNING]
- > Stop the SQL Server service when changing the cache setting of Azure Virtual Machines disks to avoid the possibility of any database corruption.
-
-* **NTFS allocation unit size**: When formatting the data disk, it is recommended that you use a 64-KB allocation unit size for data and log files as well as TempDB. If TempDB is placed on the temporary disk (D:\ drive) the performance gained by leveraging this drive outweighs the need for a 64-KB allocation unit size.
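-
-  For example, a data volume can be formatted with the 64-KB allocation unit size in PowerShell (the drive letter and label here are illustrative):
-
-  ```powershell
-  # Format the data volume with a 64-KB NTFS allocation unit size
-  Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536 `
-      -NewFileSystemLabel "SQLData" -Confirm:$false
-  ```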
-
-* **Disk management best practices**: When removing a data disk or changing its cache type, stop the SQL Server service during the change. When the caching settings are changed on the OS disk, Azure stops the VM, changes the cache type, and restarts the VM. When the cache settings of a data disk are changed, the VM is not stopped, but the data disk is detached from the VM during the change and then reattached.
-
- > [!WARNING]
- > Failure to stop the SQL Server service during these operations can cause database corruption.
--
-## I/O guidance
-
-* The best results with premium SSDs are achieved when you parallelize your application and requests. Premium SSDs are designed for scenarios where the IO queue depth is greater than 1, so you will see little or no performance gains for single-threaded serial requests (even if they are storage intensive). For example, this could impact the single-threaded test results of performance analysis tools, such as SQLIO.
-
-* Consider using [database page compression](/sql/relational-databases/data-compression/data-compression) as it can help improve performance of I/O intensive workloads. However, the data compression might increase the CPU consumption on the database server.
-
-* Consider enabling instant file initialization to reduce the time that is required for initial file allocation. To take advantage of instant file initialization, grant the SQL Server (MSSQLSERVER) service account the SE_MANAGE_VOLUME_NAME permission by adding it to the **Perform Volume Maintenance Tasks** security policy. If you are using a SQL Server platform image for Azure, the default service account (NT Service\MSSQLSERVER) isn't added to the **Perform Volume Maintenance Tasks** security policy. In other words, instant file initialization is not enabled in a SQL Server Azure platform image. After adding the SQL Server service account to the **Perform Volume Maintenance Tasks** security policy, restart the SQL Server service. There could be security considerations for using this feature. For more information, see [Database File Initialization](/sql/relational-databases/databases/database-instant-file-initialization).
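-
-  On newer versions you can confirm the setting from SQL Server itself; the `instant_file_initialization_enabled` column of `sys.dm_server_services` is available starting with SQL Server 2016 SP1 (this sketch assumes the `Invoke-Sqlcmd` cmdlet from the SqlServer module):
-
-  ```powershell
-  # Shows Y for the Database Engine service when instant file initialization is enabled
-  Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT servicename, instant_file_initialization_enabled FROM sys.dm_server_services;"
-  ```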
-
-* Be aware that **autogrow** is considered to be merely a contingency for unexpected growth. Do not manage your data and log growth on a day-to-day basis with autogrow. If autogrow is used, pre-grow the file using the Size switch.
-
-* Make sure **autoshrink** is disabled to avoid unnecessary overhead that can negatively affect performance.
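-
-  For example, a pre-grown file with a fixed growth increment, and autoshrink turned off, can be configured in T-SQL (the database and file names, sizes, and increments here are placeholders):
-
-  ```powershell
-  # Pre-size the data file and use a fixed growth increment rather than a percentage
-  Invoke-Sqlcmd -ServerInstance "localhost" -Query "ALTER DATABASE [MyDB] MODIFY FILE (NAME = MyDB_data, SIZE = 51200MB, FILEGROWTH = 1024MB);"
-  # Make sure autoshrink is off
-  Invoke-Sqlcmd -ServerInstance "localhost" -Query "ALTER DATABASE [MyDB] SET AUTO_SHRINK OFF;"
-  ```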
-
-* Move all databases to data disks, including system databases. For more information, see [Move System Databases](/sql/relational-databases/databases/move-system-databases).
-
-* Move SQL Server error log and trace file directories to data disks. This can be done in SQL Server Configuration Manager by right-clicking your SQL Server instance and selecting properties. The error log and trace file settings can be changed in the **Startup Parameters** tab. The Dump Directory is specified in the **Advanced** tab. The following screenshot shows where to look for the error log startup parameter.
-
- ![SQL ErrorLog Screenshot](./media/performance-guidelines-best-practices/sql_server_error_log_location.png)
-
-* Set up default backup and database file locations. Use the recommendations in this article, and make the changes in the Server properties window. For instructions, see [View or Change the Default Locations for Data and Log Files (SQL Server Management Studio)](/sql/database-engine/configure-windows/view-or-change-the-default-locations-for-data-and-log-files). The following screenshot demonstrates where to make these changes.
-
- ![SQL Data Log and Backup files](./media/performance-guidelines-best-practices/sql_server_default_data_log_backup_locations.png)
-* Enable locked pages to reduce IO and any paging activities. For more information, see [Enable the Lock Pages in Memory Option (Windows)](/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows).
-
-* If you are running SQL Server 2012, install Service Pack 1 Cumulative Update 10. This update contains the fix for poor I/O performance when you execute a SELECT INTO temporary table statement in SQL Server 2012. For information, see this [knowledge base article](https://support.microsoft.com/kb/2958012).
-
-* Consider compressing any data files when transferring in/out of Azure.
-
-## Feature-specific guidance
-
-Some deployments may achieve additional performance benefits using more advanced configuration techniques. The following list highlights some SQL Server features that can help you to achieve better performance:
-
-### Back up to Azure Storage
-When performing backups for SQL Server running in Azure Virtual Machines, you can use [SQL Server Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url). This feature is available starting with SQL Server 2012 SP1 CU2 and is recommended over backing up to the attached data disks. When you back up to or restore from Azure Storage, follow the recommendations provided at [SQL Server Backup to URL Best Practices and Troubleshooting and Restoring from Backups Stored in Azure Storage](/sql/relational-databases/backup-restore/sql-server-backup-to-url-best-practices-and-troubleshooting). You can also automate these backups using [Automated Backup for SQL Server on Azure Virtual Machines](../../../azure-sql/virtual-machines/windows/automated-backup-sql-2014.md).
-
-Prior to SQL Server 2012, you can use [SQL Server Backup to Azure Tool](https://www.microsoft.com/download/details.aspx?id=40740). This tool can help to increase backup throughput using multiple backup stripe targets.
-
-### SQL Server Data Files in Azure
-
-The [SQL Server Data Files in Azure](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure) feature is available starting with SQL Server 2014. Running SQL Server with data files in Azure has performance characteristics comparable to using Azure data disks.
-
-### Failover cluster instance and Storage Spaces
-
-If you are using Storage Spaces, when adding nodes to the cluster on the **Confirmation** page, clear the check box labeled **Add all eligible storage to the cluster**.
-
-![Uncheck eligible storage](./media/performance-guidelines-best-practices/uncheck-eligible-cluster-storage.png)
-
-If you are using Storage Spaces and do not uncheck **Add all eligible storage to the cluster**, Windows detaches the virtual disks during the clustering process. As a result, they do not appear in Disk Manager or Explorer until the storage spaces are removed from the cluster and reattached using PowerShell. Storage Spaces groups multiple disks into storage pools. For more information, see [Storage Spaces](/windows-server/storage/storage-spaces/overview).
-
-## Multiple instances
-
-Consider the following best practices when deploying multiple SQL Server instances to a single virtual machine:
-
-- Set the max server memory for each SQL Server instance, ensuring there is memory left over for the operating system. Be sure to update the memory restrictions for the SQL Server instances if you change how much memory is allocated to the virtual machine.
-- Have separate LUNs for data, logs, and TempDB since they all have different workload patterns and you do not want them impacting each other.
-- Thoroughly test your environment under heavy production-like workloads to ensure it can handle peak workload capacity within your application SLAs.
-
-Signs of overloaded systems can include, but are not limited to, worker thread exhaustion, slow response times, and/or stalled dispatcher system memory.
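-
-As a sketch, max server memory can be capped per instance with `sp_configure`; the 8192-MB value and instance name are illustrations only, so divide the VM's memory across your instances and leave headroom for the operating system:
-
-```powershell
-# Cap this instance at 8 GB of SQL Server memory; repeat per instance with its own value
-Invoke-Sqlcmd -ServerInstance "localhost\INSTANCE1" -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;"
-```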
---
-## Collect performance baseline
-
-For a more prescriptive approach, gather performance counters using PerfMon/LogMan and capture SQL Server wait statistics to better understand general pressures and potential bottlenecks of the source environment.
-
-Start by collecting the CPU, memory, [IOPS](../../../virtual-machines/premium-storage-performance.md#iops), [throughput](../../../virtual-machines/premium-storage-performance.md#throughput), and [latency](../../../virtual-machines/premium-storage-performance.md#latency) of the source workload at peak times following the [application performance checklist](../../../virtual-machines/premium-storage-performance.md#application-performance-requirements-checklist).
-
-Gather data during peak hours such as workloads during your typical business day, but also other high-load processes such as end-of-day processing and weekend ETL workloads. Consider scaling up your resources for atypically heavy workloads, such as end-of-quarter processing, and then scale down once the workload completes.
-
-Use the performance analysis to select the [VM Size](../../../virtual-machines/sizes-memory.md) that can scale to your workload's performance requirements.
--
-### IOPS and Throughput
-
-SQL Server performance depends heavily on the I/O subsystem. Unless your database fits into physical memory, SQL Server constantly brings database pages in and out of the buffer pool. Data and log files should be treated differently: access to log files is sequential except when a transaction needs to be rolled back, whereas data files, including TempDB, are accessed randomly. If you have a slow I/O subsystem, your users may experience performance issues such as slow response times and tasks that do not complete due to time-outs.
-
-The Azure Marketplace virtual machines have log files on a physical disk that is separate from the data files by default. The TempDB data file count and sizes meet best practices and are targeted to the ephemeral `D:\` drive.
-
-The following PerfMon counters can help validate the IO throughput required by your SQL Server:
-* **\LogicalDisk\Disk Reads/Sec** (read IOPS)
-* **\LogicalDisk\Disk Writes/Sec** (write IOPS)
-* **\LogicalDisk\Disk Bytes/Sec** (throughput requirements for the data, log, and TempDB files)
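-
-These counters can be sampled with PowerShell during a peak period, for example 30 samples at 15-second intervals (a sketch; the output path is an assumption):
-
-```powershell
-# Capture disk IOPS and throughput for the data/log volumes (Windows PowerShell)
-Get-Counter -Counter @(
-    '\LogicalDisk(*)\Disk Reads/sec',
-    '\LogicalDisk(*)\Disk Writes/sec',
-    '\LogicalDisk(*)\Disk Bytes/sec'
-) -SampleInterval 15 -MaxSamples 30 |
-    Export-Counter -Path 'C:\PerfLogs\sql-io-baseline.blg'
-```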
-
-Using IOPS and throughput requirements at peak levels, evaluate VM sizes that match the capacity from your measurements.
-
-If your workload requires 20K read IOPS and 10K write IOPS, you can choose either the E16s_v3 (with up to 32K cached and 25,600 uncached IOPS) or the M16_s (with up to 20K cached and 10K uncached IOPS) with two P30 disks striped using Storage Spaces.
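-
-The disk count in this example follows from the write side of the workload: with read caching serving the reads, the striped premium disks only need to sustain the uncached write IOPS. A quick sketch of the arithmetic (5,000 IOPS per P30 is from the disk type documentation):
-
-```powershell
-# P30 disks needed to sustain 10K uncached write IOPS at 5,000 IOPS per disk
-$writeIops = 10000
-$p30Iops   = 5000
-[math]::Ceiling($writeIops / $p30Iops)   # 2
-```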
-
-Make sure to understand both throughput and IOPS requirements of the workload as VMs have different scale limits for IOPS and throughput.
-
-### Memory
-
-Track both external memory used by the OS as well as the memory used internally by SQL Server. Identifying pressure for either component will help size virtual machines and identify opportunities for tuning.
-
-The following PerfMon counters can help validate the memory health of a SQL Server virtual machine:
-* [\Memory\Available MBytes](/azure/monitoring/infrastructure-health/vmhealth-windows/winserver-memory-availmbytes)
-* [\SQLServer:Memory Manager\Target Server Memory (KB)](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
-* [\SQLServer:Memory Manager\Total Server Memory (KB)](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
-* [\SQLServer:Buffer Manager\Lazy writes/sec](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
-* [\SQLServer:Buffer Manager\Page life expectancy](/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)
-
-### Compute / Processing
-
-Compute in Azure is managed differently than on-premises. On-premises servers are built to last several years without an upgrade due to the management overhead and cost of acquiring new hardware. Virtualization mitigates some of these issues but applications are optimized to take the most advantage of the underlying hardware, meaning any significant change to resource consumption requires rebalancing the entire physical environment.
-
-This is not a challenge in Azure, where moving to a new virtual machine on a different series of hardware, or even in a different region, is easy to achieve.
-
-In Azure, you want to take advantage of as much of the virtual machine's resources as possible; therefore, configure Azure virtual machines to keep the average CPU as high as possible without impacting the workload.
-
-The following PerfMon counters can help validate the compute health of a SQL Server virtual machine:
-* **\Processor Information(_Total)\% Processor Time**
-* **\Process(sqlservr)\% Processor Time**
-
-> [!NOTE]
-> Ideally, try to aim for using 80% of your compute, with peaks above 90% but not reaching 100% for any sustained period of time. Fundamentally, you only want to provision the compute the application needs and then plan to scale up or down as the business requires.
-
-## Next steps
-
-For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
-
-Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.md).
azure-sql Security Considerations Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/security-considerations-best-practices.md
You don't want attackers to easily guess account names or passwords. Use the fol
- Create a unique local administrator account that is not named **Administrator**. -- Use complex strong passwords for all your accounts. For more information about how to create a strong password, see [Create a strong password](https://support.microsoft.com/instantanswers/9bd5223b-efbe-aa95-b15a-2fb37bef637d/create-a-strong-password) article.
+- Use complex strong passwords for all your accounts. For more information about how to create a strong password, see [Create a strong password](https://support.microsoft.com/account-billing/how-to-create-a-strong-password-for-your-microsoft-account-f67e4ddd-0dbe-cd75-cebe-0cfda3cf7386) article.
- By default, Azure selects Windows Authentication during SQL Server virtual machine setup. Therefore, the **SA** login is disabled and a password is assigned by setup. We recommend that the **SA** login should not be used or enabled. If you must have a SQL login, use one of the following strategies:
azure-sql Sql Vulnerability Assessment Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-vulnerability-assessment-enable.md
Enable Azure Defender for SQL Servers on machines to implement the Vulnerability
To use the Azure Defender for SQL Server, follow these steps:
-1. [Install the SQL IaaS agent extension](sql-agent-extension-manually-register-single-vm.md)
-1. [Enable auto provisioning of the Log Analytics agent](../../../security-center/security-center-enable-data-collection.md#auto-provision-mma)
-1. [Enable the optional Security Center plan](../../../security-center/defender-for-sql-usage.md#step-2-enable-the-optional-plan-in-security-centers-pricing-and-settings-page)
+1. [Install the SQL IaaS agent extension](sql-agent-extension-manually-register-single-vm.md).
+1. [Enable auto provisioning of the Log Analytics agent](../../../security-center/security-center-enable-data-collection.md#auto-provision-mma).
+1. [Enable the optional Security Center plan](../../../security-center/defender-for-sql-usage.md#step-2-enable-the-optional-plan-in-security-centers-pricing-and-settings-page).
Since the Vulnerability Assessment is a part of Azure Defender for SQL, once Azure Defender is enabled on your virtual machine, your databases are automatically scanned every 12 hours to identify security vulnerabilities. Results are sent to Azure Security Center for a centralized aggregated view of the SQL data estate protected by Azure Defender for SQL.
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-configuration.md
Title: Storage configuration for SQL Server VMs | Microsoft Docs
+ Title: Configure storage for SQL Server VMs | Microsoft Docs
description: This topic describes how Azure configures storage for SQL Server VMs during provisioning (Azure Resource Manager deployment model). It also explains how you can configure storage for your existing SQL Server VMs. documentationcenter: na
Last updated 12/26/2019
-# Storage configuration for SQL Server VMs
+# Configure storage for SQL Server VMs
[!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
-When you configure a SQL Server virtual machine (VM) image in Azure, the Azure portal helps to automate your storage configuration. This includes attaching storage to the VM, making that storage accessible to SQL Server, and configuring it to optimize for your specific performance requirements.
+This article teaches you how to configure your storage for your SQL Server on Azure Virtual Machines (VMs).
-This topic explains how Azure configures storage for your SQL Server VMs both during provisioning and for existing VMs. This configuration is based on the [performance best practices](performance-guidelines-best-practices.md) for Azure VMs running SQL Server.
+SQL Server VMs deployed through marketplace images automatically follow default [storage best practices](performance-guidelines-best-practices-storage.md) which can be modified during deployment. Some of these configuration settings can be changed after deployment.
## Prerequisites

To use the automated storage configuration settings, your virtual machine requires the following characteristics:
-* Provisioned with a [SQL Server gallery image](sql-server-on-azure-vm-iaas-what-is-overview.md#payasyougo).
+* Provisioned with a [SQL Server gallery image](sql-server-on-azure-vm-iaas-what-is-overview.md#payasyougo) or registered with the [SQL IaaS extension]().
* Uses the [Resource Manager deployment model](../../../azure-resource-manager/management/deployment-models.md).
* Uses [premium SSDs](../../../virtual-machines/disks-types.md).
When provisioning an Azure VM using a SQL Server gallery image, select **Change
![Screenshot that highlights the SQL Server settings tab and the Change configuration option.](./media/storage-configuration/sql-vm-storage-configuration-provisioning.png)
-Select the type of workload you're deploying your SQL Server for under **Storage optimization**. With the **General** optimization option, by default you will have one data disk with 5000 max IOPS, and you will use this same drive for your data, transaction log, and TempDB storage. Selecting either **Transactional processing** (OLTP) or **Data warehousing** will create a separate disk for data, a separate disk for the transaction log, and use local SSD for TempDB. There are no storage differences between **Transactional processing** and **Data warehousing**, but it does change your [stripe configuration, and trace flags](#workload-optimization-settings). Choosing premium storage sets the caching to *ReadOnly* for the data drive, and *None* for the log drive as per [SQL Server VM performance best practices](performance-guidelines-best-practices.md).
+Select the type of workload you're deploying your SQL Server for under **Storage optimization**. With the **General** optimization option, by default you will have one data disk with 5000 max IOPS, and you will use this same drive for your data, transaction log, and TempDB storage.
+
+Selecting either **Transactional processing** (OLTP) or **Data warehousing** will create a separate disk for data, a separate disk for the transaction log, and use local SSD for TempDB. There are no storage differences between **Transactional processing** and **Data warehousing**, but it does change your [stripe configuration, and trace flags](#workload-optimization-settings). Choosing premium storage sets the caching to *ReadOnly* for the data drive, and *None* for the log drive as per [SQL Server VM performance best practices](performance-guidelines-best-practices.md).
![SQL Server VM Storage Configuration During Provisioning](./media/storage-configuration/sql-vm-storage-configuration.png)
Based on your choices, Azure performs the following storage configuration tasks
* Associates the storage pool with a new drive on the virtual machine.
* Optimizes this new drive based on your specified workload type (Data warehousing, Transactional processing, or General).
-For further details on how Azure configures storage settings, see the [Storage configuration section](#storage-configuration). For a full walkthrough of how to create a SQL Server VM in the Azure portal, see [the provisioning tutorial](../../../azure-sql/virtual-machines/windows/create-sql-vm-portal.md).
+For a full walkthrough of how to create a SQL Server VM in the Azure portal, see [the provisioning tutorial](../../../azure-sql/virtual-machines/windows/create-sql-vm-portal.md).
### Resource Manager templates
You can modify the disk settings for the drives that were configured during the
![Configure Storage for Existing SQL Server VM](./media/storage-configuration/sql-vm-storage-extend-drive.png)
-## Storage configuration
+## Automated changes
This section provides a reference for the storage configuration changes that Azure automatically performs during SQL Server VM provisioning or configuration in the Azure portal.
Azure uses the following settings to create the storage pool on SQL Server VMs.
<sup>1</sup> After the storage pool is created, you cannot alter the number of columns in the storage pool.
-## Workload optimization settings
+### Workload optimization settings
The following table describes the three workload type options available and their corresponding optimizations:
> [!NOTE]
> You can only specify the workload type when you provision a SQL Server virtual machine by selecting it in the storage configuration step.
+## Enable caching
+
+Change the caching policy at the disk level. You can do so using the Azure portal, [PowerShell](/powershell/module/az.compute/set-azvmdatadisk), or the [Azure CLI](/cli/azure/vm/disk).
+
+To change your caching policy in the Azure portal, follow these steps:
+
+1. Stop your SQL Server service.
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Navigate to your virtual machine, select **Disks** under **Settings**.
+
+ ![Screenshot showing the VM disk configuration blade in the Azure portal.](./media/storage-configuration/disk-in-portal.png)
+
+1. Choose the appropriate caching policy for your disk from the drop-down.
+
+ ![Screenshot showing the disk caching policy configuration in the Azure portal.](./media/storage-configuration/azure-disk-config.png)
+
+1. After the change takes effect, reboot the SQL Server VM and start the SQL Server service.
++
+## Enable Write Accelerator
+
+Write Accelerator is a disk feature available only for M-series virtual machines (VMs). It improves the I/O latency of writes against Azure Premium Storage when you need single-digit I/O latency for high-volume, mission-critical OLTP workloads or data warehouse environments.
+
+Stop all SQL Server activity and shut down the SQL Server service before making changes to your Write Accelerator policy.
+
+If your disks are striped, enable Write Accelerator for each disk individually, and shut down your Azure VM before making any changes.
+
+To enable Write Acceleration using the Azure portal, follow these steps:
+
+1. Stop your SQL Server service. If your disks are striped, shut down the virtual machine.
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Navigate to your virtual machine, select **Disks** under **Settings**.
+
+ ![Screenshot showing the VM disk configuration blade in the Azure portal.](./media/storage-configuration/disk-in-portal.png)
+
+1. Choose the cache option with **Write Accelerator** for your disk from the drop-down.
+
+ ![Screenshot showing the write accelerator cache policy.](./media/storage-configuration/write-accelerator.png)
+
+1. After the change takes effect, start the virtual machine and SQL Server service.
+
+## Disk striping
+
+For more throughput, you can add additional data disks and use disk striping. To determine the number of data disks, analyze the throughput and bandwidth required for your SQL Server data files, including the log and tempdb. Throughput and bandwidth limits vary by VM size. To learn more, see [VM Size](../../../virtual-machines/sizes.md)
++
+* For Windows 8/Windows Server 2012 or later, use [Storage Spaces](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831739(v=ws.11)) with the following guidelines:
+
+ 1. Set the interleave (stripe size) to 64 KB (65,536 bytes) to avoid performance impact due to partition misalignment. This must be set with PowerShell.
+
+ 2. Set column count = number of physical disks. Use PowerShell when configuring more than 8 disks (not Server Manager UI).
+
+For example, the following PowerShell creates a new storage pool with the interleave size set to 64 KB and the number of columns equal to the number of physical disks in the storage pool:
+
+ ```powershell
+ # Select the physical disks to pool (adjust the filter to match your environment)
+ $PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}
+
+ New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces*" `
+ -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
+ -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
+ -UseMaximumSize | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter `
+ -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
+ -AllocationUnitSize 65536 -Confirm:$false
+ ```
+
+ * For Windows 2008 R2 or earlier, you can use dynamic disks (OS striped volumes) and the stripe size is always 64 KB. This option is deprecated as of Windows 8/Windows Server 2012. For information, see the support statement at [Virtual Disk Service is transitioning to Windows Storage Management API](https://docs.microsoft.com/windows/win32/w8cookbook/vds-is-transitioning-to-wmiv2-based-windows-storage-management-api).
+
+ * If you are using [Storage Spaces Direct (S2D)](https://docs.microsoft.com/windows-server/storage/storage-spaces/storage-spaces-direct-in-vm) with [SQL Server Failover Cluster Instances](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure), you must configure a single pool. Although different volumes can be created on that single pool, they will all share the same characteristics, such as the same caching policy.
+
+ * Determine the number of disks associated with your storage pool based on your load expectations. Keep in mind that different VM sizes allow different numbers of attached data disks. For more information, see [Sizes for virtual machines](../../../virtual-machines/sizes.md?toc=/azure/virtual-machines/windows/toc.json).
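The disk-count guidance above boils down to a simple calculation: take the larger of the counts needed to meet your IOPS and throughput targets. The sketch below is illustrative only — the workload targets are hypothetical, and the per-disk limits shown (5,000 IOPS / 200 MB/s, roughly a P30-class premium disk) are example values, not a statement of Azure quotas:

```python
import math

def data_disks_needed(target_iops, target_mbps, disk_iops, disk_mbps):
    """Return the number of identical data disks required for a striped
    volume to meet both the IOPS and the throughput (MB/s) targets."""
    by_iops = math.ceil(target_iops / disk_iops)
    by_mbps = math.ceil(target_mbps / disk_mbps)
    return max(by_iops, by_mbps)

# Hypothetical workload needing 20,000 IOPS and 750 MB/s, striped across
# disks rated at 5,000 IOPS / 200 MB/s each.
disks = data_disks_needed(20_000, 750, 5_000, 200)
print(disks)  # 4 -> set the column count to 4 and the interleave to 64 KB
```

Remember to cap the result against what your VM size allows in attached data disks, and against the VM-level IOPS/throughput limits, which can be lower than the sum of the disk limits.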
+
## Next steps
-For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines](sql-server-on-azure-vm-iaas-what-is-overview.md).
+For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines](sql-server-on-azure-vm-iaas-what-is-overview.md).
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Now that you've created an authorization key for the private cloud ExpressRoute
We've augmented the [CLI commands](../expressroute/expressroute-howto-set-global-reach-cli.md) with specific details and examples to help you configure the ExpressRoute Global Reach peering between on-premises environments to an Azure VMware Solution private cloud. >[!TIP]
->For brevity in the Azure CLI command output, these instructions may use a [`--query` argument](https://docs.microsoft.com/cli/azure/query-azure-cli) to execute a JMESPath query to only show the required results.
+>For brevity in the Azure CLI command output, these instructions may use a [`--query` argument](/cli/azure/query-azure-cli) to execute a JMESPath query to only show the required results.
1. Sign in to the [Azure portal](https://portal.azure.com) using the same subscription as the on-premises ExpressRoute circuit.
Continue to the next tutorial to learn how to deploy and configure VMware HCX so
<!-- LINKS - external-->
-<!-- LINKS - internal -->
+<!-- LINKS - internal -->
azure-vmware Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/windows-server-failover-cluster.md
Cluster Service](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-vcen
This article focuses on WSFC on Windows Server 2016 and Windows Server 2019. Older Windows Server versions are out of [mainstream support](https://support.microsoft.com/lifecycle/search?alpha=windows%20server) and so we don't consider them here.
-You'll need to first [create a WSFC](https://docs.microsoft.com/windows-server/failover-clustering/create-failover-cluster). For more information on WSFC, see [Failover Clustering in Windows Server](https://docs.microsoft.com/windows-server/failover-clustering/failover-clustering-overview). Use the information we provide in this article for the specifics of a WSFC deployment on Azure VMware Solution.
+You'll need to first [create a WSFC](/windows-server/failover-clustering/create-failover-cluster). For more information on WSFC, see [Failover Clustering in Windows Server](/windows-server/failover-clustering/failover-clustering-overview). Use the information we provide in this article for the specifics of a WSFC deployment on Azure VMware Solution.
## Prerequisites
The following activities aren't supported and might cause WSFC node failover:
## Related information
-- [Failover Clustering in Windows Server](https://docs.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
+- [Failover Clustering in Windows Server](/windows-server/failover-clustering/failover-clustering-overview)
- [Guidelines for Microsoft Clustering on vSphere (1037959) (vmware.com)](https://kb.vmware.com/s/article/1037959)
- [About Setup for Failover Clustering and Microsoft Cluster Service (vmware.com)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.mscs.doc/GUID-1A2476C0-CA66-4B80-B6F9-8421B6983808.html)
- [vSAN 6.7 U3 - WSFC with Shared Disks &amp; SCSI-3 Persistent Reservations (vmware.com)](https://blogs.vmware.com/virtualblocks/2019/08/23/vsan67-u3-wsfc-shared-disksupport/)
Now that you've covered setting up a WSFC in Azure VMware Solution, you may want
- Setting up your new WSFC by adding more applications that require the WSFC capability. For instance, SQL Server and SAP ASCS. - Setting up a backup solution.
- - [Setting up Azure Backup Server for Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/set-up-backup-server-for-azure-vmware-solution)
- - [Backup solutions for Azure VMware Solution virtual machines](https://docs.microsoft.com/azure/azure-vmware/ecosystem-back-up-vms)
+ - [Setting up Azure Backup Server for Azure VMware Solution](./set-up-backup-server-for-azure-vmware-solution.md)
+ - [Backup solutions for Azure VMware Solution virtual machines](./ecosystem-back-up-vms.md)
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Supported clients:
install-module -name Az.RecoveryServices -Repository PSGallery -RequiredVersion 4.0.0-preview -AllowPrerelease -force
```
-1. Connect to Azure using the [Connect-AzAccount](https://docs.microsoft.com/powershell/module/az.accounts/connect-azaccount) cmdlet.
+1. Connect to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
1. Sign into your subscription: `Set-AzContext -Subscription "SubscriptionName"`
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -Item $bckItm
For recovery points in archive, Azure Backup provides an integrated restore methodology.
-The integrated restore is a two-step process. The first step involves rehydrating the recovery points stored in archive and temporarily storing them in the vault-standard tier for a duration (also known as the rehydration duration) ranging from 10 to 30 days. The default is 15 days. There are two different priorities of rehydration: Standard and High priority. Learn more about [rehydration priority](https://docs.microsoft.com/azure/storage/blobs/storage-blob-rehydration#rehydrate-an-archived-blob-to-an-online-tier).
+The integrated restore is a two-step process. The first step involves rehydrating the recovery points stored in archive and temporarily storing them in the vault-standard tier for a duration (also known as the rehydration duration) ranging from 10 to 30 days. The default is 15 days. There are two different priorities of rehydration: Standard and High priority. Learn more about [rehydration priority](../storage/blobs/storage-blob-rehydration.md#rehydrate-an-archived-blob-to-an-online-tier).
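As a quick illustration of the limits described above, the helper below encodes the documented 10-30 day window, the 15-day default, and the two priorities. The function is hypothetical — it is not part of any Azure SDK — and exists only to make the constraints concrete:

```python
REHYDRATION_MIN_DAYS = 10
REHYDRATION_MAX_DAYS = 30
REHYDRATION_DEFAULT_DAYS = 15

def validate_rehydration(duration_days=None, priority="Standard"):
    """Validate a requested rehydration duration and priority against the
    documented limits (10-30 days, default 15; Standard or High priority)."""
    if duration_days is None:
        duration_days = REHYDRATION_DEFAULT_DAYS
    if not REHYDRATION_MIN_DAYS <= duration_days <= REHYDRATION_MAX_DAYS:
        raise ValueError("rehydration duration must be between 10 and 30 days")
    if priority not in ("Standard", "High"):
        raise ValueError("rehydration priority must be 'Standard' or 'High'")
    return duration_days, priority

print(validate_rehydration())            # (15, 'Standard')
print(validate_rehydration(30, "High"))  # (30, 'High')
```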
>[!NOTE] >
The recovery point will remain in archive forever. For more information, see [Im
## Next steps
-- [Azure Backup pricing](azure-backup-pricing.md)
+- [Azure Backup pricing](azure-backup-pricing.md)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
There are a few things to note after restoring a VM:
- Extensions present during the backup configuration are installed, but not enabled. If you see an issue, reinstall the extensions.
- If the backed-up VM had a static IP address, the restored VM will have a dynamic IP address to avoid conflict. You can [add a static IP address to the restored VM](/powershell/module/az.network/set-aznetworkinterfaceipconfig#description).
- A restored VM doesn't have an availability set. If you use the restore disk option, then you can [specify an availability set](../virtual-machines/windows/tutorial-availability-sets.md) when you create a VM from the disk using the provided template or PowerShell.
-- If you use a cloud-init-based Linux distribution, such as Ubuntu, for security reasons the password is blocked after the restore. Use the VMAccess extension on the restored VM to [reset the password](../virtual-machines/troubleshooting/reset-password.md). We recommend using SSH keys on these distributions, so you don't need to reset the password after the restore.
+- If you use a cloud-init-based Linux distribution, such as Ubuntu, for security reasons the password is blocked after the restore. Use the VMAccess extension on the restored VM to [reset the password](/troubleshoot/azure/virtual-machines/reset-password). We recommend using SSH keys on these distributions, so you don't need to reset the password after the restore.
- If you're unable to access a VM once restored because the VM has a broken relationship with the domain controller, then follow the steps below to bring up the VM: - Attach OS disk as a data disk to a recovered VM.
- - Manually install VM agent if Azure Agent is found to be unresponsive by following this [link](../virtual-machines/troubleshooting/install-vm-agent-offline.md).
+ - Manually install VM agent if Azure Agent is found to be unresponsive by following this [link](/troubleshoot/azure/virtual-machines/install-vm-agent-offline).
- Enable Serial Console access on VM to allow command-line access to VM ```cmd
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
Currently, this feature is available for Azure Databases for PostgreSQL Server,
- Backup Failure (to get alerts for Backup Failure, you need to register the AFEC flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** via the preview portal)
- Restore Failure (to get alerts for Restore Failure, you need to register the AFEC flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** via the preview portal)
-For more information about Azure Monitor alerts, see [Overview of alerts in Azure](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-overview).
+For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
## Next steps
-[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
+[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-troubleshoot.md
To verify, navigate to ***System and Event Viewer Application logs*** and check
Solution:
-* Check for possibilities to distribute the load across the VM disks. This will reduce the load on single disks. You can [check the IOPs throttling by enabling diagnostic metrics at storage level](../virtual-machines/troubleshooting/performance-diagnostics.md#install-and-run-performance-diagnostics-on-your-vm).
+* Check for possibilities to distribute the load across the VM disks. This will reduce the load on single disks. You can [check the IOPs throttling by enabling diagnostic metrics at storage level](/troubleshoot/azure/virtual-machines/performance-diagnostics#install-and-run-performance-diagnostics-on-your-vm).
* Change the backup policy to perform backups during off-peak hours, when the load on the VM is at its lowest.
* Upgrade the Azure disks to support higher IOPS. [Learn more here](../virtual-machines/disks-types.md)
Typically, the VM Agent is already present in VMs that are created from the Azur
#### Windows VMs - Set up the agent

* Download and install the [agent MSI](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). You need Administrator privileges to finish the installation.
-* For virtual machines created by using the classic deployment model, [update the VM property](../virtual-machines/troubleshooting/install-vm-agent-offline.md#use-the-provisionguestagent-property-for-classic-vms) to indicate that the agent is installed. This step isn't required for Azure Resource Manager virtual machines.
+* For virtual machines created by using the classic deployment model, [update the VM property](/troubleshoot/azure/virtual-machines/install-vm-agent-offline#use-the-provisionguestagent-property-for-classic-vms) to indicate that the agent is installed. This step isn't required for Azure Resource Manager virtual machines.
#### Linux VMs - Set up the agent

* Install the latest version of the agent from the distribution repository. For details on the package name, see the [Linux Agent repository](https://github.com/Azure/WALinuxAgent).
-* For VMs created by using the classic deployment model, [update the VM property](../virtual-machines/troubleshooting/install-vm-agent-offline.md#use-the-provisionguestagent-property-for-classic-vms) and verify that the agent is installed. This step isn't required for Resource Manager virtual machines.
+* For VMs created by using the classic deployment model, [update the VM property](/troubleshoot/azure/virtual-machines/install-vm-agent-offline#use-the-provisionguestagent-property-for-classic-vms) and verify that the agent is installed. This step isn't required for Resource Manager virtual machines.
### Update the VM Agent
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
Azure Backup Server can protect cluster workloads that are located in the same d
* File Server
* Hyper-V
- These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](https://docs.microsoft.com/system-center/dpm/prepare-environment-for-dpm) for exact details of what's supported and what authentication is required.
+ These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](/system-center/dpm/prepare-environment-for-dpm) for exact details of what's supported and what authentication is required.
## Unsupported data types
MABS doesn't support protecting the following data types:
## Next steps
-* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
+* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
backup Backup Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks.md
Last updated 01/07/2021
-# Back up Azure Managed Disks (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Back up Azure Managed Disks
This article explains how to back up [Azure Managed Disk](../virtual-machines/managed-disks-overview.md) from the Azure portal.
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-reports-email.md
To configure email tasks via Backup Reports, perform the following steps:
## Authorize connections to Azure Monitor Logs and Office 365
-The logic app uses the [azuremonitorlogs](https://docs.microsoft.com/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](https://docs.microsoft.com/connectors/office365connector/) connector for sending emails. You will need to perform a one-time authorization for these two connectors.
+The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You will need to perform a one-time authorization for these two connectors.
To perform the authorization, follow the steps below:
To troubleshoot this issue:
If the issues persist, contact Microsoft support.

## Next steps
-[Learn more about Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports)
+[Learn more about Backup Reports](./configure-reports.md)
backup Backup Reports System Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-reports-system-functions.md
Last updated 03/01/2021
Azure Backup provides a set of functions, called system functions or solution functions, that are available by default in your Log Analytics (LA) workspaces.
-These functions operate on data in the [raw Azure Backup tables](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model) in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries. Users can pass parameters to these functions to filter the data that is returned by these functions.
+These functions operate on data in the [raw Azure Backup tables](./backup-azure-reports-data-model.md) in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries. Users can pass parameters to these functions to filter the data that is returned by these functions.
It's recommended to use system functions for querying your backup data in LA workspaces for creating custom reports, as they provide a number of benefits, as detailed in the section below. ## Benefits of using system functions
-* **Simpler queries**: Using functions helps you reduce the number of joins needed in your queries. By default, the functions return 'flattened' schemas that incorporate all information pertaining to the entity (backup instance, job, vault, and so on) being queried. For example, if you need to get a list of successful backup jobs by backup item name and its associated container, a simple call to the **_AzureBackup_getJobs()** function will give you all of this information for each job. On the other hand, querying the raw tables directly would require you to perform multiple joins between [AddonAzureBackupJobs](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model#addonazurebackupjobs) and [CoreAzureBackup](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model#coreazurebackup) tables.
+* **Simpler queries**: Using functions helps you reduce the number of joins needed in your queries. By default, the functions return 'flattened' schemas that incorporate all information pertaining to the entity (backup instance, job, vault, and so on) being queried. For example, if you need to get a list of successful backup jobs by backup item name and its associated container, a simple call to the **_AzureBackup_getJobs()** function will give you all of this information for each job. On the other hand, querying the raw tables directly would require you to perform multiple joins between [AddonAzureBackupJobs](./backup-azure-reports-data-model.md#addonazurebackupjobs) and [CoreAzureBackup](./backup-azure-reports-data-model.md#coreazurebackup) tables.
-* **Smoother transition from the legacy diagnostics event**: Using system functions helps you transition smoothly from the [legacy diagnostics event](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#legacy-event) (AzureBackupReport in AzureDiagnostics mode) to the [resource-specific events](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#diagnostics-events-available-for-azure-backup-users). All the system functions provided by Azure Backup allow you to specify a parameter that lets you choose whether the function should query data only from the resource-specific tables, or query data from both the legacy table and the resource-specific tables (with deduplication of records).
+* **Smoother transition from the legacy diagnostics event**: Using system functions helps you transition smoothly from the [legacy diagnostics event](./backup-azure-diagnostic-events.md#legacy-event) (AzureBackupReport in AzureDiagnostics mode) to the [resource-specific events](./backup-azure-diagnostic-events.md#diagnostics-events-available-for-azure-backup-users). All the system functions provided by Azure Backup allow you to specify a parameter that lets you choose whether the function should query data only from the resource-specific tables, or query data from both the legacy table and the resource-specific tables (with deduplication of records).
* If you have successfully migrated to the resource-specific tables, you can choose to exclude the legacy table from being queried by the function.
* If you are currently in the process of migration and have some data in the legacy tables which you require for analysis, you can choose to include the legacy table. When the transition is complete, and you no longer need data from the legacy table, you can simply update the value of the parameter passed to the function in your queries, to exclude the legacy table.
- * If you are still using only the legacy table, the functions will still work if you choose to include the legacy table via the same parameter. However, it is recommended to [switch to the resource-specific tables](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#steps-to-move-to-new-diagnostics-settings-for-a-log-analytics-workspace) at the earliest.
+ * If you are still using only the legacy table, the functions will still work if you choose to include the legacy table via the same parameter. However, it is recommended to [switch to the resource-specific tables](./backup-azure-diagnostic-events.md#steps-to-move-to-new-diagnostics-settings-for-a-log-analytics-workspace) at the earliest.
* **Reduces possibility of custom queries breaking**: If Azure Backup introduces improvements to the schema of the underlying LA tables to accommodate future reporting scenarios, the definition of the functions will also be updated to take into account the schema changes. Thus, if you use system functions for creating custom queries, your queries will not break, even if there are changes in the underlying schema of the tables.

> [!NOTE]
-> System functions are maintained by Microsoft and their definitions cannot be edited by users. If you require editable functions, you can create [saved functions](https://docs.microsoft.com/azure/azure-monitor/logs/functions) in LA.
+> System functions are maintained by Microsoft and their definitions cannot be edited by users. If you require editable functions, you can create [saved functions](../azure-monitor/logs/functions.md) in LA.
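To illustrate the legacy-table parameter described above, a Log Analytics query using one of these system functions might look like the following sketch (the function name `_AzureBackup_GetJobs` follows the Backup Reports system-function conventions; verify the exact signature and parameter list against the function definition in your workspace before use):

```kusto
// Sketch: list failed backup jobs over a reporting window.
// Per the Backup Reports documentation, the function also accepts a
// trailing parameter that controls whether the legacy AzureDiagnostics
// table is excluded -- exclude it once your migration to the
// resource-specific tables is complete.
_AzureBackup_GetJobs("2021-03-19", "2021-03-26")
| where JobStatus == "Failed"
| project BackupInstanceFriendlyName, JobOperation, JobStartDateTime
```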
## Types of system functions offered by Azure Backup
Below are some sample queries to help you get started with using system function
````
## Next steps
-[Learn more about Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports)
+[Learn more about Backup Reports](./configure-reports.md)
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move data between vaults** | Moving backed-up data between vaults isn't supported.
**Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
**Zone-redundant storage (ZRS)** | Available in the UK South (UKS) and South East Asia (SEA) regions.
-**Private Endpoints** | See [this section](https://docs.microsoft.com/azure/backup/private-endpoints#before-you-start) for requirements to create private endpoints for a recovery service vault.
+**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault.
## On-premises backup support
Azure Backup has added the Cross Region Restore feature to strengthen data avail
[green]: ./media/backup-support-matrix/green.png
[yellow]: ./media/backup-support-matrix/yellow.png
-[red]: ./media/backup-support-matrix/red.png
+[red]: ./media/backup-support-matrix/red.png
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-overview.md
Operational backup of blobs is a **local backup** solution. So the backup data i
Operational backup uses blob platform capabilities to protect your data and allow recovery when required:
-- **Point-in-time restore**: [Blob point-in-time restore](https://docs.microsoft.com/azure/storage/blobs/point-in-time-restore-overview) allows restoring blob data to an earlier state. This, in turn, uses **soft delete**, **change feed** and **blob versioning** to retain data for the specified duration. Operational backup takes care of enabling point-in-time restore as well as the underlying capabilities to ensure data is retained for the specified duration.
+- **Point-in-time restore**: [Blob point-in-time restore](../storage/blobs/point-in-time-restore-overview.md) allows restoring blob data to an earlier state. This, in turn, uses **soft delete**, **change feed** and **blob versioning** to retain data for the specified duration. Operational backup takes care of enabling point-in-time restore as well as the underlying capabilities to ensure data is retained for the specified duration.
- **Delete lock**: Delete lock prevents the storage account from being deleted accidentally or by unauthorized users. Operational backup when configured also automatically applies a delete lock to reduce the possibilities of data loss because of storage account deletion.
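The delete lock mentioned above is a standard Azure management lock (`CanNotDelete`). As a sketch, the same kind of lock could also be declared yourself in an ARM template at the storage account's scope (the lock name and notes below are illustrative, not the names the service uses):

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "backup-delete-lock",
  "properties": {
    "level": "CanNotDelete",
    "notes": "Reduces the chance of data loss from accidental storage account deletion."
  }
}
```

Deployed in the storage account's resource group (or via a nested deployment scoped to the account), the lock blocks deletion of the resource until it is removed.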
Operational backup gives you the option to restore all block blobs in the storag
You won't incur any management charges or instance fee when using operational backup for blobs. However, you will incur the following charges:
-- Restores are done using blob point-in-time restore and attract charges based on the amount of data processed. For more information, see [point-in-time restore pricing](https://docs.microsoft.com/azure/storage/blobs/point-in-time-restore-overview#pricing-and-billing).
+- Restores are done using blob point-in-time restore and attract charges based on the amount of data processed. For more information, see [point-in-time restore pricing](../storage/blobs/point-in-time-restore-overview.md#pricing-and-billing).
-- Retention of data because of [Soft delete for blobs](https://docs.microsoft.com/azure/storage/blobs/soft-delete-blob-overview), [Change feed support in Azure Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-blob-change-feed), and [Blob versioning](https://docs.microsoft.com/azure/storage/blobs/versioning-overview).
+- Retention of data because of [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](../storage/blobs/storage-blob-change-feed.md), and [Blob versioning](../storage/blobs/versioning-overview.md).
## Next steps
-- [Configure and manage Azure Blobs backup](blob-backup-configure-manage.md)
+- [Configure and manage Azure Blobs backup](blob-backup-configure-manage.md)
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-support-matrix.md
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
**Other limitations:**
-- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers (preview)](https://docs.microsoft.com/azure/storage/blobs/soft-delete-container-overview).
-- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier isn't supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob isn't restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](https://docs.microsoft.com/azure/storage/blobs/storage-blob-rehydration).
-- A block that has been uploaded via [Put Block](https://docs.microsoft.com/rest/api/storageservices/put-block) or [Put Block from URL](https://docs.microsoft.com/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](https://docs.microsoft.com/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
+- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers (preview)](../storage/blobs/soft-delete-container-overview.md).
+- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier isn't supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob isn't restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](../storage/blobs/storage-blob-rehydration.md).
+- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation.
- Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.

## Next steps
-- [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md)
+- [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md)
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/configure-reports.md
There are two types of policy adherence views available:
* **Policy Adherence by Time Period**: Using this view, you can identify how many items have had at least one successful backup in a given day and how many have not had a successful backup in that day. You can click on a row to see details of all backup jobs that have been triggered on the selected day. Note that if you increase the time range to a larger value, such as the last 60 days, the grid is rendered in weekly view, and displays the count of all items that have had at least one successful backup on every day in the given week. Similarly, there is a monthly view for larger time ranges.
-In the case of items backed up weekly, this grid helps you identify all items that have had at least one successful backup in the given week. For a larger time range, such as the last 120 days, the grid is rendered in monthly view, and displays the count of all items that have had at least one successful backup in every week in the given month. Refer [Conventions used in Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports#conventions-used-in-backup-reports) for more details around daily, weekly and monthly views.
+In the case of items backed up weekly, this grid helps you identify all items that have had at least one successful backup in the given week. For a larger time range, such as the last 120 days, the grid is rendered in monthly view, and displays the count of all items that have had at least one successful backup in every week in the given month. Refer to [Conventions used in Backup Reports](#conventions-used-in-backup-reports) for more details about daily, weekly, and monthly views.
![Policy Adherence By Time Period](./media/backup-azure-configure-backup-reports/policy-adherence-by-time-period.png)
Once the logic app is created, you'll need to authorize connections to Azure Mon
Backup Reports uses [system functions on Azure Monitor logs](backup-reports-system-functions.md). These functions operate on data in the raw Azure Backup tables in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries.
-To create your own reporting workbooks using Backup Reports as a base, you can navigate to Backup Reports, click on **Edit** at the top of the report, and view/edit the queries being used in the reports. Refer to [Azure workbooks documentation](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-overview) to learn more about how to create custom reports.
+To create your own reporting workbooks using Backup Reports as a base, you can navigate to Backup Reports, click on **Edit** at the top of the report, and view/edit the queries being used in the reports. Refer to [Azure workbooks documentation](../azure-monitor/visualize/workbooks-overview.md) to learn more about how to create custom reports.
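For instance, a custom workbook tile might aggregate job outcomes with a short query along these lines (a sketch using the `_AzureBackup_GetJobs` system function; verify the column and parameter names against the function definition in your workspace):

```kusto
// Sketch: count backup jobs by status for a success/failure
// breakdown in a custom workbook tile.
_AzureBackup_GetJobs("2021-03-01", "2021-03-26")
| summarize JobCount = count() by JobStatus
```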
## Export to Excel
The widgets in the Backup report are powered by Kusto queries, which run on the
## Next steps
-[Learn more about monitoring and reporting with Azure Backup](./backup-azure-monitor-alert-faq.md)
+[Learn more about monitoring and reporting with Azure Backup](./backup-azure-monitor-alert-faq.md)
backup Disk Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-faq.md
Last updated 01/07/2021
-# Frequently asked questions about Azure Disk Backup (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Frequently asked questions about Azure Disk Backup
This article answers frequently asked questions about Azure Disk Backup. For more information about [Azure Disk Backup](disk-backup-overview.md) region availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-overview.md
Last updated 01/07/2021
-# Overview of Azure Disk Backup (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Overview of Azure Disk Backup
Azure Disk Backup is a native, cloud-based backup solution that protects your data in managed disks. It's a simple, secure, and cost-effective solution that enables you to configure protection for managed disks in a few steps. It assures that you can recover your data in a disaster scenario.
backup Disk Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
Last updated 01/07/2021
-# Azure Disk Backup support matrix (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Azure Disk Backup support matrix
You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This article summarizes region availability, supported scenarios, and limitations.

## Supported regions
-Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Canada Central, UK West, Switzerland North, Switzerland West, Australia Central, Australia Central 2, Korea Central, Korea South, Japan West, East Asia, UAE North, Brazil South, Central India.
+Azure Disk Backup is available in the following regions: West US, West US 2, West Central US, East US, East US2, Central US, South Central US, North Central US, Canada Central, Brazil South, South Africa North, UK South, UK West, West Europe, North Europe, Switzerland North, Switzerland West, Germany West Central, France Central, Norway East, UAE North, Australia Central, Australia Central 2, Australia East, Korea Central, Korea South, Japan East, Japan West, East Asia, Southeast Asia, Central India.
More regions will be announced when they become available.
backup Disk Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-troubleshoot.md
Last updated 01/07/2021
-# Troubleshooting backup failures in Azure Disk Backup (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Troubleshooting backup failures in Azure Disk Backup
This article provides troubleshooting information for backup and restore issues with Azure disks. For more information about [Azure Disk Backup](disk-backup-overview.md) region availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).
Recommended Action: Consider using another recovery point to restore. For more i
Error Message: Disk Backup is not yet available in the region of the Backup Vault under which Configure Protection is being tried.
-Recommended Action: Backup Vault must be in a preview supported region. For region availability see the [the support matrix](disk-backup-support-matrix.md).
+Recommended Action: Backup Vault must be in a supported region. For region availability, see the [support matrix](disk-backup-support-matrix.md).
### Error Code: UserErrorDppDatasourceAlreadyHasBackupInstance
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
This article will help you understand the process of creating private endpoints
- A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher (up to 25) for certain Azure regions. So we suggest that you have enough private IPs available when you attempt to create private endpoints for Backup.
- While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses use of private endpoints for Azure Backup only.
- Azure Active Directory doesn't currently support private endpoints. So IPs and FQDNs required for Azure Active Directory to work in a region will need to be allowed outbound access from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable.
-- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to [disable Network Polices](https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy) before continuing.
+- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to [disable Network Policies](../private-link/disable-private-endpoint-network-policy.md) before continuing.
- You need to re-register the Recovery Services resource provider with the subscription if you registered it before May 1 2020. To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**.
- [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups isn't supported if the vault has private endpoints enabled.
- When you move a Recovery Services vault already using private endpoints to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vault's managed identity and create new private endpoints as needed (which should be in the new tenant). If this isn't done, the backup and restore operations will start failing. Also, any role-based access control (RBAC) permissions set up within the subscription will need to be reconfigured.
But if you remove private endpoints for the vault after a MARS agent has been re
## Deleting Private EndPoints
-See [this section](https://docs.microsoft.com/rest/api/virtualnetwork/privateendpoints/delete) to learn how to delete Private EndPoints.
+See [this section](/rest/api/virtualnetwork/privateendpoints/delete) to learn how to delete Private EndPoints.
## Additional topics
A. After following the process detailed in this article, you don't need to do ad
## Next steps
-- Read about all the [security features in Azure Backup](security-overview.md).
+- Read about all the [security features in Azure Backup](security-overview.md).
backup Restore Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-managed-disks.md
Last updated 01/07/2021
-# Restore Azure Managed Disks (in preview)
-
->[!IMPORTANT]
->Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
->
->[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign-up for the preview.
+# Restore Azure Managed Disks
This article explains how to restore [Azure Managed Disks](../virtual-machines/managed-disks-overview.md) from a restore point created by Azure Backup.
backup Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Guidance**: The endpoints used by Azure Backup (including the Microsoft Azure Recovery Services agent) are all managed by Microsoft. You are responsible for any additional controls you wish to deploy to your on-premises systems.
-- [Understand networking and access support for the MARS agent](https://docs.microsoft.com/azure/backup/backup-support-matrix-mars-agent#networking-and-access-support)
+- [Understand networking and access support for the MARS agent](./backup-support-matrix-mars-agent.md#networking-and-access-support)
**Responsibility**: Customer
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Guidance**: If you are using the MARS agent on an Azure Virtual Machine that is being protected by a network security group or Azure Firewall, use Azure Activity Log to monitor configuration of the NSG or Firewall. You may create alerts within Azure Monitor that will trigger when changes to these resources take place.
-- [View and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [View and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [Create, view, and manage activity log alerts by using Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [Create, view, and manage activity log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
Also, ingest logs via Azure Monitor to aggregate security data generated by Azure Backup. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use storage accounts for long-term/archival storage. Alternatively, you can on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM).
-- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [Using diagnostic settings for Recovery Services Vaults](backup-azure-diagnostic-events.md)
Also, ingest logs via Azure Monitor to aggregate security data generated by Azur
Additionally, Azure Backup sends diagnostics events that can be collected and used for the purposes of analysis, alerting and reporting. You can configure diagnostics settings for a Recovery Services vault via the Azure portal. You can send one or more diagnostics events to a Storage Account, Event Hub, or a Log Analytics workspace.
-- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [Using diagnostic settings for Recovery Services Vaults](backup-azure-diagnostic-events.md)
Additionally, Azure Backup sends diagnostics events that can be collected and us
**Guidance**: In Azure Monitor, set the log retention period for Log Analytics workspaces associated with your Azure Recovery Services vaults according to your organization's compliance regulations.
-- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
Enable Azure Activity Log diagnostic settings and send the logs to a Log Analyti
- [Monitoring Azure Backup workloads](backup-azure-monitoring-built-in-monitor.md)
-- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
-- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](/azure/azure-monitor/platform/activity-log)
+- [How to collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
You can also onboard a Log Analytics workspace to Azure Sentinel as it provides
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-- [Create, view, and manage log alerts using Azure Monitor](/azure/azure-monitor/platform/alerts-log)
+- [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md)
**Responsibility**: Customer
In addition, use Azure AD risk detections to view alerts and reports on risky us
**Guidance**: Azure Active Directory (Azure AD) provides logs to help you discover stale accounts. In addition, use Azure AD access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access should be reviewed on a regular basis to make sure only the right users have continued access.
-- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure AD access reviews](../active-directory/governance/access-reviews-overview.md)
You have access to Azure AD sign-in activity, audit and risk event log sources,
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired log alerts within Log Analytics.
-- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [How to on-board Azure Sentinel](../sentinel/quickstart-onboard.md)
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Currently not available; Customer Lockbox is not yet supported for Azure Backup.
-- [List of Customer Lockbox-supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox-supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
When backing up with the MARS agent or using a Recovery Services vault encrypted
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production Azure Recovery Services vaults as well as other critical or related resources.
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
When backing up with the MARS agent or using a Recovery Services vault encrypted
The underlying platform is scanned and patched by Microsoft. Review the security controls available for Azure Backup to reduce service configuration related vulnerabilities.
-- [Understanding security controls available for Azure Backup](/azure/backup/backup-security-controls)
+- [Understanding security controls available for Azure Backup]()
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Additional information is available at the referenced links.
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Additional information is available at the referenced links.
**Guidance**: Define and implement standard security configurations for your Recovery Services vault with Azure Policy. Use Azure Policy aliases in the "Microsoft.RecoveryServices" namespace to create custom policies to audit or enforce the configuration of your Recovery Services vaults.
-- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
Backup customer-managed keys within Azure Key Vault.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.RecoveryServices**:
Backup customer-managed keys within Azure Key Vault.
**Guidance**: For on-premises backup, encryption-at-rest is provided using the passphrase you provide when backing up to Azure. For Azure VMs, data is encrypted-at-rest using Storage Service Encryption (SSE). You may enable soft-delete in Key Vault to protect keys against accidental or malicious deletion. -- [How to enable soft-delete in Key Vault](https://docs.microsoft.com/azure/storage/blobs/soft-delete-blob-overview?tabs=azure-portal)
+- [How to enable soft-delete in Key Vault](../storage/blobs/soft-delete-blob-overview.md?tabs=azure-portal)
**Responsibility**: Customer
Additionally, clearly mark subscriptions and create a naming system to clearly i
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-pop-locations.md
This article lists current Metros containing point-of-presence (POP) locations,
## Next steps
-* To get the latest IP addresses for allow listing, see the [Azure CDN Edge Nodes API](https://docs.microsoft.com/rest/api/cdn/edgenodes).
+* To get the latest IP addresses for allow listing, see the [Azure CDN Edge Nodes API](/rest/api/cdn/edgenodes).
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
-2. Create a new resource group using the [Azure portal](/azure/azure-resource-manager/management/manage-resource-groups-portal) or [PowerShell](/azure/azure-resource-manager/management/manage-resource-groups-powershell). This step is optional if you are using an existing resource group.
+2. Create a new resource group using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you are using an existing resource group.
-3. Create a new storage account using the [Azure portal](/azure/storage/common/storage-account-create?tabs=azure-portal) or [PowerShell](/azure/storage/common/storage-account-create?tabs=azure-powershell). This step is optional if you are using an existing storage account.
+3. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
-4. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](/azure/storage/blobs/storage-quickstart-blobs-portal#upload-a-block-blob), [AzCopy](/azure/storage/common/storage-use-azcopy-blobs-upload?toc=/azure/storage/blobs/toc.json) or [PowerShell](/azure/storage/blobs/storage-quickstart-blobs-powershell#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
+4. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
5. (Optional) Create a key vault and upload the certificates.
- - Certificates can be attached to cloud services to enable secure communication to and from the service. In order to use certificates, their thumbprints must be specified in your Service Configuration (.cscfg) file and uploaded to a key vault. A key vault can be created through the [Azure portal](/azure/key-vault/general/quick-create-portal) or [PowerShell](/azure/key-vault/general/quick-create-powershell).
+ - Certificates can be attached to cloud services to enable secure communication to and from the service. In order to use certificates, their thumbprints must be specified in your Service Configuration (.cscfg) file and uploaded to a key vault. A key vault can be created through the [Azure portal](../key-vault/general/quick-create-portal.md) or [PowerShell](../key-vault/general/quick-create-powershell.md).
- The associated key vault must be located in the same region and subscription as the cloud service. - The associated key vault must be granted the appropriate permissions so that the Cloud Services (extended support) resource can retrieve certificates from Key Vault. For more information, see [Certificates and Key Vault](certificates-and-key-vault.md) - The key vault needs to be referenced in the OsProfile section of the ARM template shown in the steps below.
This tutorial explains how to create a Cloud Service (extended support) deployme
- Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/faq.md
No, Cloud Service (extended support) deployments are tied to a cluster like Clou
### When do I need to migrate? Estimating the time required and the complexity of migration depends on a range of variables. Planning is the most effective step to understand the scope of work, blockers, and complexity of migration.
-## Networking
+## Networking
### Why can't I create a deployment without a virtual network? Virtual networks are a required resource for any deployment on Azure Resource Manager. Cloud Services (extended support) deployments must live inside a virtual network.
Customers are billed for IP Address use on Cloud Services (extended support) jus
### Can I use a DNS name with Cloud Services (extended support)? Yes. Cloud Services (extended support) can also be given a DNS name. With Azure Resource Manager, the DNS label is an optional property of the public IP address that is assigned to the Cloud Service. The format of the DNS name for Azure Resource Manager based deployments is `<userlabel>.<region>.cloudapp.azure.com`
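The DNS name format above can be sketched as a simple string composition; the label and region values in this example are hypothetical, not taken from any real deployment:

```python
# Sketch of composing the Azure Resource Manager DNS name for a cloud service,
# following the <userlabel>.<region>.cloudapp.azure.com format described above.
def cloud_service_fqdn(user_label: str, region: str) -> str:
    """Return the DNS name for the given label and region."""
    return f"{user_label}.{region}.cloudapp.azure.com"

print(cloud_service_fqdn("contoso-app", "westus2"))
# contoso-app.westus2.cloudapp.azure.com
```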
+### Can I update or change the virtual network reference for an existing cloud service (extended support)?
+No. A virtual network reference is mandatory during the creation of a cloud service. For an existing cloud service, the virtual network reference cannot be changed. The virtual network address space itself can be modified using the Virtual Network APIs.
+ ## Certificates & Key Vault ### Why do I need to manage my certificates on Cloud Services (extended support)?
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
[4015221]: https://support.microsoft.com/kb/4015221 [4015583]: https://support.microsoft.com/kb/4015583 [4015219]: https://support.microsoft.com/kb/4015219
-[4023136]: https://support.microsoft.com/kb/4023136
+[4023136]: https://support.microsoft.com/topic/how-to-configure-daylight-saving-time-for-microsoft-windows-operating-systems-83a0992c-bce3-336a-d64d-f7bdfdbcd7c8
[4019264]: https://support.microsoft.com/kb/4019264 [4014545]: https://support.microsoft.com/kb/4014545 [4014508]: https://support.microsoft.com/kb/4014508
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
In this scenario, you should select a different region or SKU to deploy your Clo
### List SKUs in region using Azure CLI
-You can use the [az vm list-skus](https://docs.microsoft.com/cli/azure/vm.html#az_vm_list_skus) command.
+You can use the [az vm list-skus](/cli/azure/vm#az_vm_list_skus) command.
- Use the `--location` parameter to filter output to location you're using. - Use the `--size` parameter to search by a partial size name.
You can use the [az vm list-skus](https://docs.microsoft.com/cli/azure/vm.html#a
#### List SKUs in region using PowerShell
-You can use the [Get-AzComputeResourceSku](https://docs.microsoft.com/powershell/module/az.compute/get-azcomputeresourcesku) command.
+You can use the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azcomputeresourcesku) command.
- Filter the results by location. - You must have the latest version of PowerShell for this command.
Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.Res
#### List SKUs in region using REST API
-You can use the [Resource Skus - List](https://docs.microsoft.com/rest/api/compute/resourceskus/list) operation. It returns available SKUs and regions in the following format:
+You can use the [Resource Skus - List](/rest/api/compute/resourceskus/list) operation. It returns available SKUs and regions in the following format:
```json {
For more allocation failure solutions and to better understand how they're gener
> [!div class="nextstepaction"] > [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
-If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
+If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
cloud-services Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/security-baseline.md
Microsoft Azure Cloud Services (Classic) cannot be placed in Azure Resource Mana
- [Network Security Group overview](../virtual-network/network-security-groups-overview.md) -- [Virtual Network peering](https://docs.microsoft.com/azure/cloud-services/cloud-services-connectivity-and-networking-faq#how-can-i-use-azure-resource-manager-virtual-networks-with-cloud-services)
+- [Virtual Network peering](./cloud-services-connectivity-and-networking-faq.md#how-can-i-use-azure-resource-manager-virtual-networks-with-cloud-services)
**Responsibility**: Customer
Prevent incoming traffic to the default URL or name of your Cloud Services, for
Configure a Deny Apply rule to classic subscription administrator assignments. By default, after an internal endpoint is defined, communication can flow from any role to the internal endpoint of a role without any restrictions. To restrict communication, you must add a NetworkTrafficRules element to the ServiceDefinition element in the service definition file. -- [How can I block/disable incoming traffic to the default URL of my cloud service](https://docs.microsoft.com/azure/cloud-services/cloud-services-connectivity-and-networking-faq#how-can-i-blockdisable-incoming-traffic-to-the-default-url-of-my-cloud-service)
+- [How can I block/disable incoming traffic to the default URL of my cloud service](./cloud-services-connectivity-and-networking-faq.md#how-can-i-blockdisable-incoming-traffic-to-the-default-url-of-my-cloud-service)
-- [Azure DDOS protection](https://docs.microsoft.com/azure/cloud-services/cloud-services-connectivity-and-networking-faq#how-do-i-prevent-receiving-thousands-of-hits-from-unknown-ip-addresses-that-might-indicate-a-malicious-attack-to-the-cloud-service)
+- [Azure DDOS protection](./cloud-services-connectivity-and-networking-faq.md#how-do-i-prevent-receiving-thousands-of-hits-from-unknown-ip-addresses-that-might-indicate-a-malicious-attack-to-the-cloud-service)
-- [Block a specific IP address](https://docs.microsoft.com/azure/cloud-services/cloud-services-startup-tasks-common#block-a-specific-ip-address)
+- [Block a specific IP address](./cloud-services-startup-tasks-common.md#block-a-specific-ip-address)
**Responsibility**: Customer
Gather insight from Activity log, a platform log in Azure, into subscription-lev
Create a diagnostic setting to send the Activity log to Azure Monitor, Azure Event Hubs to forward outside of Azure, or to Azure Storage for archival. Configure Azure Monitor for notification alerts when critical resources in your Azure Cloud Services are changed. -- [Azure Activity log](/azure/azure-monitor/platform/activity-log)
+- [Azure Activity log](../azure-monitor/essentials/activity-log.md)
-- [Create, view, and manage activity log alerts by using Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [Create, view, and manage activity log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
- [Traffic Manager Monitoring](../traffic-manager/traffic-manager-monitoring.md)
Create a diagnostic setting to send the Activity log to Azure Monitor, Azure Eve
**Guidance**: Microsoft maintains the time sources for Azure resources, including Azure Cloud Services. Customers might need to create a network rule to allow access to a time server used in their environment, over UDP port 123. -- [NTP server access](https://docs.microsoft.com/azure/firewall/protect-windows-virtual-desktop#additional-considerations)
+- [NTP server access](../firewall/protect-windows-virtual-desktop.md#additional-considerations)
**Responsibility**: Shared
Azure Cloud Services can be monitored by Application Insights for availability,
- [Turn on diagnostics in Visual Studio before deployment](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines#to-turn-on-diagnostics-in-visual-studio-before-deployment) -- [View change history](/azure/azure-monitor/platform/activity-log#view-change-history)
+- [View change history](../azure-monitor/essentials/activity-log.md#view-change-history)
- [Application Insights for Azure Cloud service (Classic)](../azure-monitor/app/cloudservices.md)
The Azure Diagnostic extension collects and stores data in an Azure Storage acco
- [Enable diagnostics in Azure Cloud Services using PowerShell](cloud-services-diagnostics-powershell.md) -- [Store and view diagnostic data in Azure Storage](https://docs.microsoft.com/azure/cloud-services/diagnostics-extension-to-storage?&amp;preserve-view=true)
+- [Store and view diagnostic data in Azure Storage](./diagnostics-extension-to-storage.md?preserve-view=true)
**Responsibility**: Customer
The Azure Diagnostic extension collects and stores data in an Azure Storage acco
**Guidance**: Microsoft Antimalware for Azure protects Azure Cloud Services and virtual machines. You also have the option to deploy third-party security solutions, such as web application firewalls, network firewalls, antimalware, intrusion detection and prevention systems (IDS or IPS), and more. -- [What are the features and capabilities that Azure basic IPS/IDS and DDOS provides](https://docs.microsoft.com/azure/cloud-services/cloud-services-configuration-and-management-faq#what-are-the-features-and-capabilities-that-azure-basic-ipsids-and-ddos-provides)
+- [What are the features and capabilities that Azure basic IPS/IDS and DDOS provides](./cloud-services-configuration-and-management-faq.md#what-are-the-features-and-capabilities-that-azure-basic-ipsids-and-ddos-provides)
**Responsibility**: Customer
Get-AzRoleAssignment -IncludeClassicAdministrators
Review the differences between classic subscription administrative roles. -- [Differences between three classic subscription administrative roles](https://docs.microsoft.com/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles)
+- [Differences between three classic subscription administrative roles](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles)
**Responsibility**: Customer
Review the differences between classic subscription administrative roles.
**Guidance**: It is recommended to create standard operating procedures around the use of dedicated administrative accounts, based on available roles and the permissions required to operate and manage the Azure Cloud Services resources. -- [Differences between the classic subscription administrative roles](https://docs.microsoft.com/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles)
+- [Differences between the classic subscription administrative roles](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles)
**Responsibility**: Customer
You can also edit the "permissionLevel" in Azure Cloud Service's Certificate ele
- [How to create management groups](../governance/management-groups/create-management-group-portal.md) -- [WebRole Schema](https://docs.microsoft.com/azure/cloud-services/schema-csdef-webrole#Certificate)
+- [WebRole Schema](./schema-csdef-webrole.md#Certificate)
**Responsibility**: Customer
The application data stored in temporary disks is not encrypted. The customer is
Additionally, Application Insights can monitor Azure Cloud Services apps for availability, performance, failures, and usage. This uses combined data from Application Insights SDKs with Azure Diagnostics data from your Azure Cloud Services. -- [Create, view, and manage classic metric alerts using Azure Monitor](/azure/azure-monitor/platform/alerts-classic-portal)
+- [Create, view, and manage classic metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-classic-portal.md)
-- [Metric Alerts Overview](/azure/azure-monitor/platform/alerts-metric-overview)
+- [Metric Alerts Overview](../azure-monitor/alerts/alerts-metric-overview.md)
- [Application Insights for Azure Cloud service (Classic)](../azure-monitor/app/cloudservices.md)
When a customer chooses a specific operating system version for their Azure Clou
- [How to Configure Cloud service (Classic)](cloud-services-how-to-configure-portal.md) -- [Manage Guest OS version](https://docs.microsoft.com/azure/cloud-services/cloud-services-how-to-configure-portal#manage-guest-os-version)
+- [Manage Guest OS version](./cloud-services-how-to-configure-portal.md#manage-guest-os-version)
**Responsibility**: Shared
We suggest thinking through these scenarios:
Supporting documentation: -- [Risk evaluation of your Azure resources](https://docs.microsoft.com/azure/security/fundamentals/ddos-best-practices#risk-evaluation-of-your-azure-resources)
+- [Risk evaluation of your Azure resources](../security/fundamentals/ddos-best-practices.md#risk-evaluation-of-your-azure-resources)
**Responsibility**: Customer
To begin with, specify a plain text password, convert it to a secure string usin
Additionally, it is recommended to store the private keys for certificates used in Azure Cloud Services to a secured storage. -- [Configure Remote Desktop from PowerShell](https://docs.microsoft.com/azure/cloud-services/cloud-services-role-enable-remote-desktop-powershell#configure-remote-desktop-from-powershell)
+- [Configure Remote Desktop from PowerShell](./cloud-services-role-enable-remote-desktop-powershell.md#configure-remote-desktop-from-powershell)
**Responsibility**: Customer
To begin, specify a plain text password, change it to a secure string using Conv
Store the private keys for certificates used in Azure Cloud Services to a secured storage location. -- [Configure Remote Desktop from PowerShell](https://docs.microsoft.com/azure/cloud-services/cloud-services-role-enable-remote-desktop-powershell#configure-remote-desktop-from-powershell)
+- [Configure Remote Desktop from PowerShell](./cloud-services-role-enable-remote-desktop-powershell.md#configure-remote-desktop-from-powershell)
**Responsibility**: Customer
Enable the Antimalware extension with a PowerShell script in the Startup Task in
Choose the Adaptive application control feature in Azure Security Center, an intelligent, automated, end-to-end solution. It helps harden your machines against malware and enables you to block or alert on attempts to run malicious applications, including those that might otherwise be missed by antimalware solutions. -- [How can I add an Antimalware extension for my Azure Cloud Services in an automated way](https://docs.microsoft.com/azure/cloud-services/cloud-services-configuration-and-management-faq#how-can-i-add-an-antimalware-extension-for-my-cloud-services-in-an-automated-way)
+- [How can I add an Antimalware extension for my Azure Cloud Services in an automated way](./cloud-services-configuration-and-management-faq.md#how-can-i-add-an-antimalware-extension-for-my-cloud-services-in-an-automated-way)
-- [Antimalware Deployment Scenarios](https://docs.microsoft.com/azure/security/fundamentals/antimalware#antimalware-deployment-scenarios)
+- [Antimalware Deployment Scenarios](../security/fundamentals/antimalware.md#antimalware-deployment-scenarios)
- [Adaptive application controls](../security-center/security-center-adaptive-application.md)
Clearly mark subscriptions (for example, production, non-production) and create
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cloud-shell Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/security-baseline.md
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
Apply colors to your chart by clicking on the **Format** tool and **Data colors*
## Next steps > [!div class="nextstepaction"]
->[Streaming anomaly detection with Azure Databricks](anomaly-detection-streaming-databricks.md)
+>[Streaming anomaly detection with Azure Databricks](../overview.md)
cognitive-services Video Reviews Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md
For the video frames (images), use the following images:
![Video frame thumbnail 1](images/ams-video-frame-thumbnails-1.PNG) | ![Video frame thumbnail 2](images/ams-video-frame-thumbnails-2.PNG) | ![Video frame thumbnail 3](images/ams-video-frame-thumbnails-3.PNG) | | :: | :: | :: |
-[Frame 1](https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame1-00-17.PNG) | [Frame 2](https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-2-01-04.PNG) | [Frame 3](https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-3-02-24.PNG) |
+Frame 1 | Frame 2 | Frame 3 |
## Create your Visual Studio project
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Use this table to determine which speaking styles are supported for each neural
|-||-| | `en-US-AriaNeural` | `style="newscast-formal"` | Expresses a formal, confident and authoritative tone for news delivery | | | `style="newscast-casual"` | Expresses a versatile and casual tone for general news delivery |
+| | `style="narration-professional"` | Expresses a professional, objective tone for content reading |
| | `style="customerservice"` | Expresses a friendly and helpful tone for customer support | | | `style="chat"` | Expresses a casual and relaxed tone | | | `style="cheerful"` | Expresses a positive and happy tone |
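As a minimal sketch, a style from the table above is applied with the `mstts:express-as` element inside a `voice` element; the sample text here is illustrative:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-AriaNeural">
    <mstts:express-as style="newscast-formal">
      Good evening. Here are tonight's top stories.
    </mstts:express-as>
  </voice>
</speak>
```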
For more information, see <a href="https://docs.microsoft.com/swift/cognitive-se
## Next steps
-* [Language support: voices, locales, languages](language-support.md)
+* [Language support: voices, locales, languages](language-support.md)
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
Another type of authentication uses *user access tokens* to authenticate against
## Authentication Options
-The following table shows the Azure Communication Services client libraries and their authentication options:
+The following table shows the Azure Communication Services SDKs and their authentication options:
-| Client Library | Authentication option |
+| SDK | Authentication option |
| -- | -| | Identity | Access Key or Managed Identity | | SMS | Access Key or Managed Identity |
Each authorization option is briefly described below:
### Access Key
-Access key authentication is suitable for service applications running in a trusted service environment. Your access key can be found in the Azure Communication Services portal. The service application uses it as a credential to initialize the corresponding client libraries. See an example of how it is used in the [Identity client library](../quickstarts/access-tokens.md).
+Access key authentication is suitable for service applications running in a trusted service environment. Your access key can be found in the Azure Communication Services portal. The service application uses it as a credential to initialize the corresponding SDKs. See an example of how it is used in the [Identity SDK](../quickstarts/access-tokens.md).
Since the access key is part of the connection string of your resource, authentication with a connection string is equivalent to authentication with an access key.
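Since a connection string bundles the endpoint and the access key, splitting it apart can be sketched as below; the resource name and key are placeholders, not real credentials:

```python
# Sketch: splitting an Azure Communication Services connection string into its
# endpoint and access key segments. Values are placeholders for illustration.
def parse_connection_string(conn_str: str) -> dict:
    """Parse 'key=value;key=value' pairs into a dict."""
    return dict(
        segment.split("=", 1)
        for segment in conn_str.rstrip(";").split(";")
    )

conn = "endpoint=https://contoso.communication.azure.com/;accesskey=fakekey123"
parsed = parse_connection_string(conn)
print(parsed["endpoint"])  # https://contoso.communication.azure.com/
```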
If you wish to call ACS' APIs manually using an access key, then you will need t
Managed Identities provide superior security and ease of use over other authorization options. For example, by using Azure AD, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Azure AD where possible.
-To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the client libraries. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
+To set up a managed identity, [create a registered application from the Azure CLI](../quickstarts/managed-identity-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [managed identity](../quickstarts/managed-identity.md) is used.
### User Access Tokens
-User access tokens are generated using the Identity client library and are associated with users created in the Identity client library. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
+User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and managed identity authentication in that it is used to authenticate a user rather than a secured Azure resource.
## Next steps
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
The section below gives an overview of the call flows in Azure Communication Ser
When you establish a peer-to-peer or group call, two protocols are used behind the scenes - HTTP (REST) for signaling and SRTP for media.
-Signaling between the client libraries or between client libraries and Communication Services Signaling Controllers is handled with HTTP REST (TLS). For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the client library will use the Transmission Control Protocol (TCP) for media.
+Signaling between the SDKs or between SDKs and Communication Services Signaling Controllers is handled with HTTP REST (TLS). For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the SDK will use the Transmission Control Protocol (TCP) for media.
Let's review the signaling and media protocols in various scenarios.
Let's review the signaling and media protocols in various scenarios.
### Case 1: VoIP where a direct connection between two devices is possible
-In one-to-one VoIP or video calls, traffic prefers the most direct path. "Direct path" means that if two client libraries can reach each other directly, they'll establish a direct connection. This is usually possible when two client libraries are in the same subnet (for example, in a subnet 192.168.1.0/24) or two when the devices each live in subnets that can see each other (client libraries in subnet 10.10.0.0/16 and 192.168.1.0/24 can reach out each other).
+In one-to-one VoIP or video calls, traffic prefers the most direct path. "Direct path" means that if two SDKs can reach each other directly, they'll establish a direct connection. This is usually possible when the two SDKs are in the same subnet (for example, in subnet 192.168.1.0/24) or when the devices live in subnets that can see each other (SDKs in subnet 10.10.0.0/16 and 192.168.1.0/24 can reach each other).
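The "same subnet" condition for a direct path can be checked with the standard library; the subnet is the one from the text, and the two device addresses are hypothetical:

```python
# Sketch: the "direct path" condition from Case 1. If both devices sit in the
# same subnet, a direct connection between them is possible.
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")
alice = ipaddress.ip_address("192.168.1.10")  # hypothetical device address
bob = ipaddress.ip_address("192.168.1.20")    # hypothetical device address

print(alice in subnet and bob in subnet)  # True
```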
:::image type="content" source="./media/call-flows/about-voice-case-1.png" alt-text="Diagram showing a Direct VOIP call between users and Communication Services.":::

### Case 2: VoIP where a direct connection between devices is not possible, but where connection between NAT devices is possible
-If two devices are located in subnets that can't reach each other (for example, Alice works from a coffee shop and Bob works from his home office) but the connection between the NAT devices is possible, the client side client libraries will establish connectivity via NAT devices.
+If two devices are located in subnets that can't reach each other (for example, Alice works from a coffee shop and Bob works from his home office) but the connection between the NAT devices is possible, the client side SDKs will establish connectivity via NAT devices.
-For Alice it will be the NAT of the coffee shop and for Bob it will be the NAT of the home office. Alice's device will send the external address of her NAT and Bob's will do the same. The client libraries learn the external addresses from a STUN (Session Traversal Utilities for NAT) service that Azure Communication Services provides free of charge. The logic that handles the handshake between Alice and Bob is embedded within the Azure Communication Services provided client libraries. (You don't need any additional configuration)
+For Alice it will be the NAT of the coffee shop, and for Bob it will be the NAT of the home office. Alice's device will send the external address of her NAT, and Bob's will do the same. The SDKs learn the external addresses from a STUN (Session Traversal Utilities for NAT) service that Azure Communication Services provides free of charge. The logic that handles the handshake between Alice and Bob is embedded within the SDKs provided by Azure Communication Services. (You don't need any additional configuration.)
:::image type="content" source="./media/call-flows/about-voice-case-2.png" alt-text="Diagram showing a VOIP call which utilizes a STUN connection.":::

### Case 3: VoIP where neither a direct nor NAT connection is possible
-If one or both client devices are behind a symmetric NAT, a separate cloud service to relay the media between the two client libraries is required. This service is called TURN (Traversal Using Relays around NAT) and is also provided by the Communication Services. The Communication Services calling client library automatically uses TURN services based on detected network conditions. Use of Microsoft's TURN service is charged separately.
+If one or both client devices are behind a symmetric NAT, a separate cloud service to relay the media between the two SDKs is required. This service is called TURN (Traversal Using Relays around NAT) and is also provided by the Communication Services. The Communication Services Calling SDK automatically uses TURN services based on detected network conditions. Use of Microsoft's TURN service is charged separately.
:::image type="content" source="./media/call-flows/about-voice-case-3.png" alt-text="Diagram showing a VOIP call which utilizes a TURN connection.":::
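The STUN and TURN handshakes described in Cases 2 and 3 follow the standard WebRTC ICE model. As an illustration only, here is how STUN and TURN servers are supplied in a generic WebRTC-style configuration object. The server URLs and credentials below are placeholders; when you use the Communication Services Calling SDK, this configuration is performed for you automatically.

```javascript
// Generic WebRTC-style ICE configuration showing how STUN and TURN servers
// are supplied to a peer connection. The URLs below are placeholders; the
// Calling SDK selects and configures STUN/TURN for you, so you don't write
// this yourself when using the SDK.
function buildIceConfig(stunUrl, turnUrl, turnUsername, turnCredential) {
  return {
    iceServers: [
      // STUN: used for external address discovery (Case 2).
      { urls: [stunUrl] },
      // TURN: used to relay media when no direct or NAT path exists (Case 3).
      { urls: [turnUrl], username: turnUsername, credential: turnCredential },
    ],
  };
}

const config = buildIceConfig(
  "stun:stun.example.contoso.com:3478",   // placeholder URL
  "turn:turn.example.contoso.com:3478",   // placeholder URL
  "user",
  "secret"
);
```

In a browser, an object of this shape would be passed to `new RTCPeerConnection(config)`; the SDK hides that step entirely.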
The default real-time protocol (RTP) for group calls is User Datagram Protocol (UDP).
:::image type="content" source="./media/call-flows/about-voice-group-calls.png" alt-text="Diagram showing UDP media process flow within Communication Services.":::
-If the client library can't use UDP for media due to firewall restrictions, an attempt will be made to use the Transmission Control Protocol (TCP). Note that the Media Processor component requires UDP, so when this happens, the Communication Services TURN service will be added to the group call to translate TCP to UDP. TURN charges will be incurred in this case unless TURN capabilities are manually disabled.
+If the SDK can't use UDP for media due to firewall restrictions, an attempt will be made to use the Transmission Control Protocol (TCP). Note that the Media Processor component requires UDP, so when this happens, the Communication Services TURN service will be added to the group call to translate TCP to UDP. TURN charges will be incurred in this case unless TURN capabilities are manually disabled.
:::image type="content" source="./media/call-flows/about-voice-group-calls-2.png" alt-text="Diagram showing TCP media process flow within Communication Services.":::
-### Case 5: Communication Services client library and Microsoft Teams in a scheduled Teams meeting
+### Case 5: Communication Services SDK and Microsoft Teams in a scheduled Teams meeting
Signaling flows through the signaling controller. Media flows through the Media Processor. The signaling controller and Media Processor are shared between Communication Services and Microsoft Teams.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Azure Communication Services Chat client libraries can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities.
+Azure Communication Services Chat SDKs can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities.
-See the [Communication Services Chat client library Overview](./sdk-features.md) to learn more about specific client library languages and capabilities.
+See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn more about specific SDK languages and capabilities.
## Chat overview
Communication Services Chat shares user-generated messages as well as system-generated messages.
## Real-time signaling
-The Chat JavaScript client library includes real-time signaling. This allows clients to listen for real-time updates and incoming messages to a chat thread without having to poll the APIs. Available events include:
+The Chat JavaScript SDK includes real-time signaling. This allows clients to listen for real-time updates and incoming messages to a chat thread without having to poll the APIs. Available events include:
- `ChatMessageReceived` - when a new message is sent to a chat thread. This event is not sent for auto-generated system messages, which were discussed in the previous topic.
- `ChatMessageEdited` - when a message is edited in a chat thread.
Real-time signaling allows your users to chat in real-time. Your services can use Azure Event Grid to subscribe to chat-related events. For more details, see [Event Handling conceptual](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services?tabs=event-grid-event-schema).
-## Using Cognitive Services with Chat client library to enable intelligent features
+## Using Cognitive Services with Chat SDK to enable intelligent features
-You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat client library to add intelligent features to your applications. For example, you can:
+You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat SDK to add intelligent features to your applications. For example, you can:
- Enable users to chat with each other in different languages.
- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming issue from a customer.
This way, the message history will contain both original and translated messages
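A minimal sketch of that pattern follows, with a stubbed `translate` function standing in for a call to the Cognitive Services Translator API. The function names and message shape are illustrative, not part of any SDK.

```javascript
// Stubbed translator: a real app would call the Azure Cognitive Services
// Translator REST API here. The tiny dictionary stands in for that call.
function translate(text, targetLanguage) {
  const dictionary = { hola: "hello" }; // stand-in for a translation service response
  return dictionary[text.toLowerCase()] ?? text;
}

// Keep both the original and the translated text in the message history,
// as described above, so either version can be rendered.
function storeWithTranslation(history, message, targetLanguage) {
  history.push({
    original: message,
    translated: translate(message, targetLanguage),
  });
  return history;
}

const history = storeWithTranslation([], "Hola", "en");
```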
> [Get started with chat](../../quickstarts/chat/get-started.md)

The following documents may be interesting to you:

-- Familiarize yourself with the [Chat client library](sdk-features.md)
+- Familiarize yourself with the [Chat SDK](sdk-features.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
- Title: Chat client library overview for Azure Communication Services
+ Title: Chat SDK overview for Azure Communication Services
-description: Learn about the Azure Communication Services chat client library.
+description: Learn about the Azure Communication Services Chat SDK.
-# Chat client library overview
+# Chat SDK overview
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Azure Communication Services Chat client libraries can be used to add rich, real-time chat to your applications.
+Azure Communication Services Chat SDKs can be used to add rich, real-time chat to your applications.
-## Chat client library capabilities
+## Chat SDK capabilities
-The following list presents the set of features which are currently available in the Communication Services chat client libraries.
+The following list presents the set of features which are currently available in the Communication Services chat SDKs.
| Group of features | Capability | JavaScript | Java | .NET | Python | iOS | Android |
|--|--|--|--|--|--|--|--|
**The proprietary signaling package is implemented using web sockets. It will fallback to long polling if web sockets are unsupported.
-## JavaScript chat client library support by OS and browser
+## JavaScript Chat SDK support by OS and browser
The following table represents the set of supported browsers and versions which are currently available.

| | Windows | macOS | Ubuntu | Linux | Android | iOS | iPad OS |
|--|--|--|--|--|--|--|--|
-| **Chat client library** | Firefox*, Chrome*, new Edge | Firefox*, Chrome*, Safari* | Chrome* | Chrome* | Chrome* | Safari* | Safari* |
+| **Chat SDK** | Firefox*, Chrome*, new Edge | Firefox*, Chrome*, Safari* | Chrome* | Chrome* | Chrome* | Safari* | Safari* |
*Note that the latest version is supported in addition to the previous two releases.<br/>
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
Every Azure Communication Services application will have **client applications**
## User access management
-Azure Communication Services client libraries require `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources. It is highly recommended to make use of a trusted service for user management. The trusted service will generate the tokens and pass them back to the client using proper encryption. A sample architecture flow can be found below:
+Azure Communication Services SDKs require `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources. It is highly recommended to make use of a trusted service for user management. The trusted service will generate the tokens and pass them back to the client using proper encryption. A sample architecture flow can be found below:
:::image type="content" source="../media/scenarios/archdiagram-access.png" alt-text="Diagram showing user access token architecture.":::
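A minimal sketch of the trusted-service pattern above, with a stubbed identity client. In a real service you'd use the Communication Services Identity SDK together with your resource's connection string, which never leaves the server; everything below is illustrative.

```javascript
// Sketch of the trusted-service pattern: the connection string stays on the
// server, and clients only ever receive short-lived user access tokens.
// `identityClient` is a stub; a real service would construct an identity
// client from the resource connection string instead.
const identityClient = {
  async createUserAndToken(scopes) {
    return {
      user: { communicationUserId: "8:acs:fake-user-id" }, // fabricated for the sketch
      token: "fake-jwt",
      expiresOn: new Date(Date.now() + 60 * 60 * 1000),
    };
  },
};

// The endpoint your client apps call (after your own authentication step)
// to obtain a token. Note it returns only the token and user ID -- never
// the connection string used to mint them.
async function issueToken(scopes) {
  const { user, token, expiresOn } = await identityClient.createUserAndToken(scopes);
  return { userId: user.communicationUserId, token, expiresOn };
}
```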
For additional information review [best identity management practices](../../sec
## Browser communication
-Azure Communications JavaScript client libraries can enable web applications with rich text, voice, and video interaction. The application directly interacts with Azure Communication Services through the client library to access the data plane and deliver real-time text, voice, and video communication. A sample architecture flow can be found below:
+Azure Communications JavaScript SDKs can enable web applications with rich text, voice, and video interaction. The application directly interacts with Azure Communication Services through the SDK to access the data plane and deliver real-time text, voice, and video communication. A sample architecture flow can be found below:
:::image type="content" source="../media/scenarios/archdiagram-browser.png" alt-text="Diagram showing the browser to browser Architecture for Communication Services.":::
Many scenarios are best served with native applications. Azure Communication Ser
## Voice and SMS over the public switched telephony network (PSTN)
-Communicating over the phone system can dramatically increase the reach of your application. To support PSTN voice and SMS scenarios, Azure Communication Services helps you [acquire phone numbers](../quickstarts/telephony-sms/get-phone-number.md) directly from the Azure portal or using REST APIs and client libraries. Once phone numbers are acquired, they can be used to reach customers using both PSTN calling and SMS in both inbound and outbound scenarios. A sample architecture flow can be found below:
+Communicating over the phone system can dramatically increase the reach of your application. To support PSTN voice and SMS scenarios, Azure Communication Services helps you [acquire phone numbers](../quickstarts/telephony-sms/get-phone-number.md) directly from the Azure portal or using REST APIs and SDKs. Once phone numbers are acquired, they can be used to reach customers using both PSTN calling and SMS in both inbound and outbound scenarios. A sample architecture flow can be found below:
> [!Note]
> During public preview, the provisioning of US phone numbers is available to customers with billing addresses located within the US and Canada.
For more information on PSTN phone numbers, see [Phone number types](../concepts
## Humans communicating with bots and other services
-Azure Communication Services supports human-to-system communication though text and voice channels, with services that directly access the Azure Communication Services data plane. For example, you can have a bot answer incoming phone calls or participate in a web chat. Azure Communication Services provides client libraries that enable these scenarios for calling and chat. A sample architecture flow can be found below:
+Azure Communication Services supports human-to-system communication through text and voice channels, with services that directly access the Azure Communication Services data plane. For example, you can have a bot answer incoming phone calls or participate in a web chat. Azure Communication Services provides SDKs that enable these scenarios for calling and chat. A sample architecture flow can be found below:
:::image type="content" source="../media/scenarios/archdiagram-bot.png" alt-text="Diagram showing Communication Services Bot architecture.":::
You may want to exchange arbitrary data between users, for example to synchronize a shared mixed reality or gaming experience. The real-time data plane used for text, voice, and video communication is available to you directly in two ways:

-- **Calling client library** - Devices in a call have access to APIs for sending and receiving data over the call channel. This is the easiest way to add data communication to an existing interaction.
+- **Calling SDK** - Devices in a call have access to APIs for sending and receiving data over the call channel. This is the easiest way to add data communication to an existing interaction.
- **STUN/TURN** - Azure Communication Services makes standards-compliant STUN and TURN services available to you. This allows you to build a heavily customized transport layer on top of these standardized primitives. You can author your own standards-compliant client or use open-source libraries such as [WinRTC](https://github.com/microsoft/winrtc).

## Next steps
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
- Title: Azure Communication Services - FAQ / Known issues
+ Title: Azure Communication Services - known issues
description: Learn more about Azure Communication Services
-# FAQ / Known issues
-This article provides information about known issues and frequently asked questions related to Azure Communication Services.
+# Known issues: Azure Communication Services client libraries
+This article provides information about limitations and known issues related to the Azure Communication Services client libraries.
-## FAQ
+> [!IMPORTANT]
+> There are multiple factors that can affect the quality of your calling experience. Refer to the **[network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements)** documentation to learn more about Communication Services network configuration and testing best practices.
-### Why is the quality of my video degraded?
-The quality of video streams is determined by the size of the client-side renderer that was used to initiate that stream. When subscribing to a remote stream, a receiver will determine its own resolution based on the sender's client-side renderer dimensions.
+## JavaScript client library
-### Why is it not possible to enumerate/select mic/speaker devices on Safari?
+This section provides information about known issues associated with JavaScript voice and video calling client libraries in Azure Communication Services.
-Applications can't enumerate/select mic/speaker devices (like bluetooth) on Safari iOS/iPad. It's a limitation of the OS - there's always only one device.
+### After refreshing the page, user is not removed from the call immediately
+If a user is in a call and decides to refresh the page, the Communication Services client library may not be able to inform the Communication Services media service that it's about to disconnect. The media service won't remove the user from the call immediately; instead, it will wait for the user to rejoin, assuming a problem with network connectivity. The user will be removed from the call after the media service times out.
-For Safari on MacOS - app can't enumerate/select speaker through Communication Services Device Manager - these have to be selected via the OS. If you use Chrome on MacOS, the app can enumerate/select devices through the Communication Services Device Manager.
+We encourage developers to build experiences that don't require end users to refresh the page of your application while participating in a call. If a user does refresh the page, the best way for the app to handle it is to reuse the same Communication Services user ID after the user returns to the application.
-## Known issues
+From the perspective of the other participants in the call, the user will remain in the call for a predefined amount of time (1-2 minutes).
+If the user rejoins with the same Communication Services user ID, they'll be represented as the same, existing object in the `remoteParticipants` collection.
+If the user was previously sending video, the `videoStreams` collection will keep the previous stream information until the service times out and removes it. In this scenario, the application may decide to observe any new streams added to the collection and render the one with the highest `id`.
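The stream-selection suggestion above can be sketched as follows. The stream object shape and the numeric `id` comparison are illustrative assumptions, not the SDK's actual types.

```javascript
// After a rejoin, `videoStreams` may briefly contain both the stale stream
// (awaiting service timeout) and the newly added one. Picking the entry
// with the highest `id` selects the most recently added stream.
function newestStream(videoStreams) {
  return videoStreams.reduce(
    (best, s) => (best === null || s.id > best.id ? s : best),
    null
  );
}

// Illustrative stream objects: the stale pre-refresh stream and the new one.
const streams = [
  { id: 3, label: "stale" },
  { id: 7, label: "fresh" },
];
const toRender = newestStream(streams);
```

An app would typically run this whenever the collection changes and re-render only when the selected stream differs from the one currently displayed.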
-This section provides information about known issues associated with Azure Communication Services.
+
+### It's not possible to render multiple previews from multiple devices on web
+This is a known limitation. Refer to the [calling client library overview](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) for more information.
+
+### Enumeration of the microphone and speaker devices is not possible in Safari when the application runs on iOS or iPadOS
+Applications can't enumerate/select mic/speaker devices (like Bluetooth) on Safari iOS/iPad. This is a known operating system limitation.
+
+If you're using Safari on macOS, your app will not be able to enumerate/select speakers through the Communication Services Device Manager. In this scenario, devices must be selected via the OS. If you use Chrome on macOS, the app can enumerate/select devices through the Communication Services Device Manager.
+
+### Audio connectivity is lost when receiving SMS messages or calls during an ongoing VoIP call
+Mobile browsers don't maintain connectivity while in the background state. This can lead to a degraded call experience if the VoIP call is interrupted by a text message or an incoming PSTN call that pushes your application into the background.
+
+<br/>Client library: Calling (JavaScript)
+<br/>Browsers: Safari, Chrome
+<br/>Operating System: iOS, Android
### Repeatedly switching video devices may cause video streaming to temporarily stop
Switching between video devices may cause your video stream to pause while the s
#### Possible causes

Streaming from and switching between media devices is computationally intensive. Switching frequently can cause performance degradation. Developers are encouraged to stop one device stream before starting another.
+### Bluetooth headset microphone is not detected and therefore is not audible during the call on Safari on iOS
+Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device won't be listed in the available microphone options, and other participants won't be able to hear you if you try using Bluetooth over Safari.
+
+#### Possible causes
+This is a known macOS/iOS/iPadOS operating system limitation.
+
+With Safari on **macOS** and **iOS/iPadOS**, it's not possible to enumerate/select speaker devices through the Communication Services Device Manager because speaker enumeration/selection isn't supported by Safari. In this scenario, your device selection should be updated via the operating system.
+
+### Rotation of a device can create poor video quality
+Users may experience degraded video quality when devices are rotated.
+
+<br/>Devices affected: Google Pixel 5, Google Pixel 3a, Apple iPad 8, and Apple iPad X
+<br/>Client library: Calling (JavaScript)
+<br/>Browsers: Safari, Chrome
+<br/>Operating System: iOS, Android
++
+### Camera switching makes the screen freeze
+When a Communication Services user joins a call using the JavaScript calling client library and then hits the camera switch button, the UI may become completely unresponsive until the application is refreshed or the browser is pushed to the background by the user.
+
+<br/>Devices affected: Google Pixel 4a
+<br/>Client library: Calling (JavaScript)
+<br/>Browsers: Chrome
+<br/>Operating System: iOS, Android
++
+#### Possible causes
+Under investigation.
+
+### If the video signal was stopped while the call is in "connecting" state, the video will not be sent after the call starts
+If users decide to quickly turn video on and off while the call is in the `Connecting` state, this may lead to a problem with the stream acquired for the call. We encourage developers to build their apps in a way that doesn't require video to be turned on or off while the call is in the `Connecting` state. This issue may cause degraded video performance in the following scenarios:
+
+ - If the user starts with audio and then starts and stops video while the call is in the `Connecting` state.
+ - If the user starts with audio and then starts and stops video while the call is in the `Lobby` state.
++
+#### Possible causes
+Under investigation.
+
+### Sometimes it takes a long time to render remote participant videos
+During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This could be caused by a network environment that requires further configuration. Refer to the [network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements) documentation for network configuration guidance.
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-The Azure Communication Services chat and calling client libraries create a real-time messaging channel that allows signaling messages to be pushed to connected clients in an efficient, reliable manner. This enables you to build rich, real-time communication functionality into your applications without the need to implement complicated HTTP polling logic. However, on mobile applications, this signaling channel only remains connected when your application is active in the foreground. If you want your users to receive incoming calls or chat messages while your application is in the background, you should use push notifications.
+The Azure Communication Services chat and calling SDKs create a real-time messaging channel that allows signaling messages to be pushed to connected clients in an efficient, reliable manner. This enables you to build rich, real-time communication functionality into your applications without the need to implement complicated HTTP polling logic. However, on mobile applications, this signaling channel only remains connected when your application is active in the foreground. If you want your users to receive incoming calls or chat messages while your application is in the background, you should use push notifications.
Push notifications allow you to send information from your application to users' mobile devices. You can use push notifications to show a dialog, play a sound, or display incoming call UI. Azure Communication Services provides integrations with [Azure Event Grid](../../event-grid/overview.md) and [Azure Notification Hubs](../../notification-hubs/notification-hubs-push-notification-overview.md) that enable you to add push notifications to your apps.
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Prices for Azure Communication Services are generally based on a pay-as-you-go model.
## Voice/Video calling and screen sharing
-Azure Communication Services allow for adding voice/video calling and screen sharing to your applications. You can embed the experience into your applications using JavaScript, Objective-C (Apple), Java (Android), or .NET client libraries. Refer to our [full list of available client libraries](./sdk-options.md).
+Azure Communication Services allow for adding voice/video calling and screen sharing to your applications. You can embed the experience into your applications using JavaScript, Objective-C (Apple), Java (Android), or .NET SDKs. Refer to our [full list of available SDKs](./sdk-options.md).
### Pricing
Calling and screen-sharing services are charged on a per-minute, per-participant basis.
Each participant of the call will count in billing for each minute they're connected to the call. This holds true regardless of whether the user is video calling, voice calling, or screen-sharing.
-### Pricing example: Group audio/video call using JS and iOS client libraries
+### Pricing example: Group audio/video call using JS and iOS SDKs
-Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used the JS client libraries, Charlie iOS client libraries.
+Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used the JS SDKs, and Charlie used the iOS SDK.
- The call lasts a total of 60 minutes.
- Alice and Bob participated for the entire call. Alice turned on her video for five minutes and shared her screen for 23 minutes. Bob had the video on for the whole call (60 minutes) and shared his screen for 12 minutes.
**Total cost for the group call**: $0.48 + $0.172 = $0.652
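The arithmetic behind these totals can be sketched as below. The $0.004 per-participant, per-minute rate and Charlie's 43 connected minutes are assumptions inferred from the totals shown above ($0.48 for Alice and Bob, $0.172 for Charlie); check the current pricing page for actual rates.

```javascript
// Per-participant, per-minute billing: each participant is billed for the
// minutes they're connected, regardless of video/screen-share use.
// The rate below is an assumption inferred from the stated totals.
const RATE_PER_PARTICIPANT_MINUTE = 0.004;

// Sum the cost across participants, given each one's connected minutes.
function callCost(participantMinutes) {
  return participantMinutes.reduce(
    (sum, minutes) => sum + minutes * RATE_PER_PARTICIPANT_MINUTE,
    0
  );
}

// Alice: 60 min, Bob: 60 min, Charlie: 43 min (assumed from the $0.172 total).
const total = callCost([60, 60, 43]);
```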
-### Pricing example: A user of the Communication Services JS client library joins a scheduled Microsoft Teams meeting
+### Pricing example: A user of the Communication Services JavaScript SDK joins a scheduled Microsoft Teams meeting
-Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JS client library. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
+Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
- The call lasts a total of 30 minutes.
- Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
*Alice's participation is covered by her Teams license. For your convenience, your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services users, but minutes and messages originating from the Teams client will not incur a charge.

**Total cost for the visit**:

-- User joining using the Communication Services JS client library: $0.12 + $0.0024 = $0.1224
+- User joining using the Communication Services JavaScript SDK: $0.12 + $0.0024 = $0.1224
- User joining on Teams Desktop Application: $0 (covered by Teams license)

## Chat
-With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat client libraries are available for JavaScript, .NET, Python and Java. Refer to [this page to learn about client libraries](./sdk-options.md)
+With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md).
### Price
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
- Title: Client libraries and REST APIs for Azure Communication Services
+ Title: SDKs and REST APIs for Azure Communication Services
-description: Learn more about Azure Communication Services client libraries and REST APIs.
+description: Learn more about Azure Communication Services SDKs and REST APIs.
Previously updated : 03/10/2021 Last updated : 03/25/2021
-# Client libraries and REST APIs
+# SDKs and REST APIs
+Azure Communication Services capabilities are conceptually organized into six areas. Most areas have fully open-sourced client libraries programmed against published REST APIs that you can use directly over the Internet. The Calling client library uses proprietary network interfaces and is currently closed-source. Samples and more technical details for SDKs are published in the [Azure Communication Services GitHub repo](https://github.com/Azure/communication).
+
+## REST APIs
+Communication Services APIs are documented alongside other Azure REST APIs in [docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using Postman. This documentation is also offered in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
-Azure Communication Services capabilities are conceptually organized into six areas. Some areas have fully open-sourced client libraries. The Calling client library uses proprietary network interfaces and is currently closed-source, and the Chat library includes a closed-source dependency. Samples and additional technical details for client libraries are published in the [Azure Communication Services GitHub repo](https://github.com/Azure/communication).
+## SDKs
-## Client libraries
+| Assembly | Namespaces| Protocols | Capabilities |
+|--|--|--|--|
+| Azure Resource Manager | Azure.ResourceManager.Communication | [REST](https://docs.microsoft.com/rest/api/communication/communicationservice)| Provision and manage Communication Services resources|
+| Common | Azure.Communication.Common| REST | Provides base types for other SDKs |
+| Identity | Azure.Communication.Identity| [REST](https://docs.microsoft.com/rest/api/communication/communicationidentity)| Manage users and access tokens|
+| Phone numbers _(beta)_| Azure.Communication.PhoneNumbers| [REST](https://docs.microsoft.com/rest/api/communication/phonenumberadministration)| Acquire and manage phone numbers |
+| Chat | Azure.Communication.Chat| [REST](https://docs.microsoft.com/rest/api/communication/) with proprietary signaling | Add real-time, text-based chat to your applications |
+| SMS| Azure.Communication.SMS | [REST](https://docs.microsoft.com/rest/api/communication/sms)| Send and receive SMS messages|
+| Calling| Azure.Communication.Calling | Proprietary transport | Use voice, video, screen-sharing, and other real-time data communication capabilities |
-| Assembly | Protocols |Open vs. Closed Source| Namespaces | Capabilities |
-| - | | |-- | |
-| Azure Resource Manager | REST | Open | Azure.ResourceManager.Communication | Provision and manage Communication Services resources |
-| Common | REST | Open | Azure.Communication.Common | Provides base types for other client libraries |
-| Identity | REST | Open | Azure.Communication.Identity | Manage users, access tokens |
-| Phone numbers | REST | Open | Azure.Communication.PhoneNumbers | Managing phone numbers |
-| Chat | REST with proprietary signaling | Open with closed source signaling package | Azure.Communication.Chat | Add real-time text based chat to your applications |
-| SMS | REST | Open | Azure.Communication.SMS | Send and receive SMS messages |
-| Calling | Proprietary transport | Closed |Azure.Communication.Calling | Leverage voice, video, screen-sharing, and other real-time data communication capabilities |
+The Azure Resource Manager, Identity, and SMS SDKs are focused on service integration, and in many cases security issues arise if you integrate these functions into end-user applications. The Common and Chat SDKs are suitable for service and client applications. The Calling SDK is designed for client applications. An SDK focused on service scenarios is in development.
-Note that the Azure Resource Manager, Identity, and SMS client libraries are focused on service integration, and in many cases security issues arise if you integrate these functions into end-user applications. The Common and Chat client libraries are suitable for service and client applications. The Calling client library is designed for client applications. A client library focused on service scenarios is in development.
### Languages and publishing locations
-Publishing locations for individual client library packages are detailed below.
+Publishing locations for individual SDK packages are detailed below.
| Area | JavaScript | .NET | Python | Java SE | iOS | Android | Other |
| -- | -- | -- | -- | -- | -- | -- | -- |
| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.communication.calling) | - |
-## REST APIs
-Communication Services APIs are documented alongside other Azure REST APIs in [docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using Postman. This documentation is also offered in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
+## REST API Throttles
+Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a `429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](https://docs.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+| API | Throttle |
+|||
+| [All Search Telephone Number Plan APIs](https://docs.microsoft.com/rest/api/communication/phonenumberadministration) | 4 requests/day |
+| [Purchase Telephone Number Plan](https://docs.microsoft.com/rest/api/communication/phonenumberadministration/purchasesearch) | 1 request/day |
+| [Send SMS](https://docs.microsoft.com/rest/api/communication/sms/send) | 200 requests/minute |
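When a request trips one of these limits, a common client-side mitigation is exponential backoff with jitter before retrying. The sketch below is a minimal illustration; the `call_api` callable returning a bare status code is an assumption for the example, not a Communication Services API.

```python
import random
import time

def send_with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry a throttled call (HTTP 429), doubling the wait each attempt
    and adding jitter so many clients don't retry in lockstep."""
    for attempt in range(max_retries):
        status = call_api()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
    return 429  # still throttled after all retries

# Simulate a service that throttles the first two requests.
responses = iter([429, 429, 200])
status = send_with_backoff(lambda: next(responses), base_delay=0)
# status == 200
```

For limits as low as a few requests per day, backoff alone won't help; cache results where possible and request a limit increase through Azure Support as noted above.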
-## Additional support details
-### iOS and Android support details
+## SDK platform support details
-- Communication Services iOS client libraries target iOS version 13+, and Xcode 11+.
-- Android Java client libraries target Android API level 21+ and Android Studio 4.0+
+### iOS and Android
-### .NET support details
+- Communication Services iOS SDKs target iOS version 13+ and Xcode 11+.
+- Android Java SDKs target Android API level 21+ and Android Studio 4.0+.
-With the exception of Calling, Communication Services packages target .NET Standard 2.0 which supports the platforms listed below.
+### .NET
+
+Except for Calling, Communication Services packages target .NET Standard 2.0, which supports the platforms listed below.
Support via .NET Framework 4.6.1
- Windows 10, 8.1, 8 and 7
Support via .NET Core 2.0:
- Xamarin iOS 10.14
- Xamarin Mac 3.8
-## Calling client library timeouts
-
-The following timeouts apply to the Communication Services calling client libraries:
-
-| Action | Timeout in seconds |
-| -- | - |
-| Reconnect/removal participant | 120 |
-| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 |
-| Call Transfer operation timeout | 60 |
-| 1:1 call establishment timeout | 85 |
-| Group call establishment timeout | 85 |
-| PSTN call establishment timeout | 115 |
-| Promote 1:1 call to a group call timeout | 115 |
## API stability expectations
> [!IMPORTANT]
-> This section provides guidance on REST APIs and client libraries marked **stable**. APIs marked pre-release, preview, or beta may be changed or deprecated **without notice**.
+> This section provides guidance on REST APIs and SDKs marked **stable**. APIs marked pre-release, preview, or beta may be changed or deprecated **without notice**.
-In the future we may retire versions of the Communication Services client libraries, and we may introduce breaking changes to our REST APIs and released client libraries. Azure Communication Services will *generally* follow two supportability policies for retiring service versions:
+In the future we may retire versions of the Communication Services SDKs, and we may introduce breaking changes to our REST APIs and released SDKs. Azure Communication Services will *generally* follow two supportability policies for retiring service versions:
-- You'll be notified at least three years before being required to change code due to a Communication Services interface change. All documented REST APIs and client library APIs generally enjoy at least three years warning before interfaces are decommissioned.
-- You'll be notified at least one year before having to update client library assemblies to the latest minor version. These required updates shouldn't require any code changes because they're in the same major version. This is especially true for the Calling and Chat libraries which have real-time components that frequently require security and performance updates. We highly encourage you to keep your Communication Services client libraries updated.
+- You'll be notified at least three years before being required to change code due to a Communication Services interface change. All documented REST APIs and SDK APIs generally enjoy at least three years' warning before interfaces are decommissioned.
+- You'll be notified at least one year before having to update SDK assemblies to the latest minor version. These required updates shouldn't require any code changes because they're in the same major version. This is especially true for the Calling and Chat SDKs, which have real-time components that frequently require security and performance updates. We highly encourage you to keep your Communication Services SDKs updated.
-### API and client library decommissioning examples
+### API and SDK decommissioning examples
**You've integrated the v24 version of the SMS REST API into your application. Azure Communication releases v25.**
-You'll get 3 years warning before these APIs stop working and are forced to update to v25. This update might require a code change.
+You'll get three years' warning before these APIs stop working and you're forced to update to v25. This update might require a code change.
-**You've integrated the v2.02 version of the Calling client library into your application. Azure Communication releases v2.05.**
+**You've integrated the v2.02 version of the Calling SDK into your application. Azure Communication releases v2.05.**
-You may be required to update to the v2.05 version of the Calling client library within 12 months of the release of v2.05. This should be a simple replacement of the artifact without requiring a code change because v2.05 is in the v2 major version and has no breaking changes.
+You may be required to update to the v2.05 version of the Calling SDK within 12 months of the release of v2.05. This should be a simple replacement of the artifact without requiring a code change because v2.05 is in the v2 major version and has no breaking changes.
## Next steps
-For more information, see the following client library overviews:
+For more information, see the following SDK overviews:
-- [Calling client library Overview](../concepts/voice-video-calling/calling-sdk-features.md)
-- [Chat client library Overview](../concepts/chat/sdk-features.md)
-- [SMS client library Overview](../concepts/telephony-sms/sdk-features.md)
+- [Calling SDK Overview](../concepts/voice-video-calling/calling-sdk-features.md)
+- [Chat SDK Overview](../concepts/chat/sdk-features.md)
+- [SMS SDK Overview](../concepts/telephony-sms/sdk-features.md)
To get started with Azure Communication
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Teams interoperability allows you to create custom applications that connect use
1. Meeting details are shared with external users through your custom application.
   * **Using Graph API** Your custom Communication Services application uses the Microsoft Graph APIs to access meeting details to be shared.
   * **Using other options** For example, your meeting link can be copied from your calendar in Microsoft Teams.
-1. External users use your custom application to join the Teams meeting (via the Communication Services Calling and Chat client libraries)
+1. External users use your custom application to join the Teams meeting (via the Communication Services Calling and Chat SDKs)
The high-level architecture for this use-case looks like this:
While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call.
-When a Communication Services user joins the Teams meeting, the display name provided through the Calling client library will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
+When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
Communication Services Teams Interop is currently in private preview. When generally available, Communication Services users will be treated like "External access users". Learn more about external access in [Call, chat, and collaborate with people outside your organization in Microsoft Teams](/microsoftteams/communicate-with-users-from-other-organizations).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS client libraries. These client libraries can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response rate insights surrounding your campaigns.
+Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS SDKs. These SDKs can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response rate insights surrounding your campaigns.
-Key features of Azure Communication Services SMS client libraries include:
+Key features of Azure Communication Services SMS SDKs include:
- **Simple** setup experience for adding SMS capability to your applications.
- **High Velocity** message support over toll free numbers for A2P (Application to Person) use cases in the United States.
The following documents may be interesting to you:
-- Familiarize yourself with the [SMS client library](../telephony-sms/sdk-features.md)
+- Familiarize yourself with the [SMS SDK](../telephony-sms/sdk-features.md)
- Get an SMS capable [phone number](../../quickstarts/telephony-sms/get-phone-number.md)
- [Phone number types in Azure Communication Services](../telephony-sms/plan-solution.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
- Title: SMS client library overview for Azure Communication Services
+ Title: SMS SDK overview for Azure Communication Services
-description: Provides an overview of the SMS client library and its offerings.
+description: Provides an overview of the SMS SDK and its offerings.
Last updated 03/10/2021
-# SMS client library overview
+# SMS SDK overview
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Azure Communication Services SMS client libraries can be used to add SMS messaging to your applications.
+Azure Communication Services SMS SDKs can be used to add SMS messaging to your applications.
-## SMS client library capabilities
+## SMS SDK capabilities
-The following list presents the set of features which are currently available in our client libraries.
+The following list presents the set of features which are currently available in our SDKs.
| Group of features | Capability | JS | Java | .NET | Python |
| -- | -- | -- | -- | -- | -- |
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
[!INCLUDE [Private Preview Notice](../../includes/private-preview-include.md)] [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Azure Communication Services Calling client libraries can be used to add telephony and PSTN to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/calling-client-samples.md) to learn more about specific client library languages and capabilities.
+Azure Communication Services Calling SDKs can be used to add telephony and PSTN to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/calling-client-samples.md) to learn more about specific SDK languages and capabilities.
## Overview of telephony
Whenever your users interact with a traditional telephone number, calls are facilitated by PSTN (Public Switched Telephone Network) voice calling. To make and receive PSTN calls, you need to add telephony capabilities to your Azure Communication Services resource. In this case, signaling and media use a combination of IP-based and PSTN-based technologies to connect your users. Communication Services provides two discrete ways to reach the PSTN network: Azure Cloud Calling and SIP interface.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
## Access your MS-CV ID
-The MS-CV ID can be accessed by configuring diagnostics in the `clientOptions` object instance when initializing your client libraries. Diagnostics can be configured for any of the Azure client libraries including Chat, Identity, and VoIP calling.
+The MS-CV ID can be accessed by configuring diagnostics in the `clientOptions` object instance when initializing your SDKs. Diagnostics can be configured for any of the Azure SDKs including Chat, Identity, and VoIP calling.
### Client options example
-The following code snippets demonstrate diagnostics configuration. When the client libraries are used with diagnostics enabled, diagnostics details will be emitted to the configured event listener:
+The following code snippets demonstrate diagnostics configuration. When the SDKs are used with diagnostics enabled, diagnostics details will be emitted to the configured event listener:
# [C#](#tab/csharp) ```
chat_client = ChatClient(
## Access your call ID
-When filing a support request through the Azure portal related to calling issues, you may be asked to provide ID of the call you're referring to. This can be accessed through the calling client library:
+When filing a support request through the Azure portal related to calling issues, you may be asked to provide the ID of the call you're referring to. This can be accessed through the Calling SDK:
# [JavaScript](#tab/javascript) ```javascript
console.log(result); // your message ID will be in the result
# [JavaScript](#tab/javascript)
-The following code can be used to configure `AzureLogger` to output logs to the console using the JavaScript client library:
+The following code can be used to configure `AzureLogger` to output logs to the console using the JavaScript SDK:
```javascript import { AzureLogger } from '@azure/logger';
On Android Studio, navigate to the Device File Explorer by selecting View > Tool
-## Calling client library error codes
+## Calling SDK error codes
-The Azure Communication Services calling client library uses the following error codes to help you troubleshoot calling issues. These error codes are exposed through the `call.callEndReason` property after a call ends.
+The Azure Communication Services Calling SDK uses the following error codes to help you troubleshoot calling issues. These error codes are exposed through the `call.callEndReason` property after a call ends.
| Error code | Description | Action to take |
| -- | -- | -- |
| 403 | Forbidden / Authentication failure. | Ensure that your Communication Services token is valid and not expired. |
| 404 | Call not found. | Ensure that the number you're calling (or call you're joining) exists. |
| 408 | Call controller timed out. | Call Controller timed out waiting for protocol messages from user endpoints. Ensure clients are connected and available. |
-| 410 | Local media stack or media infrastructure error. | Ensure that you're using the latest client library in a supported environment. |
+| 410 | Local media stack or media infrastructure error. | Ensure that you're using the latest SDK in a supported environment. |
| 430 | Unable to deliver message to client application. | Ensure that the client application is running and available. |
| 480 | Remote client endpoint not registered. | Ensure that the remote endpoint is available. |
| 481 | Failed to handle incoming call. | File a support request through the Azure portal. |
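An application might translate these codes into user-facing messages with a small lookup table. The `describe_call_end` helper and its retryable flags below are our own illustrative assumptions, not part of the SDK:

```python
# Call-end codes from the table above, mapped to a description and a
# rough (assumed) judgement on whether retrying could help.
CALL_END_REASONS = {
    403: ("Forbidden / authentication failure", False),
    404: ("Call not found", False),
    408: ("Call controller timed out", True),
    410: ("Local media stack or media infrastructure error", False),
    430: ("Unable to deliver message to client application", True),
    480: ("Remote client endpoint not registered", True),
    481: ("Failed to handle incoming call", False),
}

def describe_call_end(code):
    """Turn a callEndReason code into a short diagnostic string."""
    description, retryable = CALL_END_REASONS.get(
        code, ("Unknown call end reason", False))
    suffix = " (retry may help)" if retryable else ""
    return f"{code}: {description}{suffix}"
```

In a real client you would read the code from the call object after the call ends and log it alongside the call ID requested by support.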
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/teams-embed.md
Teams Embed is an Azure Communication Services capability focused on common business-to-consumer and business-to-business calling interactions. The core of the Teams Embed system is [video and voice calling](../voice-video-calling/calling-sdk-features.md), but the Teams Embed system builds on Azure's calling primitives to deliver a complete user experience based on Microsoft Teams meetings.
-Teams Embed client libraries are closed-source and make these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas and the client library generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings you can take advantage of:
+Teams Embed SDKs are closed-source and make these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas and the SDK generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings you can take advantage of:
- Reduced development time and engineering complexity - End-user familiarity with Teams
communication-services Ui Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-overview.md
Azure Communication Services UI Framework makes it easy for you to build modern
- **Composite Components** - These components are turn-key solutions that implement common communication scenarios. You can quickly add video calling or chat experiences to your applications. Composites are open-source components built using base components.
- **Base Components** - These components are open-source building blocks that let you build custom communication experiences. Components are offered for both calling and chat capabilities and can be combined to build experiences.
-These UI client libraries all use [Microsoft's Fluent design language](https://developer.microsoft.com/fluentui/) and assets. Fluent UI provides a foundational layer for the UI Framework that has been battle tested across Microsoft products.
+These UI SDKs all use [Microsoft's Fluent design language](https://developer.microsoft.com/fluentui/) and assets. Fluent UI provides a foundational layer for the UI Framework that has been battle tested across Microsoft products.
## **Differentiating Components and Composites**
-**Base Components** are built on top of core Azure Communication Services client libraries and implement basic actions such as initializing the core client libraries, rendering video, and providing user controls for muting, video on/off, etc. You can use these **Base Components** to build your own custom layout experiences using pre-built, production ready communication components.
+**Base Components** are built on top of core Azure Communication Services SDKs and implement basic actions such as initializing the core SDKs, rendering video, and providing user controls for muting, video on/off, etc. You can use these **Base Components** to build your own custom layout experiences using pre-built, production ready communication components.
:::image type="content" source="../media/ui-framework/component-overview.png" alt-text="Overview of component for UI Framework":::
## What UI Framework is best for my project?
-Understanding these requirements will help you choose the right client library:
+Understanding these requirements will help you choose the right SDK:
-- **How much customization do you desire?** Azure Communication core client libraries don't have a UX and are designed so you can build whatever UX you want. UI Framework components provide UI assets at the cost of reduced customization.
-- **Do you require Meeting features?** The Meeting system has several unique capabilities not currently available in the core Azure Communication Services client libraries, such as blurred background and raised hand.
+- **How much customization do you desire?** Azure Communication core SDKs don't have a UX and are designed so you can build whatever UX you want. UI Framework components provide UI assets at the cost of reduced customization.
+- **Do you require Meeting features?** The Meeting system has several unique capabilities not currently available in the core Azure Communication Services SDKs, such as blurred background and raised hand.
- **What platforms are you targeting?** Different platforms have different capabilities. Details about feature availability in the various [UI SDKs are available here](ui-sdk-features.md), but key trade-offs are summarized below.
-|Client library / SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../teams-interop.md)
+|SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../teams-interop.md)|
|--|--|--|--|--|--|
|Composite Components|Low|Low|✔️|✔️|❌|
|Base Components|Medium|Medium|✔️|✔️|❌|
-|Core client libraries|High|High|Γ£ö|Γ£ö |Γ£ö
+|Core SDKs|High|High|✔️|✔️|✔️|
## Cost
An Azure Communication Services identity is required to initialize the UI Framew
Composite and Base Components are initialized using an Azure Communication Services access token. Access tokens should be procured from Azure Communication Services through a trusted service that you manage. See [Quickstart: Create Access Tokens](../../quickstarts/access-tokens.md) and [Trusted Service Tutorial](../../tutorials/trusted-service-tutorial.md) for more information.
-These client libraries also require the context for the call or chat they will join. Similar to user access tokens, this context should be disseminated to clients via your own trusted service. The list below summarizes the initialization and resource management functions that you need to operationalize.
+These SDKs also require the context for the call or chat they will join. Similar to user access tokens, this context should be disseminated to clients via your own trusted service. The list below summarizes the initialization and resource management functions that you need to operationalize.
| Contoso Responsibilities | UI Framework Responsibilities |
|--|--|
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-You can use Azure Communication Services to make and receive one to one or group voice and video calls. Your calls can be made to other Internet-connected devices and to plain-old telephones. You can use the Communication Services JavaScript, Android, or iOS client libraries to build applications that allow your users to speak to one another in private conversations or in group discussions. Azure Communication Services supports calls to and from services or Bots.
+You can use Azure Communication Services to make and receive one-to-one or group voice and video calls. Your calls can be made to other Internet-connected devices and to plain old telephones. You can use the Communication Services JavaScript, Android, or iOS SDKs to build applications that allow your users to speak to one another in private conversations or in group discussions. Azure Communication Services supports calls to and from services or bots.
## Call types in Azure Communication Services
Any time your users interact with a traditional telephone number, calls are faci
### One-to-one call
-A one-to-one call on Azure Communication Services happens when one of your users connects to another user using one of our client libraries. The call can be either VoIP or PSTN.
+A one-to-one call on Azure Communication Services happens when one of your users connects to another user using one of our SDKs. The call can be either VoIP or PSTN.
### Group call
During the preview you can use the group ID to join the same conversation. You c
For more information, see the following articles:
- Familiarize yourself with general [call flows](../call-flows.md)
- [Phone number types](../telephony-sms/plan-solution.md)
-- Learn about the [calling client library capabilities](../voice-video-calling/calling-sdk-features.md)
+- Learn about the [Calling SDK capabilities](../voice-video-calling/calling-sdk-features.md)
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
- Title: Azure Communication Services calling client library overview
+ Title: Azure Communication Services Calling SDK overview
-description: Provides an overview of the calling client library.
+description: Provides an overview of the Calling SDK.
Last updated 03/10/2021
-# Calling client library overview
+# Calling SDK overview
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-There are two separate families of Calling client libraries, for *clients* and *services.* Currently available client libraries are intended for end-user experiences: websites and native apps.
+There are two separate families of Calling SDKs, for *clients* and *services.* Currently available SDKs are intended for end-user experiences: websites and native apps.
-The Service client libraries are not yet available, and provide access to the raw voice and video data planes, suitable for integration with bots and other services.
+The Service SDKs are not yet available; when released, they will provide access to the raw voice and video data planes, suitable for integration with bots and other services.
-## Calling client library capabilities
+## Calling SDK capabilities
-The following list presents the set of features which are currently available in the Azure Communication Services Calling client libraries.
+The following list presents the set of features which are currently available in the Azure Communication Services Calling SDKs.
| Group of features | Capability | JS | Java (Android) | Objective-C (iOS) |
| -- | -- | -- | -- | -- |
The following list presents the set of features which are currently available in
| | Set / update scaling mode | ✔️ | ✔️ | ✔️ |
| | Render remote video stream | ✔️ | ✔️ | ✔️ |
+## Calling client library streaming support
+The Communication Services Calling client library supports the following streaming configurations:
+
+| Limit |Web | Android/iOS|
+|--|--|--|
+|**# of outgoing streams that can be sent simultaneously** |1 video + 1 screen sharing | 1 video + 1 screen sharing|
+|**# of incoming streams that can be rendered simultaneously** |1 video + 1 screen sharing| 6 video + 1 screen sharing |
+
+## Calling client library timeouts
+
+The following timeouts apply to the Communication Services Calling client libraries:
+| Action | Timeout in seconds |
+| -- | - |
| Reconnect/remove a participant | 120 |
+| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 |
+| Call Transfer operation timeout | 60 |
+| 1:1 call establishment timeout | 85 |
+| Group call establishment timeout | 85 |
+| PSTN call establishment timeout | 115 |
+| Promote 1:1 call to a group call timeout | 115 |
-## JavaScript calling client library support by OS and browser
+## JavaScript Calling SDK support by OS and browser
The following table represents the set of supported browsers which are currently available. We support the most recent three versions of the browser unless otherwise indicated.
-| | Chrome | Safari* | Edge (Chromium) |
+| Platform | Chrome | Safari* | Edge (Chromium) |
| -- | -- | -- | -- |
| Android | ✔️ | ❌ | ❌ |
| iOS | ❌ | ✔️**** | ❌ |
For example, this iframe allows both camera and microphone access:
<iframe allow="camera *; microphone *"> ```
-## Calling client library streaming support
-The Communication Services calling client library supports the following streaming configurations:
-
-| |Web | Android/iOS|
-|--|-||
-|**# of outgoing streams that can be sent simultaneously** |1 video or 1 screen sharing | 1 video + 1 screen sharing|
-|**# of incoming streams that can be rendered simultaneously** |1 video or 1 screen sharing| 6 video + 1 screen sharing |
-- ## Next steps > [!div class="nextstepaction"]
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
The quality of real-time media over IP is significantly impacted by the quality
Ensure that your network is configured to support the bandwidth required by concurrent Azure Communication Services media sessions and other business applications. Testing the end-to-end network path for bandwidth bottlenecks is critical to the successful deployment of your multimedia Communication Services solution.
-Below are the bandwidth requirements for the JavaScript client libraries:
+Below are the bandwidth requirements for the JavaScript SDKs:
|Bandwidth|Scenarios|
|:--|:--|
Below are the bandwidth requirements for the JavaScript client libraries:
|500 kbps|Group video calling 360p at 30fps|
|1.2 Mbps|HD Group video calling with resolution of HD 720p at 30fps|
-Below are the bandwidth requirements for the native Android and iOS client libraries:
+Below are the bandwidth requirements for the native Android and iOS SDKs:
|Bandwidth|Scenarios|
|:--|:--|
You might want to optimize further if:
| Network optimization task | Details |
| :-- | :-- |
| Plan your network | In this documentation you can find the minimum network requirements for calls. Refer to the [Teams example for planning your network](https://docs.microsoft.com/microsoftteams/tutorial-network-planner-example) |
-| External name resolution | Be sure that all computers running the Azure Communications Services client libraries can resolve external DNS queries to discover the services provided by Azure Communication Servicers and that your firewalls are not preventing access. Please ensure that the client libraries can resolve addresses *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io |
+| External name resolution | Be sure that all computers running the Azure Communication Services SDKs can resolve external DNS queries to discover the services provided by Azure Communication Services and that your firewalls are not preventing access. Please ensure that the SDKs can resolve addresses *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io |
| Maintain session persistence | Make sure your firewall doesn't change the mapped Network Address Translation (NAT) addresses or ports for UDP |
| Validate NAT pool size | Validate the network address translation (NAT) pool size required for user connectivity. When multiple users and devices access Azure Communication Services using [Network Address Translation (NAT) or Port Address Translation (PAT)](https://docs.microsoft.com/office365/enterprise/nat-support-with-office-365), ensure that the devices hidden behind each publicly routable IP address do not exceed the supported number. Ensure that adequate public IP addresses are assigned to the NAT pools to prevent port exhaustion. Port exhaustion will contribute to internal users and devices being unable to connect to Azure Communication Services |
| Intrusion Detection and Prevention Guidance | If your environment has an [Intrusion Detection](https://docs.microsoft.com/azure/network-watcher/network-watcher-intrusion-detection-open-source-tools) or Prevention System (IDS/IPS) deployed for an extra layer of security for outbound connections, allow all Azure Communication Services URLs |
Validate NAT pool size | Validate the network address translation (NAT) pool siz
-### Operating system and Browsers (for JavaScript client libraries)
+### Operating system and Browsers (for JavaScript SDKs)
-Azure Communication Services voice/video client libraries support certain operating systems and browsers.
-Learn about the operating systems and browsers that the calling client libraries support in the [calling conceptual documentation](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+Azure Communication Services voice/video SDKs support certain operating systems and browsers.
+Learn about the operating systems and browsers that the calling SDKs support in the [calling conceptual documentation](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
## Next steps
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
> Applications that you build using Azure Communication Services can talk to Microsoft Teams. To learn more, visit our [Teams Interop](./quickstarts/voice-video-calling/get-started-teams-interop.md) documentation.
-Azure Communication Services allows you to easily add real-time multimedia voice, video, and telephony-over-IP communications features to your applications. The Communication Services client libraries also allow you to add chat and SMS functionality to your communications solutions.
+Azure Communication Services allows you to easily add real-time multimedia voice, video, and telephony-over-IP communications features to your applications. The Communication Services SDKs also allow you to add chat and SMS functionality to your communications solutions.
<br>
Mixed scenarios are supported. For example, a Communication Services application
## Common scenarios
-The following resources are a great place to start if you're new to Azure Communication
+The following resources are a great place to get started with Azure Communication Services.
<br>

| Resource |Description |
| | |
-|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services client library to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services client library.|
|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate outbound calls and build SMS communications solutions.|
|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS client library allows you to send and receive SMS messages from your .NET and JavaScript applications.|
+
+After creating a Communication Services resource you can start building client scenarios, such as voice and video calling or text chat.
+
+| Resource |Description |
+| | |
+|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services client library.|
|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your apps using the Calling client library. This library is powered by WebRTC and allows you to establish peer-to-peer, multimedia, real-time communications within your applications.|
+|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat client library can be used to integrate real-time chat into your applications.|

## Samples
-The following samples demonstrate end-to-end utilization of the Azure Communication Services client libraries. Feel free to use these samples to bootstrap your own Communication Services solutions.
+The following samples demonstrate end-to-end utilization of the Azure Communication Services SDKs. Feel free to use these samples to bootstrap your own Communication Services solutions.
<br>

| Sample name | Description |
| | |
-|**[The Group Calling Hero Sample](./samples/calling-hero-sample.md)**|See how the Communication Services client libraries can be used to build a group calling experience.|
-|**[The Group Chat Hero Sample](./samples/chat-hero-sample.md)**|See how the Communication Services client libraries can be used to build a group chat experience.|
+|**[The Group Calling Hero Sample](./samples/calling-hero-sample.md)**|See how the Communication Services SDKs can be used to build a group calling experience.|
+|**[The Group Chat Hero Sample](./samples/chat-hero-sample.md)**|See how the Communication Services SDKs can be used to build a group chat experience.|
-## Platforms and client libraries
+## Platforms and SDKs
-The following resources will help you learn about the Azure Communication Services client libraries:
+The following resources will help you learn about the Azure Communication Services SDKs:
| Resource | Description |
| | |
-|**[Client libraries and REST APIs](./concepts/sdk-options.md)**|Azure Communication Services capabilities are conceptually organized into six areas, each represented by a client library. You can decide which client libraries to use based on your real-time communication needs.|
-|**[Calling client library overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling client library overview.|
-|**[Chat client library overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat client library overview.|
-|**[SMS client library overview](./concepts/telephony-sms/sdk-features.md)**|Review the Communication Services SMS client library overview.|
+|**[Client libraries and REST APIs](./concepts/sdk-options.md)**|Azure Communication Services capabilities are conceptually organized into six areas, each represented by an SDK. You can decide which SDKs to use based on your real-time communication needs.|
+|**[Calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling SDK overview.|
+|**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.|
+|**[SMS SDK overview](./concepts/telephony-sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.|
## Compare Azure Communication Services

There are two other Microsoft communication products you may consider leveraging that are not directly interoperable with Communication Services at this time:
+ - [Microsoft Graph Cloud Communication APIs](/graph/cloud-communications-concept-overview) allow organizations to build communication experiences tied to Azure Active Directory users with Microsoft 365 licenses. This is ideal for applications tied to Azure Active Directory or where you want to extend productivity experiences in Microsoft Teams. There are also APIs to build applications and customization within the [Teams experience.](/microsoftteams/platform/?preserve-view=true&view=msteams-client-js-latest)
- [Azure PlayFab Party](/gaming/playfab/features/multiplayer/networking/) simplifies adding low-latency chat and data communication to games. While you can power gaming chat and networking systems with Communication Services, PlayFab is a tailored option and free on Xbox.
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
Title: Quickstart - Create and manage access tokens
-description: Learn how to manage identities and access tokens using the Azure Communication Services Identity client library.
+description: Learn how to manage identities and access tokens using the Azure Communication Services Identity SDK.
zone_pivot_groups: acs-js-csharp-java-python
# Quickstart: Create and manage access tokens
-Get started with Azure Communication Services by using the Communication Services Identity client library. It allows you to create identities and manage your access tokens. Identity is representing entity of your application in the Azure Communication Service (for example, user or device). Access tokens let your Chat and Calling client libraries authenticate directly against Azure Communication Services. We recommend generating access tokens on a server-side service. Access tokens are then used to initialize the Communication Services client libraries on client devices.
+Get started with Azure Communication Services by using the Communication Services Identity SDK. It allows you to create identities and manage your access tokens. An identity represents an entity of your application in Azure Communication Services (for example, a user or a device). Access tokens let your Chat and Calling SDKs authenticate directly against Azure Communication Services. We recommend generating access tokens on a server-side service. Access tokens are then used to initialize the Communication Services SDKs on client devices.
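Because access tokens issued by the Identity SDK expire, a client typically checks the cached token's remaining lifetime before reusing it to initialize an SDK. The sketch below is a hypothetical helper, not part of any Communication Services SDK: it assumes the token is in JWT format with a standard `exp` claim, and the `tokenExpiresSoon` name and the sample token are illustrative only.

```javascript
// Hypothetical helper (not an SDK API): decode the payload of a JWT-format
// access token and report whether it expires within `skewSeconds`.
function tokenExpiresSoon(token, skewSeconds = 120) {
  const payloadB64 = token.split(".")[1];
  const payload = JSON.parse(Buffer.from(payloadB64, "base64").toString("utf8"));
  const nowSeconds = Math.floor(Date.now() / 1000);
  return payload.exp - nowSeconds < skewSeconds;
}

// Build a throwaway sample token (header.payload.signature shape) that
// expires one hour from now, purely for demonstration.
const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64");
const sampleToken = [
  b64({ alg: "none" }),
  b64({ exp: Math.floor(Date.now() / 1000) + 3600 }),
  "",
].join(".");

console.log(tokenExpiresSoon(sampleToken)); // → false (still valid for ~1 hour)
console.log(tokenExpiresSoon(sampleToken, 7200)); // → true (inside a 2-hour window)
```

When the helper reports the token is close to expiry, the client would request a fresh token from its server-side service rather than reusing the cached one.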
Any prices seen in images throughout this tutorial are for demonstration purposes only.
In this quickstart, you learned how to:
> [!div class="checklist"]
> * Manage identities
> * Issue access tokens
-> * Use the Communication Services Identity client library
+> * Use the Communication Services Identity SDK
> [!div class="nextstepaction"]
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
zone_pivot_groups: acs-js-csharp-java-python-swift-android
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Communication Services Chat client library to add real-time chat to your application. In this quickstart, we'll use the Chat client library to create chat threads that allow users to have conversations with one another. To learn more about Chat concepts, visit the [chat conceptual documentation](../../concepts/chat/concepts.md).
+Get started with Azure Communication Services by using the Communication Services Chat SDK to add real-time chat to your application. In this quickstart, we'll use the Chat SDK to create chat threads that allow users to have conversations with one another. To learn more about Chat concepts, visit the [chat conceptual documentation](../../concepts/chat/concepts.md).
::: zone pivot="programming-language-javascript" ::: zone-end ::: zone pivot="programming-language-python" ::: zone-end ::: zone pivot="programming-language-java" ::: zone-end ::: zone pivot="programming-language-android" ::: zone-end ::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-swift" ::: zone-end ## Clean up resources
In this quickstart you learned how to:
You may also want to:

- Learn about [chat concepts](../../concepts/chat/concepts.md)
+ - Familiarize yourself with the [Chat SDK](../../concepts/chat/sdk-features.md)
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
Title: Getting started with Teams interop on Azure Communication Services
-description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Chat client library
+description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Chat SDK
Last updated 03/10/2021
> [!IMPORTANT]
> To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
-Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams using the JavaScript client library.
+Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams using the JavaScript SDK.
## Prerequisites
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
zone_pivot_groups: acs-plat-azp-net
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
+Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal.
-Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
+Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal.
> [!WARNING]
> Note that while Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'. Also note that communication resources cannot be transferred to a different subscription during public preview.
Get started with Azure Communication Services by provisioning your first Communi
## Access your connection strings and service endpoints
-Connection strings allow the Communication Services client libraries to connect and authenticate to Azure. You can access your Communication Services connection strings and service endpoints from the Azure portal or programmatically with Azure Resource Manager APIs.
+Connection strings allow the Communication Services SDKs to connect and authenticate to Azure. You can access your Communication Services connection strings and service endpoints from the Azure portal or programmatically with Azure Resource Manager APIs.
-After navigating to your Communication Services resource, select **Keys** from the navigation menu and copy the **Connection string** or **Endpoint** values for usage by the Communication Services client libraries. Note that you have access to primary and secondary keys. This can be useful in scenarios where you would like to provide temporary access to your Communication Services resources to a third party or staging environment.
+After navigating to your Communication Services resource, select **Keys** from the navigation menu and copy the **Connection string** or **Endpoint** values for use by the Communication Services SDKs. Note that you have access to primary and secondary keys. This can be useful in scenarios where you would like to provide temporary access to your Communication Services resources to a third party or staging environment.
:::image type="content" source="./media/key.png" alt-text="Screenshot of Communication Services Key page.":::
-You can also access key information using Azure CLI:
+You can also access key information using Azure CLI, like your resource group or the keys for a specific resource.
+Install the [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to sign in. You will need to provide your credentials to connect with your Azure account.
+```azurecli
+az login
+```
+
+Now you can access important information about your resources.
```azurecli
az communication list --resource-group "<resourceGroup>"
az communication list-key --name "<communicationName>" --resource-group "<resourceGroup>"
```
+If you would like to select a specific subscription you can also specify the `--subscription` flag and provide the subscription ID.
+```azurecli
+az communication list --resource-group "<resourceGroup>" --subscription "<subscriptionID>"
+
+az communication list-key --name "<communicationName>" --resource-group "<resourceGroup>" --subscription "<subscriptionID>"
+```
+ ## Store your connection string
-Communication Services client libraries use connection strings to authorize requests made to Communication Services. You have several options for storing your connection string:
+Communication Services SDKs use connection strings to authorize requests made to Communication Services. You have several options for storing your connection string:
* An application running on the desktop or on a device can store the connection string in an **app.config** or **web.config** file. Add the connection string to the **AppSettings** section in these files.
* An application running in an Azure App Service can store the connection string in the [App Service application settings](../../app-service/configure-common.md). Add the connection string to the **Connection Strings** section of the Application Settings tab within the portal.
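Wherever the connection string is stored, application code then reads it at startup and splits it into its endpoint and key. A minimal sketch, assuming the documented `endpoint=...;accesskey=...` shape; the `parseConnectionString` helper and the `COMMUNICATION_SERVICES_CONNECTION_STRING` variable name are illustrative choices, not SDK requirements.

```javascript
// Hypothetical helper: split an Azure Communication Services connection
// string of the shape
//   endpoint=https://<resource>.communication.azure.com/;accesskey=<key>
// into its named parts. Splitting on the FIRST "=" of each segment keeps
// base64 "=" padding in the access key intact.
function parseConnectionString(connectionString) {
  const parts = {};
  for (const segment of connectionString.split(";")) {
    const idx = segment.indexOf("=");
    parts[segment.slice(0, idx)] = segment.slice(idx + 1);
  }
  return parts;
}

// Read from an environment variable (example name), with a dummy fallback
// so the sketch runs standalone.
const connectionString =
  process.env.COMMUNICATION_SERVICES_CONNECTION_STRING ??
  "endpoint=https://example.communication.azure.com/;accesskey=secret==";

const { endpoint, accesskey } = parseConnectionString(connectionString);
console.log(endpoint);
```

In practice most SDK clients accept the whole connection string directly, so parsing like this is only needed when the endpoint and key are consumed separately.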
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
# Authorize access with managed identity to your communication resource in your development environment
-The Azure Identity client library provides Azure Active Directory (Azure AD) token authentication support for the Azure SDK. The latest versions of the Azure Communication Services client libraries for .NET, Java, Python, and JavaScript integrate with the Azure Identity library to provide a simple and secure means to acquire an OAuth 2.0 token for authorization of Azure Communication Services requests.
+The Azure Identity SDK provides Azure Active Directory (Azure AD) token authentication support for the Azure SDK. The latest versions of the Azure Communication Services SDKs for .NET, Java, Python, and JavaScript integrate with the Azure Identity library to provide a simple and secure means to acquire an OAuth 2.0 token for authorization of Azure Communication Services requests.
-An advantage of the Azure Identity client library is that it enables you to use the same code to authenticate across multiple services whether your application is running in the development environment or in Azure. The Azure Identity client library authenticates a security principal. When your code is running in Azure, the security principal is a managed identity for Azure resources. In the development environment, the managed identity does not exist, so the client library authenticates either the user or a registered application for testing purposes.
+An advantage of the Azure Identity SDK is that it enables you to use the same code to authenticate across multiple services whether your application is running in the development environment or in Azure. The Azure Identity SDK authenticates a security principal. When your code is running in Azure, the security principal is a managed identity for Azure resources. In the development environment, the managed identity does not exist, so the SDK authenticates either the user or a registered application for testing purposes.
## Prerequisites
Managed identities should be enabled on the Azure resources that you're authoriz
- [Azure PowerShell](../../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
- [Azure CLI](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
- [Azure Resource Manager template](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
-- [Azure Resource Manager client libraries](../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+- [Azure Resource Manager SDKs](../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
- [App services](../../app-service/overview-managed-identity.md)

## Authenticate a registered application in the development environment
The `az ad sp create-for-rbac` command will return a list of service principal p
#### Set environment variables
-The Azure Identity client library reads values from three environment variables at runtime to authenticate the application. The following table describes the value to set for each environment variable.
+The Azure Identity SDK reads values from three environment variables at runtime to authenticate the application. The following table describes the value to set for each environment variable.
|Environment variable|Value|
|-|-|
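As a sketch of how these variables might be set in a local shell session, assuming the standard names read by the Azure Identity libraries' environment credential (the placeholder values stand in for the `az ad sp create-for-rbac` output):

```shell
# Values come from the service principal created earlier; placeholders shown.
export AZURE_CLIENT_ID="<appId>"
export AZURE_TENANT_ID="<tenant>"
export AZURE_CLIENT_SECRET="<password>"
```

With these set, the Azure Identity credential can authenticate the registered application without any code changes.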
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity.md
zone_pivot_groups: acs-js-csharp-java-python
# Use managed identities
-Get started with Azure Communication Services by using managed identities. The Communication Services Identity and SMS client libraries support Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+Get started with Azure Communication Services by using managed identities. The Communication Services Identity and SMS SDKs support Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-This quickstart shows you how to authorize access to the Identity and SMS client libraries from an Azure environment that supports managed identities. It also describes how to test your code in a development environment.
+This quickstart shows you how to authorize access to the Identity and SMS SDKs from an Azure environment that supports managed identities. It also describes how to test your code in a development environment.
::: zone pivot="programming-language-csharp" [!INCLUDE [.NET](./includes/managed-identity-net.md)]
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
zone_pivot_groups: acs-plat-ios-android
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Communication Services Teams Embed client library to add teams meetings to your app.
+Get started with Azure Communication Services by using the Communication Services Teams Embed SDK to add Teams meetings to your app.
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
zone_pivot_groups: acs-js-csharp-java-python
> For more information, see **[Phone number types](../../concepts/telephony-sms/plan-solution.md)**. ::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-javascript" ::: zone-end ::: zone pivot="programming-language-python" ::: zone-end ::: zone pivot="programming-language-java" ::: zone-end ## Troubleshooting
communication-services Create Your Own Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/create-your-own-components.md
At the end of this process you should have a full application inside of the fold
### Install the package
-Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
```console
npm run start
## Object model
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
| Name | Description | | - | |
communication-services Get Started With Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-components.md
At the end of this process, you should have a full application inside of the fol
### Install the package
-Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
```console
npm run start
## Object model
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
| Name | Description | | - | |
communication-services Get Started With Composites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-composites.md
At the end of this process, you should have a full application inside of the fol
### Install the package
-Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
+Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
```console
npm run start
## Object model
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI client library:
+The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
| Name | Description | | - | |
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
Title: Quickstart - Use the Azure Communication Services calling client library
+ Title: Quickstart - Use the Azure Communication Services Calling SDK
-description: Learn about the Communication Services calling client library capabilities.
+description: Learn about the Communication Services Calling SDK capabilities.
zone_pivot_groups: acs-plat-web-ios-android
-# Quickstart: Use the Communication Services calling client library
+# Quickstart: Use the Communication Services Calling SDK
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Communication Services calling client library to add voice and video calling to your app.
+Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app.
::: zone pivot="platform-web" [!INCLUDE [Calling with JavaScript](./includes/calling-sdk-js.md)]
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
zone_pivot_groups: acs-plat-web-ios-android
> [!IMPORTANT] > To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
-Get started with Azure Communication Services by connecting your calling solution to Microsoft Teams using the JavaScript client library.
+Get started with Azure Communication Services by connecting your calling solution to Microsoft Teams using the JavaScript SDK.
::: zone pivot="platform-web" [!INCLUDE [Calling with JavaScript](./includes/teams-interop-javascript.md)]
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Learn about [calling client library capabilities](./calling-client-samples.md)
+- Learn about [Calling SDK capabilities](./calling-client-samples.md)
- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
Open your terminal or command window, create a new directory for your app, and na
mkdir calling-quickstart && cd calling-quickstart ``` ### Install the package
-Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript.
+Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript.
+
+> [!IMPORTANT]
+> This quickstart uses the Azure Communication Services Calling SDK version `1.0.0-beta.10`.
-This quickstart used Azure Communication Calling Client Library `1.0.0.beta-6`.
```console npm install @azure/communication-common --save
Here's the code:
Create a file in the root directory of your project called `client.js` to contain the application logic for this quickstart. Add the following code to import the calling client and get references to the DOM elements. ```JavaScript
-import { CallClient, CallAgent, Renderer, LocalVideoStream } from "@azure/communication-calling";
+import { CallClient, CallAgent, VideoStreamRenderer, LocalVideoStream } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from '@azure/communication-common'; let call;
let rendererRemote;
``` ## Object model
-The following classes and interfaces handle some of the major features of the Azure Communication Services Calling client library:
+The following classes and interfaces handle some of the major features of the Azure Communication Services Calling SDK:
| Name | Description | | : | :- |
-| CallClient | The CallClient is the main entry point to the Calling client library. |
+| CallClient | The CallClient is the main entry point to the Calling SDK. |
| CallAgent | The CallAgent is used to start and manage calls. | | DeviceManager | The DeviceManager is used to manage media devices. | | AzureCommunicationTokenCredential | The AzureCommunicationTokenCredential class implements the CommunicationTokenCredential interface which is used to instantiate the CallAgent. | ## Authenticate the client and access DeviceManager
-You need to replace <USER_ACCESS_TOKEN> with a valid user access token for your resource. Refer to the user access token documentation if you don't already have a token available. Using the CallClient, initialize a CallAgent instance with a CommunicationUserCredential which will enable us to make and receive calls.
-To access the DeviceManager a callAgent instance must first be created. You can then use the `getDeviceManager` method on the `CallClient` instance to get the `DeviceManager`.
+You need to replace `<USER_ACCESS_TOKEN>` with a valid user access token for your resource. Refer to the user access token documentation if you don't already have a token available. Using the `CallClient`, initialize a `CallAgent` instance with an `AzureCommunicationTokenCredential`, which enables you to make and receive calls.
+To access the `DeviceManager`, you must first create a `callAgent` instance. You can then use the `getDeviceManager` method on the `CallClient` instance to get the `DeviceManager`.
Add the following code to `client.js`:
init();
Add an event listener to initiate a call when the `callButton` is clicked:
-First you have to enumerate local cameras using the deviceManager getCameraList API. In this quickstart we're using the first camera in the collection. Once the desired camera is selected, a LocalVideoStream instance will be constructed and passed within videoOptions as an item within the localVideoStream array to the call method. Once your call connects it will automatically start sending a video stream to the other participant.
+First, you have to enumerate local cameras using the `getCameraList` API on the `deviceManager` instance. In this quickstart, we're using the first camera in the collection. Once the desired camera is selected, a `LocalVideoStream` instance is constructed and passed within `videoOptions` as an item in the `localVideoStreams` array to the call method. Once your call connects, it automatically starts sending a video stream to the other participant.
```JavaScript callButton.addEventListener("click", async () => {
callButton.addEventListener("click", async () => {
callButton.disabled = true; }); ```
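The shape of the `videoOptions` object described above can be sketched with plain stand-in objects (the camera data and the `LocalVideoStream` stand-in below are hypothetical; in the real quickstart they come from the device manager's camera list and `new LocalVideoStream(camera)` in the SDK):

```javascript
// Stand-in camera list: in the real quickstart, the deviceManager returns
// actual camera devices rather than these hypothetical plain objects.
const cameras = [
  { id: "camera:0", name: "Front camera" },
  { id: "camera:1", name: "Rear camera" },
];

// Pick the first camera, as the quickstart does.
const camera = cameras[0];

// Stand-in for `new LocalVideoStream(camera)`.
const localVideoStream = { source: camera };

// The stream is passed inside videoOptions as one item
// of the localVideoStreams array.
const videoOptions = { localVideoStreams: [localVideoStream] };

console.log(videoOptions.localVideoStreams[0].source.name);
```

Whatever camera is chosen, the call method receives it wrapped in this same `videoOptions` shape.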
-To render a `LocalVideoStream`, you need to create a new instance of `Renderer`, and then create a new RendererView instance using the asynchronous `createView` method. You may then attach `view.target` to any UI element.
+To render a `LocalVideoStream`, you need to create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance using the asynchronous `createView` method. You may then attach `view.target` to any UI element.
```JavaScript async function localVideoView() {
- rendererLocal = new Renderer(localVideoStream);
+ rendererLocal = new VideoStreamRenderer(localVideoStream);
const view = await rendererLocal.createView(); document.getElementById("myVideo").appendChild(view.target); } ```
-All remote participants are available through the `remoteParticipants` collection on a call instance. You need to subscribe to the remote participants of the current call and listen to the event `remoteParticipantsUpdated` to subscribe to added remote participants.
+All remote participants are available through the `remoteParticipants` collection on a call instance. You need to listen to the `remoteParticipantsUpdated` event to be notified when a new remote participant is added to the call. You also need to iterate through the `remoteParticipants` collection and subscribe to each participant's video streams.
```JavaScript function subscribeToRemoteParticipantInCall(callInstance) {
- callInstance.remoteParticipants.forEach( p => {
- subscribeToRemoteParticipant(p);
- })
callInstance.on('remoteParticipantsUpdated', e => { e.added.forEach( p => {
- subscribeToRemoteParticipant(p);
+ subscribeToParticipantVideoStreams(p);
})
- });
+ });
+ callInstance.remoteParticipants.forEach( p => {
+ subscribeToParticipantVideoStreams(p);
+ })
} ```
-You can subscribe to the `remoteParticipants` collection of the current call and inspect the `videoStreams` collections to list the streams of each participant. You also need to subscribe to the remoteParticipantsUpdated event to handle added remote participants.
+You need to subscribe to the `videoStreamsUpdated` event to handle video streams that remote participants add later. You can inspect the `videoStreams` collection to list each participant's existing streams while going through the `remoteParticipants` collection of the current call.
```JavaScript
-function subscribeToRemoteParticipant(remoteParticipant) {
- remoteParticipant.videoStreams.forEach(v => {
- handleVideoStream(v);
- });
+function subscribeToParticipantVideoStreams(remoteParticipant) {
remoteParticipant.on('videoStreamsUpdated', e => { e.added.forEach(v => { handleVideoStream(v); }) });
+ remoteParticipant.videoStreams.forEach(v => {
+ handleVideoStream(v);
+ });
} ``` You have to subscribe to a `isAvailableChanged` event to render the `remoteVideoStream`. If the `isAvailable` property changes to `true`, a remote participant is sending a stream. Whenever availability of a remote stream changes you can choose to destroy the whole `Renderer`, a specific `RendererView` or keep them, but this will result in displaying blank video frame.
function handleVideoStream(remoteVideoStream) {
} } ```
-To render a `RemoteVideoStream`, you need to create a new instance of `Renderer`, and then create a new `RendererView` instance using the asynchronous `createView` method. You may then attach `view.target` to any UI element.
+To render a `RemoteVideoStream`, you need to create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance using the asynchronous `createView` method. You may then attach `view.target` to any UI element.
```JavaScript async function remoteVideoView(remoteVideoStream) {
- rendererRemote = new Renderer(remoteVideoStream);
+ rendererRemote = new VideoStreamRenderer(remoteVideoStream);
const view = await rendererRemote.createView(); document.getElementById("remoteVideo").appendChild(view.target); }
callAgent.on('incomingCall', async e => {
const addedCall = await e.incomingCall.accept({videoOptions: {localVideoStreams:[localVideoStream]}}); call = addedCall;
- subscribeToRemoteParticipantInCall(addedCall);
+ subscribeToRemoteParticipantInCall(addedCall);
}); ``` ## End the current call
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps For more information, see the following articles:-- Check out our [web calling sample](../../samples/web-calling-sample.md)-- Learn about [calling client library capabilities](./calling-client-samples.md?pivots=platform-web)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)+
+- Check out our [web calling sample](https://docs.microsoft.com/azure/communication-services/samples/web-calling-sample)
+- Learn about [Calling SDK capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/calling-client-samples?pivots=platform-web)
+- Learn more about [how calling works](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/about-call-types)
+
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
zone_pivot_groups: acs-plat-web-ios-android
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Communication Services calling client library to add voice and video calling to your app.
+Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app.
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Learn about [calling client library capabilities](./calling-client-samples.md)
+- Learn about [Calling SDK capabilities](./calling-client-samples.md)
- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/pstn-call.md
zone_pivot_groups: acs-plat-web-ios-android
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Communication Services calling client library to add PSTN calling to your app.
+Get started with Azure Communication Services by using the Communication Services Calling SDK to add PSTN calling to your app.
::: zone pivot="platform-web" [!INCLUDE [Calling with JavaScript](./includes/pstn-call-js.md)]
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: -- Learn about [calling client library capabilities](./calling-client-samples.md)
+- Learn about [Calling SDK capabilities](./calling-client-samples.md)
- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
> [This sample is available on GitHub.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
-The Azure Communication Services **Group Chat Hero Sample** demonstrates how the Communication Services Chat Web client library can be used to build a group calling experience.
+The Azure Communication Services **Group Chat Hero Sample** demonstrates how the Communication Services Chat Web SDK can be used to build a group calling experience.
In this sample quickstart, we'll learn how the sample works before running it on your local machine. We'll then deploy the sample to Azure using your own Azure Communication Services resources.
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Learn about [chat concepts](../concepts/chat/concepts.md)-- Familiarize yourself with our [chat client library](../concepts/chat/sdk-features.md)
+- Familiarize yourself with our [Chat SDK](../concepts/chat/sdk-features.md)
- Review the [Contoso Med App](https://github.com/Azure-Samples/communication-services-contoso-med-app) sample ## Additional reading
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
# Get started with the web calling sample
-The web calling sample is a web application that serves as a step-by-step walkthrough of the various capabilities provided by the Communication Services web calling client library.
+The web calling sample is a web application that serves as a step-by-step walkthrough of the various capabilities provided by the Communication Services web Calling SDK.
This sample was built for developers and makes it very easy for you to get started with Communication Services. Its user interface is divided into multiple sections, each featuring a "Show code" button that allows you to copy code directly from your browser into your own Communication Services application.
You're now ready to begin placing calls using your Communication Services resour
## Placing and receiving calls
-The Communication Services web calling SDK allows for **1:1**, **1:N**, and **group** calling.
+The Communication Services web Calling SDK allows for **1:1**, **1:N**, and **group** calling.
For 1:1 or 1:N outgoing calls, you can specify multiple Communication Services User Identities to call using comma-separated values. You can also specify traditional (PSTN) phone numbers to call using comma-separated values.
This sample also provides code snippets for the following capabilities:
For more information, see the following articles: -- Familiarize yourself with [using the calling client library](../quickstarts/voice-video-calling/calling-client-samples.md)
+- Familiarize yourself with [using the Calling SDK](../quickstarts/voice-video-calling/calling-client-samples.md)
- Learn more about [how calling works](../concepts/voice-video-calling/about-call-types.md) - Review the [API Reference docs](/javascript/api/azure-communication-services/@azure/communication-calling/) - Review the [Contoso Med App](https://github.com/Azure-Samples/communication-services-contoso-med-app) sample
communication-services Building App Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
You can use Azure Communication Services to add real-time communications to your applications. In this tutorial, you'll learn how to set up a web application that supports Azure Communication Services. This is an introductory tutorial for new developers who want to get started with real-time communications.
-By the end of this tutorial, you'll have a baseline web application that's configured with Azure Communication Services client libraries. You can then use that application to begin building your real-time communications solution.
+By the end of this tutorial, you'll have a baseline web application that's configured with Azure Communication Services SDKs. You can then use that application to begin building your real-time communications solution.
Feel free to visit the [Azure Communication Services GitHub page](https://github.com/Azure/communication) to provide feedback.
In this tutorial, you learn how to:
- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). The free account gives you $200 in Azure credits to try out any combination of services. - [Visual Studio Code](https://code.visualstudio.com/) for editing code in your local development environment. - [webpack](https://webpack.js.org/) to bundle and locally host your code.-- [Node.js](https://nodejs.org/en/) to install and manage dependencies like Azure Communication Services client libraries and webpack.
+- [Node.js](https://nodejs.org/en/) to install and manage dependencies like Azure Communication Services SDKs and webpack.
- [nvm and npm](/windows/nodejs/setup-on-windows) to handle version control. - The [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code. You need this extension to publish your application in Azure Storage. [Read more about hosting static websites in Azure Storage](../../storage/blobs/storage-blob-static-website.md). - The [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice). The extension allows deploying websites with the option to configure fully managed continuous integration and continuous delivery (CI/CD).
To stop your server, you can run `Ctrl+C` in your terminal. To start your server
## Add the Azure Communication Services packages
-Use the `npm install` command to install the Azure Communication Services calling client library for JavaScript.
+Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript.
```Console npm install @azure/communication-common --save
container-instances Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
Control outbound network access from a subnet delegated to Azure Container Instances by using Azure Firewall. -- [Deploy container instances into an Azure virtual network](/azure/container-instances/container-instance-vnet)
+- [Deploy container instances into an Azure virtual network](/azure/container-instances/container-instances-vnet)
- [How to deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)
container-registry Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-baseline.md
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.ContainerRegistry**:
file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Of
You may use Azure Security Center Just In Time Network access to configure NSGs to limit exposure of endpoints to approved IP addresses for a limited period. Also, use Azure Security Center Adaptive Network Hardening to recommend NSG configurations that limit Ports and Source IPs based on actual traffic and threat intelligence. -- [How to configure DDoS protection](/azure/virtual-network/manage-ddos-protection)
+- [How to configure DDoS protection](../ddos-protection/manage-ddos-protection.md)
- [How to deploy Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)-- [Understand Azure Security Center Integrated Threat Intelligence](../security-center/security-center-alerts-service-layer.md)
+- [Understand Azure Security Center Integrated Threat Intelligence](../security-center/azure-defender.md)
- [Understand Azure Security Center Adaptive Network Hardening](../security-center/security-center-adaptive-network-hardening.md) - [Azure Security Center Just In Time Network Access Control](../security-center/security-center-just-in-time.md)
Deploy the firewall solution of your choice at each of your organization's netwo
**Guidance**: For resources that need access to your container registry, use virtual network service tags for the Azure Container Registry service to define network access controls on Network Security Groups or Azure Firewall. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name "AzureContainerRegistry" in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. -- [Allow access by service tag](https://docs.microsoft.com/azure/container-registry/container-registry-firewall-access-rules#allow-access-by-service-tag)
+- [Allow access by service tag](./container-registry-firewall-access-rules.md#allow-access-by-service-tag)
**Responsibility**: Customer
You may use Azure Blueprints to simplify large-scale Azure deployments by packag
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to your container registries. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
You may use Azure Blueprints to simplify large-scale Azure deployments by packag
**Guidance**: Within Azure Monitor, set your Log Analytics Workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage. -- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
You may use Azure Blueprints to simplify large-scale Azure deployments by packag
- [Azure Container Registry logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md) -- [Understand Log Analytics Workspace](/azure/azure-monitor/log-query/get-started-portal)
+- [Understand Log Analytics Workspace](../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Customer
You may use Azure Blueprints to simplify large-scale Azure deployments by packag
- [Azure Container Registry logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md) -- [How to alert on log analytics log data](/azure/azure-monitor/learn/tutorial-response)
+- [How to alert on log analytics log data](../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
You may use Azure Blueprints to simplify large-scale Azure deployments by packag
For each Azure container registry, track whether the built-in admin account is enabled or disabled. Disable the account when not in use. -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
-- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](./container-registry-authentication.md#admin-account)
**Responsibility**: Customer
For each Azure container registry, track whether the built-in admin account is e
If the default admin account of an Azure container registry is enabled, complex passwords are automatically created and should be rotated. Disable the account when not in use. -- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](./container-registry-authentication.md#admin-account)
**Responsibility**: Customer
Also, create procedures to enable the built-in admin account of a container regi
- [Understand Azure Security Center Identity and Access](../security-center/security-center-identity-access.md) -- [Azure Container Registry admin account](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Azure Container Registry admin account](./container-registry-authentication.md#admin-account)
**Responsibility**: Customer
For individual access to the container registry, use individual login integrated
- [Understand SSO with Azure AD](../active-directory/manage-apps/what-is-single-sign-on.md)
-- [Individual login to a container registry](https://docs.microsoft.com/azure/container-registry/container-registry-authentication#admin-account)
+- [Individual login to a container registry](./container-registry-authentication.md#admin-account)
**Responsibility**: Customer
**Guidance**: Use Azure Active Directory (Azure AD) security reports for generation of logs and alerts when suspicious or unsafe activity occurs in the environment. Use Azure Security Center to monitor identity and access activity.
-- [How to identify Azure AD users flagged for risky activity](/azure/active-directory/reports-monitoring/concept-user-at-risk)
+- [How to identify Azure AD users flagged for risky activity](../active-directory/identity-protection/overview-identity-protection.md)
- [How to monitor users' identity and access activity in Azure Security Center](../security-center/security-center-identity-access.md)
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
-- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure identity access reviews](../active-directory/governance/access-reviews-overview.md)
You can streamline this process by creating Diagnostic Settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace.
-- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Responsibility**: Customer
**Guidance**: Use Azure Active Directory (Azure AD) Risk and Identity Protection features to configure automated responses to detected suspicious actions related to user identities.
-- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
**Guidance**: Not available; Customer Lockbox is not currently supported for Azure Container Registry.
-- [List of Customer Lockbox supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
Tag and version container images or other artifacts in a registry, and lock images or repositories, to assist in tracking images that store or process sensitive information.
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
- [Recommendations for tagging and versioning container images](container-registry-image-tag-version.md)
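The tagging recommendation above can be illustrated with a small sketch that builds unique, date-stamped tags for image references; the registry and repository names here are hypothetical:

```python
from datetime import date

def image_reference(registry: str, repository: str, build_id: int, day: date) -> str:
    """Build a fully qualified image reference with a unique, date-stamped tag
    so each build can be tracked, and locked, individually."""
    return f"{registry}/{repository}:{day:%Y%m%d}.{build_id}"

print(image_reference("contoso.azurecr.io", "hello-world", 1, date(2021, 3, 26)))
# → contoso.azurecr.io/hello-world:20210326.1
```

Unique tags per build (rather than reusing `latest`) are what make locking and auditing individual images practical.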
Resources should be separated by virtual network or subnet, tagged appropriately, and secured by a network security group (NSG) or Azure Firewall.
-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create management groups](/azure/governance/management-groups/create)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
- [Restrict access to an Azure container registry using an Azure virtual network or firewall rules](container-registry-vnet.md)
For the underlying platform which is managed by Microsoft, Microsoft treats all
Follow Azure Security Center recommendations for encryption at rest and encryption in transit, where applicable.
-- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)
+- [Understand encryption in transit with Azure](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit)
**Responsibility**: Shared
- [Understand encryption at rest in Azure](../security/fundamentals/encryption-atrest.md)
-- [Customer-managed keys in Azure Container Registry](https://aka.ms/acr/cmk)
+- [Customer-managed keys in Azure Container Registry](./container-registry-customer-managed-keys.md)
**Responsibility**: Customer
**Guidance**: Follow recommendations from Azure Security Center on performing vulnerability assessments on your container images. Optionally deploy third-party solutions from Azure Marketplace to perform image vulnerability assessments.
-- [How to implement Azure Security Center vulnerability assessment recommendations](/azure/security-center/security-center-vulnerability-assessment-recommendations)
+- [How to implement Azure Security Center vulnerability assessment recommendations](../security-center/deploy-vulnerability-assessment-vm.md)
-- [Azure Container Registry integration with Security Center (Preview)](/azure/security-center/azure-container-registry-integration)
+- [Azure Container Registry integration with Security Center (Preview)](../security-center/defender-for-container-registries-introduction.md)
**Responsibility**: Customer
Automate container image updates when updates to base images from operating syst
**Guidance**: Integrate Azure Container Registry (ACR) with Azure Security Center to enable periodic scanning of container images for vulnerabilities. Optionally deploy third-party solutions from Azure Marketplace to perform periodic image vulnerability scans.
-- [Azure Container Registry integration with Security Center (Preview)](/azure/security-center/azure-container-registry-integration)
+- [Azure Container Registry integration with Security Center (Preview)](../security-center/defender-for-container-registries-introduction.md)
**Responsibility**: Customer
**Guidance**: Integrate Azure Container Registry (ACR) with Azure Security Center to enable periodic scanning of container images for vulnerabilities and to classify risks. Optionally deploy third-party solutions from Azure Marketplace to perform periodic image vulnerability scans and risk classification.
-- [Azure Container Registry integration with Security Center (Preview)](/azure/security-center/azure-container-registry-integration)
+- [Azure Container Registry integration with Security Center (Preview)](../security-center/defender-for-container-registries-introduction.md)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Use Azure Resource Graph to query/discover resources within their subscription(s
- [Azure Container Registry logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
-- [Understand Log Analytics Workspace](/azure/azure-monitor/log-query/get-started-portal)
+- [Understand Log Analytics Workspace](../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Customer
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
**Guidance**: Use operating system specific configurations or third-party resources to limit users' ability to execute scripts within Azure compute resources.
-- [For example, how to control PowerShell script execution in Windows Environments](https://docs.microsoft.com/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-7&preserve-view=true)
+- [For example, how to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy?preserve-view=true&view=powershell-7)
**Responsibility**: Customer
**Guidance**: If using custom Azure Policy definitions, use Azure Repos to securely store and manage your code.
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&amp;preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Back up customer-managed keys in Azure Key Vault using Azure command-line tools
- [Import container images to a container registry](container-registry-import-images.md)
-- [How to backup key vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
+- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
**Guidance**: Test restoration of backed up customer-managed keys in Azure Key Vault using Azure command-line tools or SDKs.
-- [How to restore Azure Key Vault keys in Azure](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
+- [How to restore Azure Key Vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
**Guidance**: You may enable Soft-Delete in Azure Key Vault to protect keys against accidental or malicious deletion.
-- [How to enable Soft-Delete in Key Vault](https://docs.microsoft.com/azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal)
+- [How to enable Soft-Delete in Key Vault](../key-vault/general/soft-delete-overview.md)
**Responsibility**: Customer
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps
-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-firewall.md
When you access your Azure Cosmos DB account from a computer on the internet, th
To access a current list of outbound IP ranges to add to your firewall settings, please see [Download Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519).
-To automate the list, please see [Use the Service Tag Discovery API (public preview)](https://docs.microsoft.com/azure/virtual-network/service-tags-overview#use-the-service-tag-discovery-api-public-preview).
+To automate the list, please see [Use the Service Tag Discovery API (public preview)](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api-public-preview).
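One way to automate that lookup: the downloadable Service Tags file is JSON whose entries carry a `name` and `properties.addressPrefixes`. A minimal sketch of extracting one tag's ranges follows; the sample data is abbreviated and illustrative, not the real published values:

```python
import json

# Abbreviated sample in the shape of the downloadable Service Tags JSON file;
# the address prefixes below are illustrative, not current published values.
sample = json.loads("""
{
  "values": [
    {"name": "AzureCosmosDB", "properties": {"addressPrefixes": ["13.64.0.0/24", "13.65.0.0/24"]}},
    {"name": "Storage", "properties": {"addressPrefixes": ["20.38.0.0/23"]}}
  ]
}
""")

def prefixes_for(tags, name):
    """Return the address prefixes listed for one named service tag."""
    for entry in tags["values"]:
        if entry["name"] == name:
            return entry["properties"]["addressPrefixes"]
    return []

print(prefixes_for(sample, "AzureCosmosDB"))  # → ['13.64.0.0/24', '13.65.0.0/24']
```

Feeding the real downloaded file (or the Discovery API response) through the same filter yields the ranges to add to your firewall settings.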
## <a id="configure-ip-firewall-arm"></a>Configure an IP firewall by using a Resource Manager template
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
To use the Azure Cosmos DB RBAC in your application, you have to update the way
The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of AAD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
-- [in .NET](https://docs.microsoft.com/dotnet/api/overview/azure/identity-readme#credential-classes)
-- [in Java](https://docs.microsoft.com/java/api/overview/azure/identity-readme#credential-classes)
-- [in JavaScript](https://docs.microsoft.com/javascript/api/overview/azure/identity-readme#credential-classes)
+- [in .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)
+- [in Java](/java/api/overview/azure/identity-readme#credential-classes)
+- [in JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)
The examples below use a service principal with a `ClientSecretCredential` instance.
Disabling the account primary key is not currently possible.
## Next steps
- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
-- Learn more about [RBAC for Azure Cosmos DB management](role-based-access-control.md).
+- Learn more about [RBAC for Azure Cosmos DB management](role-based-access-control.md).
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
Last updated 09/21/2020
# Azure Cosmos DB Emulator - Release notes and download information
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-This article shows the Azure Cosmos DB Emulator release notes with a list of feature updates that were made in each release. It also lists the latest version of emulator to download and use.
+This article shows the Azure Cosmos DB Emulator release notes with a list of feature updates that were made in each release. It also lists the latest version of the emulator to download and use.
## Download
-| |Links |
+| | Link |
|||
|**MSI download**|[Microsoft Download Center](https://aka.ms/cosmosdb-emulator)|
|**Get started**|[Develop locally with Azure Cosmos DB Emulator](local-emulator.md)|
cosmos-db Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-baseline.md
You can also secure the data stored in your Azure Cosmos account by using IP fir
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.DocumentDB**:
Enable DDoS Protection Standard on the Virtual Networks associated with your Azu
- [How to configure Azure Cosmos DB Advanced Threat Protection](cosmos-db-advanced-threat-protection.md)
-- [How to configure DDoS protection](/azure/virtual-network/manage-ddos-protection)
+- [How to configure DDoS protection](../ddos-protection/manage-ddos-protection.md)
-- [Understand Azure Security Center Integrated Threat Intelligence](/azure/security-center/security-center-alerts-service-layer)
+- [Understand Azure Security Center Integrated Threat Intelligence](../security-center/azure-defender.md)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use tags for network resources associated with your Azure Cosmos DB deployment in order to logically organize them into a taxonomy.
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to your Azure Cosmos DB instances. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place.
-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
**Guidance**: Ingest logs via Azure Monitor to aggregate security data generated by Azure Cosmos DB. In Azure Monitor, use Log Analytics workspaces to query and perform analytics, and use storage accounts for long-term/archival storage. Alternatively, you may on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM).
-- [How to enable diagnostic logs for Azure Cosmos DB](/azure/cosmos-db/logging)
+- [How to enable diagnostic logs for Azure Cosmos DB](./monitor-cosmos-db.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
You can also enable Azure Activity Log diagnostic settings and send those logs to the same Log Analytics Workspace you use for Azure Cosmos DB logs.
-- [How to enable diagnostic settings for Azure Cosmos DB](/azure/cosmos-db/logging)
+- [How to enable diagnostic settings for Azure Cosmos DB](./monitor-cosmos-db.md)
-- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
**Responsibility**: Customer
**Guidance**: In Azure Monitor, set the log retention period for Log Analytics workspaces associated with your Azure Cosmos DB instances according to your organization's compliance regulations.
-- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
You can also onboard your Log Analytics workspace to Azure Sentinel as it provides a security orchestration automated response (SOAR) solution. This allows for playbooks (automated solutions) to be created and used to remediate security issues. Additionally, you can create custom log alerts in your Log Analytics workspace using Azure Monitor.
-- [List of threat protection alerts for Azure Cosmos DB](https://docs.microsoft.com/azure/security-center/alerts-reference#alerts-azurecosmos)
+- [List of threat protection alerts for Azure Cosmos DB](../security-center/alerts-reference.md#alerts-azurecosmos)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-- [Create, view, and manage log alerts using Azure Monitor](/azure/azure-monitor/platform/alerts-log)
+- [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md)
**Responsibility**: Customer
Additionally, some actions in Azure Cosmos DB can be controlled with Azure AD an
- [Understand role-based access control in Azure Cosmos DB](role-based-access-control.md)
-- [Build your own custom roles using Azure Cosmos DB Actions (Microsoft.DocumentDB namespace)](https://docs.microsoft.com/azure/role-based-access-control/resource-provider-operations#microsoftdocumentdb)
+- [Build your own custom roles using Azure Cosmos DB Actions (Microsoft.DocumentDB namespace)](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)
- [Create a new role in Azure AD](../role-based-access-control/custom-roles.md)
-- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
- [Restrict user access to data operations only](how-to-restrict-user-data.md)
- [Understanding secure access to data in Azure Cosmos DB](secure-access-to-data.md)
-- [How to regenerate Azure Cosmos DB Keys](https://docs.microsoft.com/azure/cosmos-db/manage-with-powershell#regenerate-keys)
+- [How to regenerate Azure Cosmos DB Keys](./manage-with-powershell.md#regenerate-keys)
- [How to programmatically access keys using Azure AD](certificate-based-authentication.md)
Use Azure AD Risk Detections to view alerts and reports on risky user behavior.
- [How to deploy Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md)
-- [Understand Azure AD risk detections](/azure/active-directory/reports-monitoring/concept-risk-events)
+- [Understand Azure AD risk detections](../active-directory/identity-protection/overview-identity-protection.md)
**Responsibility**: Customer
- [How to create and configure an Azure AD instance](../active-directory-domain-services/tutorial-create-instance.md)
-- [How to configure and manage Azure AD authentication with Azure SQL](/azure/sql-database/sql-database-aad-authentication-configure)
+- [How to configure and manage Azure AD authentication with Azure SQL](../azure-sql/database/authentication-aad-configure.md)
**Responsibility**: Customer
You can also use Azure Active Directory (Azure AD) Identity Protection and risk detections feature to configure automated responses to detected suspicious actions related to user identities. Additionally, you can ingest logs into Azure Sentinel for further investigation.
-- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
**Guidance**: Use tags to assist in tracking Azure Cosmos DB instances that store or process sensitive information.
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
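The taxonomy idea above can be sketched as a simple check for required tags; the required-key list here is an illustrative convention, not an Azure requirement:

```python
# Illustrative taxonomy convention: these required tag keys are an example
# policy of ours, not an Azure requirement.
REQUIRED_TAGS = {"environment", "dataClassification", "owner"}

def missing_tags(resource_tags):
    """Return which required taxonomy tags a resource is missing."""
    return REQUIRED_TAGS - set(resource_tags)

cosmos_account_tags = {"environment": "production", "owner": "data-platform"}
print(sorted(missing_tags(cosmos_account_tags)))  # → ['dataClassification']
```

Running such a check over exported resource tags makes gaps in the taxonomy (for example, accounts with no data classification) easy to surface.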
**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Azure Cosmos DB instances are separated by virtual network/subnet, tagged appropriately, and secured within a network security group (NSG) or Azure Firewall. Azure Cosmos DB instances storing sensitive data should be isolated. By using Azure Private Link, you can connect to an Azure Cosmos DB instance account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network. You can then limit access to the selected private IP addresses.
-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create management groups](/azure/governance/management-groups/create)
+- [How to create management groups](../governance/management-groups/create-management-group-portal.md)
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
- [How to configure a Private Endpoint for Azure Cosmos DB](how-to-configure-private-endpoints.md)
Additionally, when using virtual machines to access your Azure Cosmos DB instanc
For the underlying platform which is managed by Microsoft, Microsoft treats all customer content as sensitive and goes to great lengths to guard against customer data loss and exposure. To ensure customer data within Azure remains secure, Microsoft has implemented and maintains a suite of robust data protection controls and capabilities.
-- [Index Azure Cosmos DB data with Azure Cognitive Search](https://docs.microsoft.com/azure/search/search-howto-index-cosmosdb?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
+- [Index Azure Cosmos DB data with Azure Cognitive Search](../search/search-howto-index-cosmosdb.md?bc=%2fazure%2fcosmos-db%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcosmos-db%2ftoc.json)
- [Understand customer data protection in Azure](../security/fundamentals/protection-customer-data.md)
By default, Microsoft manages the keys that are used to encrypt the data in your
- [Understanding encryption at rest with Azure Cosmos DB](database-encryption-at-rest.md)
-- [Understanding key management for encryption at rest with Azure Cosmos DB](/azure/cosmos-db/cosmos-db-security-controls)
+- [Understanding key management for encryption at rest with Azure Cosmos DB]()
- [How to configure customer-managed keys for your Azure Cosmos DB account](how-to-setup-cmk.md)
**Guidance**: Use Azure Monitor with the Azure Activity Log to create alerts for when changes take place to production instances of Azure Cosmos DB.
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Microsoft performs system patching and vulnerability management on the underlying hosts that support your Azure Cosmos DB instances. To ensure customer data within Azure remains secure, Microsoft has implemented and maintains a suite of robust data protection controls and capabilities.
-- [Supported features available in Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows)
+- [Supported features available in Azure Security Center](../security-center/security-center-services.md?tabs=features-windows)
**Responsibility**: Shared
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understanding Azure role-based access control](../role-based-access-control/overview.md)
**Guidance**: Apply tags to your Azure Cosmos DB instances and related resources with metadata to logically organize them into a taxonomy.
-- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
-- [Which Azure Cosmos DB resources support tags](https://docs.microsoft.com/azure/azure-resource-manager/management/tag-support#microsoftdocumentdb)
+- [Which Azure Cosmos DB resources support tags](../azure-resource-manager/management/tag-support.md#microsoftdocumentdb)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
**Guidance**: Use tagging, management groups, and separate subscriptions, where appropriate, to organize and track assets, including but not limited to Azure Cosmos DB resources. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner.

-- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
-- [How to create Management Groups](/azure/governance/management-groups/create)
+- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md)
-- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
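The reconciliation guidance above (track assets, then remove unauthorized resources in a timely manner) boils down to a set comparison between discovered inventory and an approved list. A minimal sketch; all resource names and tags below are hypothetical examples, not real baseline data:

```python
# Illustrative sketch of inventory reconciliation: compare discovered resources
# against an approved list and flag anything unauthorized for review/deletion.
# All IDs and tags here are hypothetical.

def find_unauthorized(inventory, approved_ids):
    """Return resources whose IDs are not in the approved set."""
    return [r for r in inventory if r["id"] not in approved_ids]

inventory = [
    {"id": "cosmos-prod-01", "tags": {"env": "production", "owner": "data-team"}},
    {"id": "cosmos-test-99", "tags": {"env": "test"}},
]
approved = {"cosmos-prod-01"}

for resource in find_unauthorized(inventory, approved):
    print("unauthorized:", resource["id"])
```

In practice the inventory side would come from a Resource Graph query and the approved list from your CMDB; the comparison itself stays this simple.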
In addition, use the Azure Resource Graph to query for and discover resources wi
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)

-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
In addition, use the Azure Resource Graph to query for and discover resources wi
- Deploy Advanced Threat Protection for Cosmos DB Accounts
- Cosmos DB should use a virtual network service endpoint

-- [How to view available Azure Policy aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&preserve-view=true)
+- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
In addition, use the Azure Resource Graph to query for and discover resources wi
- [How to integrate with Azure Managed Identities](../azure-app-configuration/howto-integrate-azure-managed-service-identity.md)

-- [How to create a Key Vault](/azure/key-vault/quick-create-portal)
+- [How to create a Key Vault](../key-vault/secrets/quick-create-portal.md)
- [How to authenticate to Key Vault](../key-vault/general/authentication.md)
If using Key Vault to store credentials for your Cosmos DB instances, ensure reg
- [Understand Azure Cosmos DB Automated Backups](online-backup-and-restore.md)

-- [How to restore data in Azure Cosmos DB](/azure/cosmos-db/how-to-backup-and-restore)
+- [How to restore data in Azure Cosmos DB](./online-backup-and-restore.md)
- [How to backup Key Vault Keys](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey)
Test restoration of your secrets stored in Azure Key Vault using PowerShell. The
- [Understand Azure Cosmos DB Automated Backups](online-backup-and-restore.md)

-- [How to restore data in Azure Cosmos DB](/azure/cosmos-db/how-to-backup-and-restore)
+- [How to restore data in Azure Cosmos DB](./online-backup-and-restore.md)
-- [How to restore Azure Key Vault Secrets](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
+- [How to restore Azure Key Vault Secrets](/powershell/module/az.keyvault/restore-azkeyvaultkey?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Shared
Enable Soft-Delete in Key Vault to protect keys against accidental or malicious
- [Understand data encryption in Azure Cosmos DB](database-encryption-at-rest.md)

-- [How to enable Soft-Delete in Key Vault](https://docs.microsoft.com/azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal)
+- [How to enable Soft-Delete in Key Vault](../storage/blobs/soft-delete-blob-overview.md?tabs=azure-portal)
**Responsibility**: Shared
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps

-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-application.md
Before you begin this application development tutorial, you must have the follow
* [Eclipse IDE for Java EE Developers.](https://www.eclipse.org/downloads/packages/release/luna/sr1/eclipse-ide-java-ee-developers)
* [An Azure Web Site with a Java runtime environment (e.g. Tomcat or Jetty) enabled.](../app-service/quickstart-java.md)
-If you're installing these tools for the first time, coreservlets.com provides a walk-through of the installation process in the quickstart section of their [Tutorial: Installing TomCat7 and Using it with Eclipse](http://www.coreservlets.com/Apache-Tomcat-Tutorial/tomcat-7-with-eclipse.html) article.
+If you're installing these tools for the first time, coreservlets.com provides a walk-through of the installation process in the quickstart section of their [Tutorial: Installing TomCat7 and Using it with Eclipse](https://www.youtube.com/watch?v=jOdCfW7-ybI&t=2s) article.
## <a id="CreateDB"></a>Create an Azure Cosmos DB account
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spring-v3.md
You can use Spring Data Azure Cosmos DB in your [Azure Spring Cloud](https://azu
## Get started fast
- Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](https://docs.microsoft.com/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector.
+ Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector.
Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below:
cosmos-db Table Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-dotnet.md
> * [Node.js](table-sdk-nodejs.md)
> * [Python](table-sdk-python.md)
-| | Links |
+| | Links|
|||
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table)|
|**Quickstart**|[Azure Cosmos DB: Build an app with .NET and the Table API](create-table-dotnet.md)|
cost-management-billing Understand Rhel Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/understand-rhel-reservation-charges.md
Azure portal marketplace names:
- Red Hat Enterprise Linux 7.6
- Red Hat Enterprise Linux 8.2
-[Check Red Hat Enterprise Linux meters that the plan applies to](https://isfratio.blob.core.windows.net/isfratio/RHELRatios.csv)
+[Check Red Hat Enterprise Linux meters that the plan applies to](https://isfratio.blob.core.windows.net/isfratio/RHELRatios.csv)
## Next steps
To learn more about reservations, see the following articles:
## Need help? Contact us
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
cost-management-billing Understand Suse Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/understand-suse-reservation-charges.md
Previously updated : 08/06/2020 Last updated : 03/25/2021
The following tables show the software plans you can buy a reservation for, thei
### SUSE Linux Enterprise Server for HPC Priority
-Azure portal marketplace name:
-
-- SLES 12 SP3 for HPC (Priority)
-

|SUSE VM | MeterId| Ratio| Example VM size|
| -| | | |
-|SLES for HPC 1-2 vCPUs|e275a668-ce79-44e2-a659-f43443265e98|1|D2s_v3|
-|SLES for HPC 3-4 vCPUs|e531e1c0-09c9-4d83-b7d0-a2c6741faa22|2|D4s_v3|
-|SLES for HPC 5+ vCPUs|4edcd5a5-8510-49a8-a9fc-c9721f501913|2.6|D8s_v3|
+|SUSE Linux Enterprise Server for HPC Priority 1-2 vCPUs|e275a668-ce79-44e2-a659-f43443265e98|1|D2s_v3|
+|SUSE Linux Enterprise Server for HPC Priority 3-4 vCPUs|e531e1c0-09c9-4d83-b7d0-a2c6741faa22|2|D4s_v3|
+|SUSE Linux Enterprise Server for HPC Priority 5+ vCPUs|4edcd5a5-8510-49a8-a9fc-c9721f501913|2.6|D8s_v3|
### SUSE Linux Enterprise Server for HPC Standard
-Azure portal marketplace name:
-
-- SLES 12 SP3 for HPC
-

|SUSE VM | MeterId | Ratio|Example VM size|
| - | | | |
-|SLES for HPC 1-2 vCPUs |8c94ad45-b93b-4772-aab1-ff92fcec6610|1|D2s_v3|
-|SLES for HPC 3-4 vCPUs|4ed70d2d-e2bb-4dcd-b6fa-42da71861a1c|1.92308|D4s_v3|
-|SLES for HPC 5+ vCPUs |907a85de-024f-4dd6-969c-347d47a1bdff|2.92308|D8s_v3|
+|SUSE Linux Enterprise Server for HPC Standard 1-2 vCPUs |8c94ad45-b93b-4772-aab1-ff92fcec6610|1|D2s_v3|
+|SUSE Linux Enterprise Server for HPC Standard 3-4 vCPUs|4ed70d2d-e2bb-4dcd-b6fa-42da71861a1c|1.92308|D4s_v3|
+|SUSE Linux Enterprise Server for HPC Standard 5+ vCPUs |907a85de-024f-4dd6-969c-347d47a1bdff|2.92308|D8s_v3|
-### SUSE Linux Enterprise Server for SAP Priority
+### SUSE Linux Enterprise Server for SAP Standard
-Azure portal marketplace names:
-
-- SLES for SAP 15 (Priority)
-- SLES for SAP 12 SP3 (Priority)
-- SLES for SAP 12 SP2 (Priority)
+Previously, SUSE Linux Enterprise Server for SAP Standard was named SUSE Linux Enterprise Server for SAP Priority.
|SUSE VM | MeterId | Ratio|Example VM size|
| - || | |
-|SLES for SAP Priority 1-2 vCPUs|497fe0b6-fa3c-4e3d-a66b-836097244142|1|D2s_v3|
-|SLES for SAP Priority 3-4 vCPUs |847887de-68ce-4adc-8a33-7a3f4133312f|2|D4s_v3|
-|SLES for SAP Priority 5+ vCPUs |18ae79cd-dfce-48c9-897b-ebd3053c6058|2.41176|D8s_v3|
+|SUSE Linux Enterprise Server for SAP Standard 1-2 vCPUs|497fe0b6-fa3c-4e3d-a66b-836097244142|1|D2s_v3|
+|SUSE Linux Enterprise Server for SAP Standard 3-4 vCPUs |847887de-68ce-4adc-8a33-7a3f4133312f|2|D4s_v3|
+|SUSE Linux Enterprise Server for SAP Standard 5+ vCPUs |18ae79cd-dfce-48c9-897b-ebd3053c6058|2.41176|D8s_v3|
### SUSE Linux Enterprise Server Standard
-Azure portal marketplace names:
-
-- SLES 15
-- SLES 15 (Standard)
-- SLES 12 SP3 (Standard)
-

|SUSE VM | MeterId | Ratio|Example VM size|
| - || | |
-|SLES 1-2 cores vCPUs |4b2fecfc-b110-4312-8f9d-807db1cb79ae|1|D2s_v3|
-|SLES 3-4 cores vCPUs |0c3ebb4c-db7d-4125-b45a-0534764d4bda|1.92308|D4s_v3|
-|SLES 5+ vCPUs |7b349b65-d906-42e5-833f-b2af38513468|2.30769| D8s_v3|
+|SUSE Linux Enterprise Server Standard 1-2 cores vCPUs |4b2fecfc-b110-4312-8f9d-807db1cb79ae|1|D2s_v3|
+|SUSE Linux Enterprise Server Standard 3-4 cores vCPUs |0c3ebb4c-db7d-4125-b45a-0534764d4bda|1.92308|D4s_v3|
+|SUSE Linux Enterprise Server Standard 5+ vCPUs |7b349b65-d906-42e5-833f-b2af38513468|2.30769| D8s_v3|
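The ratio column in these tables expresses each meter relative to the smallest (1-2 vCPU) size, so a reservation bought for the base size covers larger sizes in proportion to their ratios. A minimal sketch of that arithmetic, using the SUSE Linux Enterprise Server Standard ratios from the table above (the helper function itself is illustrative, not from the doc):

```python
# Ratios from the SUSE Linux Enterprise Server Standard table above,
# normalized to the 1-2 vCPU meter (ratio 1.0).
RATIOS = {
    "1-2 vCPUs": 1.0,
    "3-4 vCPUs": 1.92308,
    "5+ vCPUs": 2.30769,
}

def base_units_needed(size, quantity=1):
    """Base (1-2 vCPU) reservation units consumed by `quantity` VMs of `size`."""
    return RATIOS[size] * quantity

# Two VMs in the 3-4 vCPU band consume 2 * 1.92308 = 3.84616 base units.
print(base_units_needed("3-4 vCPUs", 2))
```

The same calculation applies to the HPC and SAP tables, just with their own ratio columns.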
## Need help? Contact us
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
This article explains different compute environments that you can use to process
The following table provides a list of compute environments supported by Data Factory and the activities that can run on them.
-| Compute environment | activities |
+| Compute environment | Activities |
| | |
| [On-demand HDInsight cluster](#azure-hdinsight-on-demand-linked-service) or [your own HDInsight cluster](#azure-hdinsight-linked-service) | [Hive](transform-data-using-hadoop-hive.md), [Pig](transform-data-using-hadoop-pig.md), [Spark](transform-data-using-spark.md), [MapReduce](transform-data-using-hadoop-map-reduce.md), [Hadoop Streaming](transform-data-using-hadoop-streaming.md) |
| [Azure Batch](#azure-batch-linked-service) | [Custom](transform-data-using-dotnet-custom-activity.md) |
You create an Azure Machine Learning linked service to connect an Azure Machine
```

### Properties
+
| Property | Description | Required |
| - | - | - |
| Type | The type property should be set to: **AzureMLService**. | Yes |
data-factory Configure Bcdr Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
Last updated 03/05/2021
Azure SQL Database/Managed Instance and SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) can be combined as the recommended all-Platform as a Service (PaaS) solution for SQL Server migration. You can deploy your SSIS projects into SSIS catalog database (SSISDB) hosted by Azure SQL Database/Managed Instance and run your SSIS packages on Azure SSIS integration runtime (IR) in ADF.
-For business continuity and disaster recovery (BCDR), Azure SQL Database/Managed Instance can be configured with a [geo-replication/failover group](https://docs.microsoft.com/azure/azure-sql/database/auto-failover-group-overview), where SSISDB in a primary Azure region with read-write access (primary role) will be continuously replicated to a secondary region with read-only access (secondary role). When a disaster occurs in the primary region, a failover will be triggered, where the primary and secondary SSISDBs will swap roles.
+For business continuity and disaster recovery (BCDR), Azure SQL Database/Managed Instance can be configured with a [geo-replication/failover group](../azure-sql/database/auto-failover-group-overview.md), where SSISDB in a primary Azure region with read-write access (primary role) will be continuously replicated to a secondary region with read-only access (secondary role). When a disaster occurs in the primary region, a failover will be triggered, where the primary and secondary SSISDBs will swap roles.
For BCDR, you can also configure a dual standby Azure SSIS IR pair that works in sync with Azure SQL Database/Managed Instance failover group. This allows you to have a pair of running Azure-SSIS IRs that at any given time, only one can access the primary SSISDB to fetch and execute packages, as well as write package execution logs (primary role), while the other can only do the same for packages deployed somewhere else, for example in Azure Files (secondary role). When SSISDB failover occurs, the primary and secondary Azure-SSIS IRs will also swap roles and if both are running, there'll be a near-zero downtime.
To configure a dual standby Azure-SSIS IR pair that works in sync with Azure SQL
When [selecting to use SSISDB](./tutorial-deploy-ssis-packages-azure.md#creating-ssisdb) on the **Deployment settings** page of **Integration runtime setup** pane, select also the **Use dual standby Azure-SSIS Integration Runtime pair with SSISDB failover** check box. For **Dual standby pair name**, enter a name to identify your pair of primary and secondary Azure-SSIS IRs. When you complete the creation of your primary Azure-SSIS IR, it will be started and attached to a primary SSISDB that will be created on your behalf with read-write access. If you've just reconfigured it, you need to restart it.
-1. Using Azure portal, you can check whether the primary SSISDB has been created on the **Overview** page of your primary Azure SQL Database server. Once it's created, you can [create a failover group for your primary and secondary Azure SQL Database servers and add SSISDB to it](https://docs.microsoft.com/azure/azure-sql/database/failover-group-add-single-database-tutorial?tabs=azure-portal#2create-the-failover-group) on the **Failover groups** page. Once your failover group is created, you can check whether the primary SSISDB has been replicated to a secondary one with read-only access on the **Overview** page of your secondary Azure SQL Database server.
+1. Using Azure portal, you can check whether the primary SSISDB has been created on the **Overview** page of your primary Azure SQL Database server. Once it's created, you can [create a failover group for your primary and secondary Azure SQL Database servers and add SSISDB to it](../azure-sql/database/failover-group-add-single-database-tutorial.md?tabs=azure-portal#2create-the-failover-group) on the **Failover groups** page. Once your failover group is created, you can check whether the primary SSISDB has been replicated to a secondary one with read-only access on the **Overview** page of your secondary Azure SQL Database server.
1. Using Azure portal/ADF UI, you can create another Azure-SSIS IR with your secondary Azure SQL Database server to host SSISDB in the secondary region. This will be your secondary Azure-SSIS IR. For complete BCDR, make sure that all resources it depends on are also created in the secondary region, for example Azure Storage for storing custom setup script/files, ADF for orchestration/scheduling package executions, etc.
To configure a dual standby Azure-SSIS IR pair that works in sync with Azure SQL
1. If you [use ADF for orchestration/scheduling package executions](./how-to-invoke-ssis-package-ssis-activity.md), make sure that all relevant ADF pipelines with Execute SSIS Package activities and associated triggers are copied to your secondary ADF with the triggers initially disabled. When SSISDB failover occurs, you need to enable them.
-1. You can [test your Azure SQL Database failover group](https://docs.microsoft.com/azure/azure-sql/database/failover-group-add-single-database-tutorial?tabs=azure-portal#3test-failover) and check on [Azure-SSIS IR monitoring page in ADF portal](./monitor-integration-runtime.md#monitor-the-azure-ssis-integration-runtime-in-azure-portal) whether your primary and secondary Azure-SSIS IRs have swapped roles.
+1. You can [test your Azure SQL Database failover group](../azure-sql/database/failover-group-add-single-database-tutorial.md?tabs=azure-portal#3test-failover) and check on [Azure-SSIS IR monitoring page in ADF portal](./monitor-integration-runtime.md#monitor-the-azure-ssis-integration-runtime-in-azure-portal) whether your primary and secondary Azure-SSIS IRs have swapped roles.
## Configure a dual standby Azure-SSIS IR pair with Azure SQL Managed Instance failover group To configure a dual standby Azure-SSIS IR pair that works in sync with Azure SQL Managed Instance failover group, complete the following steps.
-1. Using Azure portal, you can [create a failover group for your primary and secondary Azure SQL Managed Instances](https://docs.microsoft.com/azure/azure-sql/managed-instance/failover-group-add-instance-tutorial?tabs=azure-portal) on the **Failover groups** page of your primary Azure SQL Managed Instance.
+1. Using Azure portal, you can [create a failover group for your primary and secondary Azure SQL Managed Instances](../azure-sql/managed-instance/failover-group-add-instance-tutorial.md?tabs=azure-portal) on the **Failover groups** page of your primary Azure SQL Managed Instance.
1. Using Azure portal/ADF UI, you can create a new Azure-SSIS IR with your primary Azure SQL Managed Instance to host SSISDB in the primary region. If you have an existing Azure-SSIS IR that's already attached to SSISDB hosted by your primary Azure SQL Managed Instance and it's still running, you need to stop it first to reconfigure it. This will be your primary Azure-SSIS IR.
To configure a dual standby Azure-SSIS IR pair that works in sync with Azure SQL
1. If you [use ADF for orchestration/scheduling package executions](./how-to-invoke-ssis-package-ssis-activity.md), make sure that all relevant ADF pipelines with Execute SSIS Package activities and associated triggers are copied to your secondary ADF with the triggers initially disabled. When SSISDB failover occurs, you need to enable them.
-1. You can [test your Azure SQL Managed Instance failover group](https://docs.microsoft.com/azure/azure-sql/managed-instance/failover-group-add-instance-tutorial?tabs=azure-portal#test-failover) and check on [Azure-SSIS IR monitoring page in ADF portal](./monitor-integration-runtime.md#monitor-the-azure-ssis-integration-runtime-in-azure-portal) whether your primary and secondary Azure-SSIS IRs have swapped roles.
+1. You can [test your Azure SQL Managed Instance failover group](../azure-sql/managed-instance/failover-group-add-instance-tutorial.md?tabs=azure-portal#test-failover) and check on [Azure-SSIS IR monitoring page in ADF portal](./monitor-integration-runtime.md#monitor-the-azure-ssis-integration-runtime-in-azure-portal) whether your primary and secondary Azure-SSIS IRs have swapped roles.
## Attach a new Azure-SSIS IR to existing SSISDB hosted by Azure SQL Database/Managed Instance
You can consider these other configuration options for your Azure-SSIS IR:
- [Configure virtual network injection for your Azure-SSIS IR](./join-azure-ssis-integration-runtime-virtual-network.md)

-- [Configure self-hosted IR as a proxy for your Azure-SSIS IR](./self-hosted-integration-runtime-proxy-ssis.md)
+- [Configure self-hosted IR as a proxy for your Azure-SSIS IR](./self-hosted-integration-runtime-proxy-ssis.md)
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
The optimal combination of **writeBatchSize** and **parallelCopies** depends on
To retrieve data from Dynamics views, you need to get the saved query of the view, and use the query to get the data.
-There are two entities which store different types of view: "saved query" stores system view and "user query" stores user view. To get the information of the views, refer to the following FetchXML query and replace the "TARGETENTITY" with `savedquery` or `userquery`. Each entity type has more available attributes that you can add to the query based on your need. Learn more about [savedquery entity](https://docs.microsoft.com/dynamics365/customer-engagement/web-api/savedquery) and [userquery entity](https://docs.microsoft.com/dynamics365/customer-engagement/web-api/userquery).
+There are two entities which store different types of view: "saved query" stores system view and "user query" stores user view. To get the information of the views, refer to the following FetchXML query and replace the "TARGETENTITY" with `savedquery` or `userquery`. Each entity type has more available attributes that you can add to the query based on your need. Learn more about [savedquery entity](/dynamics365/customer-engagement/web-api/savedquery) and [userquery entity](/dynamics365/customer-engagement/web-api/userquery).
```xml <fetch top="5000" >
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Here's an explanation of how the preceding template is constructed, broken down
* Although type-specific customization is available for datasets, you can provide configuration without explicitly having a \*-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.

> [!NOTE]
-> **Azure alerts and matrices** if configured for a pipeline are not currently supported as parameters for ARM deployments. To reapply the alerts and matrices in new environment, please follow [Data Factory Monitoring, Alerts and Matrices.](https://docs.microsoft.com/azure/data-factory/monitor-using-azure-monitor#data-factory-metrics)
+> **Azure alerts and matrices** if configured for a pipeline are not currently supported as parameters for ARM deployments. To reapply the alerts and matrices in new environment, please follow [Data Factory Monitoring, Alerts and Matrices.](./monitor-using-azure-monitor.md#data-factory-metrics)
>

### Default parameterization template
else {
        Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
    }
}
-```
+```
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
description: Learn about managed identity for Azure Data Factory.
Previously updated : 03/23/2021 Last updated : 03/25/2021
When creating a data factory, a managed identity can be created along with facto
Managed identity for Data Factory benefits the following features:

- [Store credential in Azure Key Vault](store-credentials-in-key-vault.md), in which case data factory managed identity is used for Azure Key Vault authentication.
-- Connectors including [Azure Blob storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure SQL Database](connector-azure-sql-database.md), and [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md).
-- [Web activity](control-flow-web-activity.md).
+- Access data stores or computes using managed identity authentication, including Azure Blob storage, Azure Data Explorer, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, REST, Databricks activity, Web activity, and more. Check the connector and activity articles for details.
## Generate managed identity
You can find the managed identity information from Azure portal -> your data fac
- Managed Identity Object ID
- Managed Identity Tenant
-- Managed Identity Application ID

The managed identity information will also show up when you create linked service, which supports managed identity authentication, like Azure Blob, Azure Data Lake Storage, Azure Key Vault, etc.
-When granting permission, use object ID or data factory name (as managed identity name) to find this identity.
+When granting permission, in Azure resource's Access Control (IAM) tab -> Add role assignment -> Assign access to -> select Data Factory under System assigned managed identity -> select by factory name; or in general, you can use object ID or data factory name (as managed identity name) to find this identity. If you need to get managed identity's application ID, you can use PowerShell.
### Retrieve managed identity using PowerShell
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Excel-InvalidRange

- **Message**: Invalid range is provided.
-- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](https://docs.microsoft.com/azure/data-factory/format-excel#dataset-properties).
+- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](./format-excel.md#dataset-properties).
### Error code: DF-Excel-WorksheetNotExist

- **Message**: Excel worksheet does not exist.
For more help with troubleshooting, see these resources:
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
-
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
In this document, we will primarily focus on learning fundamental concepts with
## Azure data factory UI and parameters
-If you are new to Azure data factory parameter usage in ADF user interface, please review [Data factory UI for linked services with parameters](https://docs.microsoft.com/azure/data-factory/parameterize-linked-services#data-factory-ui) and [Data factory UI for metadata driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization#data-factory-ui) for visual explanation.
+If you are new to Azure data factory parameter usage in ADF user interface, please review [Data factory UI for linked services with parameters](./parameterize-linked-services.md#data-factory-ui) and [Data factory UI for metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md#data-factory-ui) for visual explanation.
## Parameter and expression concepts
This [Azure Data factory copy pipeline parameter passing tutorial](https://azure
### Detailed Mapping data flow pipeline with parameters
-Please follow [Mapping data flow with parameters](https://docs.microsoft.com/azure/data-factory/parameters-data-flow) for comprehensive example on how to use parameters in data flow.
+Please follow [Mapping data flow with parameters](./parameters-data-flow.md) for comprehensive example on how to use parameters in data flow.
### Detailed Metadata driven pipeline with parameters
-Please follow [Metadata driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization) to learn more about how to use parameters to design metadata driven pipelines. This is a popular use case for parameters.
+Please follow [Metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md) to learn more about how to use parameters to design metadata driven pipelines. This is a popular use case for parameters.
## Next steps
-For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
+For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
You can now move your SQL Server Integration Services (SSIS) projects, packages,
- Inside the same virtual network as the managed instance, with **different subnet**.
- Inside a different virtual network than the managed instance, via virtual network peering (which is limited to the same region due to Global VNet peering constraints) or a connection from virtual network to virtual network.
- For more info on SQL Managed Instance connectivity, see [Connect your application to Azure SQL Managed Instance](/azure/sql-database/sql-database-managed-instance-connect-app).
+ For more info on SQL Managed Instance connectivity, see [Connect your application to Azure SQL Managed Instance](../azure-sql/managed-instance/connect-application-instance.md).
1. [Configure virtual network](#configure-virtual-network).
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Azure Data Factory evaluates the outcome of all leaf-level activities. Pipeline
* Implement activity-level checks by following [How to handle pipeline failures and errors](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).
* Use Azure Logic Apps to monitor pipelines in regular intervals following [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory).
-* [Visually Monitor Pipeline](https://docs.microsoft.com/azure/data-factory/monitor-visually)
+* [Visually Monitor Pipeline](./monitor-visually.md)
### How to monitor pipeline failures at regular intervals
You might need to monitor failed Data Factory pipelines at intervals, say every 5 minutes
**Resolution** * You can set up an Azure logic app to query all of the failed pipelines every 5 minutes, as described in [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory). Then, you can report incidents to your ticketing system.
-* [Visually Monitor Pipeline](https://docs.microsoft.com/azure/data-factory/monitor-visually)
+* [Visually Monitor Pipeline](./monitor-visually.md)
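The polling approach described above relies on the `Pipeline Runs - Query By Factory` REST API. As a minimal sketch, the request body for "failed runs in the last 5 minutes" can be built like this (the helper name is my own; the `lastUpdatedAfter`/`lastUpdatedBefore`/`filters` fields follow the API's documented body shape):

```python
import json
from datetime import datetime, timedelta, timezone

def build_failed_runs_query(window_minutes=5):
    """Build a request body for the Data Factory
    'Pipeline Runs - Query By Factory' REST API that selects
    pipeline runs that failed within the last time window."""
    now = datetime.now(timezone.utc)
    return {
        # Only consider runs updated inside the polling window.
        "lastUpdatedAfter": (now - timedelta(minutes=window_minutes)).isoformat(),
        "lastUpdatedBefore": now.isoformat(),
        # Filter the result set down to failed runs.
        "filters": [
            {"operand": "Status", "operator": "Equals", "values": ["Failed"]}
        ],
    }

body = build_failed_runs_query()
print(json.dumps(body, indent=2))
```

A Logic App (or any scheduler) would POST this body to the factory's `queryPipelineRuns` endpoint on each run and forward any hits to a ticketing system.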
### Degree of parallelism increase does not result in higher throughput
This can happen if you have not implemented time to live feature for Data Flow o
**Resolution**
-* If each copy activity is taking up to 2 minutes to start, and the problem occurs primarily on a VNet join (vs. Azure IR), this can be a copy performance issue. To review troubleshooting steps, go to [Copy Performance Improvement.](https://docs.microsoft.com/azure/data-factory/copy-activity-performance-troubleshooting)
-* You can use time to live feature to decrease cluster start up time for data flow activities. Please review [Data Flow Integration Runtime.](https://docs.microsoft.com/azure/data-factory/control-flow-execute-data-flow-activity#data-flow-integration-runtime)
+* If each copy activity is taking up to 2 minutes to start, and the problem occurs primarily on a VNet join (vs. Azure IR), this can be a copy performance issue. To review troubleshooting steps, go to [Copy Performance Improvement](./copy-activity-performance-troubleshooting.md).
+* You can use the time to live feature to decrease cluster start-up time for data flow activities. Please review [Data Flow Integration Runtime](./control-flow-execute-data-flow-activity.md#data-flow-integration-runtime).
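As a sketch, a Managed (Azure) integration runtime definition with a time to live for data flows might look like the following (the property names follow the ADF integration runtime JSON; the runtime name and the values shown are illustrative, and `timeToLive` is in minutes):

```json
{
    "name": "DataFlowAzureIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10
                }
            }
        }
    }
}
```

With a nonzero `timeToLive`, the Spark cluster stays warm between data flow activity runs, so subsequent activities skip the cold-start delay.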
### Hitting capacity issues in SHIR(Self Hosted Integration Runtime)
This can happen if you have not scaled up SHIR as per your workload.
**Resolution**
-* If you encounter a capacity issue from SHIR, upgrade the VM to increase the node to balance the activities. If you receive an error message about a self-hosted IR general failure or error, a self-hosted IR upgrade, or self-hosted IR connectivity issues, which can generate a long queue, go to [Troubleshoot self-hosted integration runtime.](https://docs.microsoft.com/azure/data-factory/self-hosted-integration-runtime-troubleshoot-guide)
+* If you encounter a capacity issue from SHIR, scale up the VM or add nodes to balance the activities. If you receive an error message about a self-hosted IR general failure or error, a self-hosted IR upgrade, or self-hosted IR connectivity issues, which can generate a long queue, go to [Troubleshoot self-hosted integration runtime](./self-hosted-integration-runtime-troubleshoot-guide.md).
### Error messages due to long queues for ADF Copy and Data Flow
This can happen if you have not scaled up SHIR as per your workload.
Long queue related error messages can appear for various reasons. **Resolution**
-* If you receive an error message from any source or destination via connectors, which can generate a long queue, go to [Connector Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/connector-troubleshoot-guide)
-* If you receive an error message about Mapping Data Flow, which can generate a long queue, go to [Data Flows Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/data-flow-troubleshoot-guide)
-* If you receive an error message about other activities, such as Databricks, custom activities, or HDI, which can generate a long queue, go to [Activity Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/data-factory-troubleshoot-guide)
-* If you receive an error message about running SSIS packages, which can generate a long queue, go to the [Azure-SSIS Package Execution Troubleshooting Guide](https://docs.microsoft.com/azure/data-factory/ssis-integration-runtime-ssis-activity-faq) and [Integration Runtime Management Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/ssis-integration-runtime-management-troubleshoot)
+* If you receive an error message from any source or destination via connectors, which can generate a long queue, go to the [Connector Troubleshooting Guide](./connector-troubleshoot-guide.md).
+* If you receive an error message about Mapping Data Flow, which can generate a long queue, go to the [Data Flows Troubleshooting Guide](./data-flow-troubleshoot-guide.md).
+* If you receive an error message about other activities, such as Databricks, custom activities, or HDI, which can generate a long queue, go to the [Activity Troubleshooting Guide](./data-factory-troubleshoot-guide.md).
+* If you receive an error message about running SSIS packages, which can generate a long queue, go to the [Azure-SSIS Package Execution Troubleshooting Guide](./ssis-integration-runtime-ssis-activity-faq.md) and the [Integration Runtime Management Troubleshooting Guide](./ssis-integration-runtime-management-troubleshoot.md).
## Next steps
For more troubleshooting help, try these resources:
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-and-access-control-troubleshoot-guide.md
For example: The Azure Blob Storage sink was using Azure IR (public, not Managed
#### Cause
-ADF may still use Managed VNet IR, but you could encounter such error because the public endpoint to Azure Blob Storage in Managed VNet is not reliable based on the testing result, and Azure Blob Storage and Azure Data Lake Gen2 are not supported to be connected through public endpoint from ADF Managed Virtual Network according to [Managed virtual network & managed private endpoints](https://docs.microsoft.com/azure/data-factory/managed-virtual-network-private-endpoint#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network).
+ADF may still use the Managed VNet IR, but you could encounter this error because, based on testing, the public endpoint to Azure Blob Storage from the Managed VNet is not reliable, and connections to Azure Blob Storage and Azure Data Lake Gen2 through a public endpoint from the ADF managed virtual network are not supported. For more information, see [Managed virtual network & managed private endpoints](./managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-adf-managed-virtual-network).
#### Solution
data-factory Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-baseline.md
You may use Azure PowerShell or Azure CLI to look up or perform actions on resources
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network resources related to your Azure Data Factory instances. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
You may use Azure PowerShell or Azure CLI to look up or perform actions on resources
Alternatively, you may enable and onboard data to Azure Sentinel or a third-party Security Information and Event Management (SIEM) solution. You can also integrate Azure Data Factory with Git to leverage several source control benefits, such as the ability to track/audit changes and the ability to revert changes that introduce bugs. -- [How to configure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings#create-in-azure-portal)
+- [How to configure diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md#create-in-azure-portal)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
Use diagnostic settings to configure diagnostic logs for noncompute resources in Azure Data Factory, such as metrics and pipeline-run data. Azure Data Factory stores pipeline-run data for 45 days. To retain this data for longer period of time, save your diagnostic logs to a storage account for auditing or manual inspection and specify the retention time in days. You can also stream the logs to Azure Event Hubs or send the logs to a Log Analytics workspace for analysis. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [Understand Azure Data Factory diagnostic logs](monitor-using-azure-monitor.md)
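A diagnostic setting that routes the Data Factory log categories to a storage account with a retention window might look like the following (a hedged sketch of the Azure Monitor diagnostic settings body; the resource ID segments are placeholders, and the retention of 60 days is illustrative):

```json
{
  "properties": {
    "storageAccountId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
    "logs": [
      { "category": "PipelineRuns", "enabled": true, "retentionPolicy": { "enabled": true, "days": 60 } },
      { "category": "ActivityRuns", "enabled": true, "retentionPolicy": { "enabled": true, "days": 60 } },
      { "category": "TriggerRuns", "enabled": true, "retentionPolicy": { "enabled": true, "days": 60 } }
    ]
  }
}
```

Swapping `storageAccountId` for a `workspaceId` instead sends the same categories to a Log Analytics workspace for querying rather than archival.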
Use diagnostic settings to configure diagnostic logs for noncompute resources in
If your organization would like to retain the security event log data, it can be stored within a Data Collection tier, at which point it can be queried in Log Analytics. -- [How to collect data from Azure Virtual Machines in Azure Monitor](/azure/azure-monitor/learn/quick-collect-azurevm)
+- [How to collect data from Azure Virtual Machines in Azure Monitor](../azure-monitor/vm/quick-collect-azurevm.md)
-- [Enabling Data Collection in Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-enable-data-collection#data-collection-tier)
+- [Enabling Data Collection in Azure Security Center](../security-center/security-center-enable-data-collection.md#data-collection-tier)
**Responsibility**: Customer
If your organization would like to retain the security event log data, it can b
- [How to enable diagnostic logs in Azure Data Factory](monitor-using-azure-monitor.md) -- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
If you are running your Integration Runtime in an Azure Virtual Machine (VM), en
Alternatively, you may enable and on-board data to Azure Sentinel or a third-party SIEM. -- [Log Analytics schema](https://docs.microsoft.com/azure/data-factory/monitor-using-azure-monitor#schema-of-logs-and-events)
+- [Log Analytics schema](./monitor-using-azure-monitor.md#schema-of-logs-and-events)
-- [How to collect data from an Azure Virtual Machine with Azure Monitor](/azure/azure-monitor/learn/quick-collect-azurevm)
+- [How to collect data from an Azure Virtual Machine with Azure Monitor](../azure-monitor/vm/quick-collect-azurevm.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
Configure diagnostic settings for Azure Data Factory and send logs to a Log Anal
Additionally, ensure that you enable diagnostic settings for services related to your data stores. You can refer to each service's security baseline for guidance. -- [Alerts in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/monitor-visually#alerts)
+- [Alerts in Azure Data Factory](./monitor-visually.md#alerts)
-- [All supported metrics page](/azure/azure-monitor/platform/metrics-supported)
+- [All supported metrics page](../azure-monitor/essentials/metrics-supported.md)
-- [How to configure alerts in Log Analytics Workspace](/azure/azure-monitor/platform/alerts-log)
+- [How to configure alerts in Log Analytics Workspace](../azure-monitor/alerts/alerts-log.md)
**Responsibility