Updates from: 04/09/2022 01:10:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
Previously updated : 12/02/2021 Last updated : 04/08/2022
You typically use only one identity provider in your applications, but you have
* [Google](identity-provider-google.md) * [LinkedIn](identity-provider-linkedin.md) * [Microsoft Account](identity-provider-microsoft-account.md)
+* [Mobile ID](identity-provider-mobile-id.md)
* [PingOne](identity-provider-ping-one.md) (PingIdentity) * [QQ](identity-provider-qq.md) * [Salesforce](identity-provider-salesforce.md)
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
+
+ Title: Set up sign-up and sign-in with Mobile ID
+
+description: Provide sign-up and sign-in to customers with Mobile ID in your applications using Azure Active Directory B2C.
+++++++ Last updated : 04/08/2022++
+zone_pivot_groups: b2c-policy-type
++
+# Set up sign-up and sign-in with Mobile ID using Azure Active Directory B2C
++
+In this article, you learn how to provide sign-up and sign-in to customers with [Mobile ID](https://www.mobileid.ch) in your applications using Azure Active Directory B2C (Azure AD B2C). The Mobile ID solution protects access to your company data and applications with a comprehensive end-to-end solution for strong multi-factor authentication (MFA). You add Mobile ID to your user flows or custom policy by using the OpenID Connect protocol.
+
+## Prerequisites
++
+## Create a Mobile ID application
+
+To enable sign-in for users with Mobile ID in Azure AD B2C, you need to create an application. To create a Mobile ID application, follow these steps:
+
+1. Contact [Mobile ID support](https://www.mobileid.ch/en/contact).
+1. Provide Mobile ID with the following information about your Azure AD B2C tenant:
++
+ |Key |Note |
+ |||
+ |Redirect URI | Provide the `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` URI. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. |
+ |Token endpoint authentication method| `client_secret_post`|
++
+1. After the app is registered, Mobile ID provides the following information. Use this information to configure your user flow or custom policy.
+
+ |Key |Note |
+ |||
+ | Client ID | The Mobile ID client ID. For example, 11111111-2222-3333-4444-555555555555. |
+ | Client Secret| The Mobile ID client secret.|
+++
+## Configure Mobile ID as an identity provider
+
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Identity providers**, and then select **New OpenID Connect provider**.
+1. Enter a **Name**. For example, enter *Mobile ID*.
+1. For **Metadata url**, enter the URL of the Mobile ID OpenID Connect well-known configuration endpoint. For example:
+
+ ```http
+ https://openid.mobileid.ch/.well-known/openid-configuration
+ ```
+
+1. For **Client ID**, enter the Mobile ID Client ID.
+1. For **Client secret**, enter the Mobile ID client secret.
+1. For **Scope**, enter `openid, profile, phone, mid_profile`.
+1. Leave the default values for **Response type** (`code`), and **Response mode** (`form_post`).
+1. (Optional) For the **Domain hint**, enter `mobileid.ch`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Under **Identity provider claims mapping**, select the following claims:
+
+ - **User ID**: *sub*
+ - **Display name**: *name*
++
+1. Select **Save**.
+
+## Add Mobile ID identity provider to a user flow
+
+At this point, the Mobile ID identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Mobile ID identity provider to a user flow:
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow to which you want to add the Mobile ID identity provider.
+1. Under **Social identity providers**, select **Mobile ID**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run user flow** button.
+1. From the sign-up or sign-in page, select **Mobile ID** to sign in with Mobile ID.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+++
+## Create a policy key
+
+You need to store the client secret that you received from Mobile ID in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+4. On the Overview page, select **Identity Experience Framework**.
+5. Select **Policy Keys** and then select **Add**.
+6. For **Options**, choose `Manual`.
+7. Enter a **Name** for the policy key. For example, `MobileIdSecret`. The prefix `B2C_1A_` is added automatically to the name of your key, so the full name becomes `B2C_1A_MobileIdSecret`.
+8. In **Secret**, enter your Mobile ID client secret.
+9. For **Key usage**, select `Signature`.
+10. Select **Create**.
+
+## Configure Mobile ID as an identity provider
+
+To enable users to sign in using Mobile ID, you need to define Mobile ID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+
+You can define Mobile ID as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
+
+1. Open the *TrustFrameworkExtensions.xml*.
+2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+3. Add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <ClaimsProvider>
+ <Domain>mobileid.ch</Domain>
+ <DisplayName>Mobile-ID</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="MobileID-OAuth2">
+ <DisplayName>Mobile-ID</DisplayName>
+ <Protocol Name="OAuth2" />
+ <Metadata>
+ <Item Key="ProviderName">Mobile-ID</Item>
+ <Item Key="authorization_endpoint">https://m.mobileid.ch/oidc/authorize</Item>
+ <Item Key="AccessTokenEndpoint">https://openid.mobileid.ch/token</Item>
+ <Item Key="ClaimsEndpoint">https://openid.mobileid.ch/userinfo</Item>
+ <Item Key="scope">openid, profile, phone, mid_profile</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="token_endpoint_auth_method">client_secret_post</Item>
+ <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
+ <Item Key="client_id">Your application ID</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_MobileIdSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="mobileid.ch" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+4. Set **client_id** to the Mobile ID client ID.
+5. Save the file.
+++
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="MobileIDExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="MobileIDExchange" TechnicalProfileReferenceId="MobileID-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
++
+## Test your custom policy
+
+1. Select your relying party policy, for example `B2C_1A_signup_signin`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run now** button.
+1. From the sign-up or sign-in page, select **Mobile ID** to sign in with Mobile ID.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+++
+## Next steps
+
+Learn how to [pass Mobile ID token to your application](idp-pass-through-user-flow.md).
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 09/22/2021 Last updated : 04/08/2022
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
+**2.1.9**
+
+- TOTP multifactor authentication support. Added links that allow users to download and install the Microsoft Authenticator app to complete TOTP enrollment in the authenticator.
+ **2.1.8** - The claim name is added to the `class` attribute of the `<li>` HTML element that surrounds the user's attribute input elements. The class name allows you to create a CSS selector to select the parent `<li>` for a certain user attribute input element. The following HTML markup shows the class attribute for the sign-up page:
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
> [!TIP] > If you localize your page to support multiple locales or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select. +
+**2.1.7**
+
+- Accessibility fix - corrected the tab index
+
+**2.1.6**
+
+- Accessibility fix - set the focus on the input field for verification.
+- Updates to the UI elements and CSS classes
+ **2.1.5** - Fixed an issue with tab order when the IdP selector template is used on the sign-in page. - Fixed an encoding issue on the sign-in link text.
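
If you use custom policies, you select the page layout version per content definition. A minimal sketch, assuming you want to pin the self-asserted page to version 2.1.9 (the content definition ID and version shown are illustrative), looks like this in your policy's content definitions:

```xml
<!-- Illustrative sketch: pins the self-asserted content definition to page layout version 2.1.9 -->
<ContentDefinition Id="api.selfasserted">
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.9</DataUri>
</ContentDefinition>
```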
active-directory-b2c Saml Issuer Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-issuer-technical-profile.md
Previously updated : 10/12/2020 Last updated : 04/08/2022
The CryptographicKeys element contains the following attributes:
| | -- | -- | | MetadataSigning | Yes | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. | | SamlMessageSigning| Yes| Specify the X509 certificate (RSA key set) to use to sign SAML messages. Azure AD B2C uses this key to sign the `<samlp:Response>` response sent to the relying party.|
+| SamlAssertionSigning| No| Specify the X509 certificate (RSA key set) to use to sign the SAML assertion `<saml:Assertion>` element of the SAML token. If not provided, the `SamlMessageSigning` cryptographic key is used instead.|
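
As a hedged sketch of how these keys are referenced (the policy key container name `B2C_1A_SamlIdpCert` is a placeholder), the SAML issuer technical profile's **CryptographicKeys** element might look like this:

```xml
<!-- Illustrative sketch: StorageReferenceId values are placeholders for your own policy keys -->
<CryptographicKeys>
  <Key Id="MetadataSigning" StorageReferenceId="B2C_1A_SamlIdpCert" />
  <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SamlIdpCert" />
  <Key Id="SamlAssertionSigning" StorageReferenceId="B2C_1A_SamlIdpCert" />
</CryptographicKeys>
```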
## Session management
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Previously updated : 03/21/2022 Last updated : 04/08/2022
# B2B direct connect overview (Preview)
-Azure Active Directory (Azure AD) B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration. With B2B direct connect, users from both organizations can work together using their home credentials and B2B direct connect-enabled apps, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Azure AD organizations. Or use it to share resources across multiple Azure AD tenants within your own organization.
+Azure Active Directory (Azure AD) B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration. This feature currently works with Microsoft Teams shared channels. With B2B direct connect, users from both organizations can work together using their home credentials and a shared channel in Teams, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Azure AD organizations. Or use it to share resources across multiple Azure AD tenants within your own organization.
![Diagram illustrating B2B direct connect](media/b2b-direct-connect-overview/b2b-direct-connect-overview.png) B2B direct connect requires a mutual trust relationship between two Azure AD organizations to allow access to each other's resources. Both the resource organization and the external organization need to mutually enable B2B direct connect in their cross-tenant access settings. When the trust is established, the B2B direct connect user has single sign-on access to resources outside their organization using credentials from their home Azure AD organization.
-Currently, B2B direct connect capabilities work with Teams Connect shared channels. This means that users in one organization can create a shared channel in Teams and invite an external B2B direct connect user to it. Then from within Teams, the B2B direct connect user can seamlessly access the shared channel in their home tenant Teams instance, without having to manually sign in to the organization hosting the shared channel.
+Currently, B2B direct connect capabilities work with Teams shared channels. When B2B direct connect is established between two organizations, users in one organization can create a shared channel in Teams and invite an external B2B direct connect user to it. Then from within Teams, the B2B direct connect user can seamlessly access the shared channel in their home tenant Teams instance, without having to manually sign in to the organization hosting the shared channel.
For licensing and pricing information related to B2B direct connect users, refer to [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
The [Authentication Policy Administrator](#authentication-policy-administrator)
> [!IMPORTANT] > This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell module.
+Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+ > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Users with this role can change passwords, invalidate refresh tokens, create and
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
+Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+ Delegating administrative permissions over subsets of users and applying policies to a subset of users is possible with [Administrative Units](administrative-units.md). This role was previously called "Password Administrator" in the [Azure portal](https://portal.azure.com/). The "Helpdesk Administrator" name in Azure AD now matches its name in Azure AD PowerShell and the Microsoft Graph API.
Do not use. This role has been deprecated and will be removed from Azure AD in t
Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
+Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+ > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Users with this role can create users, and manage all aspects of users with some
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
+Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+ > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Privileged Authentication Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_che
Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User<br/>(no admin role, but member of a role-assignable group) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+User<br/>(no admin role, but member or owner of a role-assignable group) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
active-directory Andromedascm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/andromedascm-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Andromeda test user
-In this section, a user called Britta Simon is created in Andromeda. Andromeda supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Andromeda, a new one is created after authentication. If you need to create a user manually, contact [Andromeda Client support team](https://www.ngcsoftware.com/support/).
+In this section, a user called Britta Simon is created in Andromeda. Andromeda supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Andromeda, a new one is created after authentication. If you need to create a user manually, contact Andromeda Client support team.
## Test SSO
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-sap-erp-easy-button.md
Prior BIG-IP experience isn't necessary, but you will need:
* An Azure AD free subscription or above
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced/f5-bigip-deployment-guide)
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-bigip-deployment-guide)
* Any of the following F5 BIG-IP license offers
Easy Button provides a set of pre-defined application templates for Oracle Peopl
When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
-As our example AD infrastructure is based on a .com domain suffix used both, internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced/f5-big-ip-kerberos-advanced) for cases where you have multiple domains or user's log-in using an alternate suffix.
+As our example AD infrastructure is based on a .com domain suffix used both, internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced) for cases where you have multiple domains or user's log-in using an alternate suffix.
![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
You can fail to access the SHA protected application due to any number of factor
* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: setspn -q HTTP/my_target_SPN
-You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
### Log analysis
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Github Enterprise Managed User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both GitHub Enterprise Managed User and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to GitHub Enterprise Managed User using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). > [!NOTE]
-> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO and user provisioning implementation. If you haven't specifically requested EMU instance, you have standard GitHub Enterprise Cloud plan. In that case, please refer to [the documentation](./github-provisioning-tutorial.md) to configure user provisioning in your non-EMU organisation. User provisioning is not supported for [GitHub Enteprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts)
+> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO and user provisioning implementation. If you haven't specifically requested an EMU instance, you have a standard GitHub Enterprise Cloud plan. In that case, please refer to [the documentation](./github-provisioning-tutorial.md) to configure user provisioning in your non-EMU organization. User provisioning is not supported for [GitHub Enterprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts)
## Capabilities Supported > [!div class="checklist"]
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Maverics Identity Orchestrator Saml Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md
Edit the browser machine's (your laptop's) hosts file, using a hypothetical Orch
12.34.56.78 connectulum.maverics.com ```
-To confirm that DNS is configured as expected, you can make a request to the Orchestrator's status endpoint. From your browser, request http://sonar.maverics.com:7474/status.
+To confirm that DNS is configured as expected, you can make a request to the Orchestrator's status endpoint. From your browser, request `http://sonar.maverics.com:7474/status`.
### Configure TLS
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
This article lists the latest features, improvements, and changes in the Azure A
## March 2022 - Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure Portal.
+- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for iOS. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)
## February 2022
We are rolling out some breaking changes to our service. These updates require A
- The Azure AD Verifiable Credentials service can now store and handle data processing in the Azure European region. [More information](whats-new.md?#azure-ad-verifiable-credentials-available-in-europe) - Azure AD Verifiable Credentials customers can take advantage of enhancements to credential revocation. These changes add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy)-- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-android-did-generation-update)
+- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)
>[!IMPORTANT] > All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service).
Sample contract file:
>[!IMPORTANT] > You have to reconfigure your Azure AD Verifiable Credential service instance to create your new Identity hub endpoint. You have until March 31st 2022, to schedule and manage the reconfiguration of your deployment. On March 31st, 2022 deployments that have not been reconfigured will lose access to any previous Azure AD Verifiable Credentials service configuration. Administrators will need to set up a new service instance.
-### Microsoft Authenticator Android DID Generation Update
+### Microsoft Authenticator DID Generation Update
-We are making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used of every issuer and relaying party exchange. Holders of verifiable credentials using Microsoft Authenticator for Android must get their verifiable credentials reissued as any previous credentials aren't going to continue working.
+We are making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used for every issuer and relying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued as any previous credentials aren't going to continue working.
## December 2021
advisor Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-overview.md
Title: Introduction to Azure Advisor description: Use Azure Advisor to optimize your Azure deployments. Previously updated : 09/27/2020 Last updated : 04/07/2022 # Introduction to Azure Advisor
The Advisor dashboard displays personalized recommendations for all your subscri
* **Security**: To detect threats and vulnerabilities that might lead to security breaches. For more information, see [Advisor Security recommendations](advisor-security-recommendations.md). * **Performance**: To improve the speed of your applications. For more information, see [Advisor Performance recommendations](advisor-performance-recommendations.md). * **Cost**: To optimize and reduce your overall Azure spending. For more information, see [Advisor Cost recommendations](advisor-cost-recommendations.md).
-* **Operational Excellence**: To help you achieve process and workflow efficiency, resource manageability and deployment best practices. . For more information, see [Advisor Operational Excellence recommendations](advisor-operational-excellence-recommendations.md).
+* **Operational Excellence**: To help you achieve process and workflow efficiency, resource manageability and deployment best practices. For more information, see [Advisor Operational Excellence recommendations](advisor-operational-excellence-recommendations.md).
![Advisor recommendation types](./media/advisor-overview/advisor-dashboard.png)
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Title: Customize the node configuration for Azure Kubernetes Service (AKS) node pools (preview)
+ Title: Customize the node configuration for Azure Kubernetes Service (AKS) node pools
description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools.
-# Customize node configuration for Azure Kubernetes Service (AKS) node pools (preview)
+# Customize node configuration for Azure Kubernetes Service (AKS) node pools
Customizing your node configuration allows you to configure or tune your operating system (OS) settings or the kubelet parameters to match the needs of the workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility).
-## Register the `CustomNodeConfigPreview` preview feature
-
-To create an AKS cluster or node pool that can customize the kubelet parameters or OS settings, you must enable the `CustomNodeConfigPreview` feature flag on your subscription.
-
-Register the `CustomNodeConfigPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "CustomNodeConfigPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CustomNodeConfigPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
--
-## Install aks-preview CLI extension
-
-To create an AKS cluster or a node pool that can customize the kubelet parameters or OS settings, you need the latest *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- ## Use custom node configuration ### Kubelet custom configuration
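
As a hedged example of how these customizations are applied (the cluster, resource group, and file names are placeholders), the kubelet and Linux OS settings are supplied as JSON files when you create a cluster or node pool:

```azurecli
# Illustrative sketch: resource and file names are placeholders
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --kubelet-config ./kubeletconfig.json \
    --linux-os-config ./linuxosconfig.json
```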
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Title: Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
+ Title: Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster
description: Learn how to enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster for securing your pods. Last updated 11/01/2021
-# Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
+# Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster
[Group Managed Service Accounts (GMSA)][gmsa-overview] is a managed domain account for multiple servers that provides automatic password management, simplified service principal name (SPN) management and the ability to delegate the management to other administrators. AKS provides the ability to enable GMSA on your Windows Server nodes, which allows containers running on Windows Server nodes to integrate with and be managed by GMSA.
-Enabling GMSA with Windows Server nodes on AKS is in preview.
- ## Pre-requisites Enabling GMSA with Windows Server nodes on AKS requires: * Kubernetes 1.19 or greater.
-* The `aks-preview` extension version 0.5.37 or greater.
-* The Docker container runtime, which is currently the default.
+* Azure CLI version 2.35.0 or greater
* [Managed identities][aks-managed-id] with your AKS cluster. * Permissions to create or update an Azure Key Vault. * Permissions to configure GMSA on Active Directory Domain Service or on-prem Active Directory. * The domain controller must have Active Directory Web Services enabled and must be reachable on port 9389 by the AKS cluster.
-### Install the `aks-preview` Azure CLI
-
-You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `AKSWindowsGmsaPreview` preview feature
-
-To use the feature, you must also enable the `AKSWindowsGmsaPreview` feature flag on your subscription.
-
-Register the `AKSWindowsGmsaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKSWindowsGmsaPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSWindowsGmsaPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Configure GMSA on Active Directory domain controller To use GMSA with AKS, you need both GMSA and a standard domain user credential to access the GMSA credential configured on your domain controller. To configure GMSA on your domain controller, see [Getting Started with Group Managed Service Accounts][gmsa-getting-started]. For the standard domain user credential, you can use an existing user or create a new one, as long as it has access to the GMSA credential.
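
As a hedged illustration (the account, domain, and principal names are placeholders for your environment), a GMSA is typically created on the domain controller with the ActiveDirectory PowerShell module once a KDS root key exists:

```powershell
# Illustrative sketch: names are placeholders; requires the ActiveDirectory module on a domain controller
Add-KdsRootKey -EffectiveImmediately
New-ADServiceAccount -Name "aksGMSA" `
    -DNSHostName "aksGMSA.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "aksHostsGroup"
```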
application-gateway Troubleshoot App Service Redirection App Service Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/troubleshoot-app-service-redirection-app-service-url.md
Learn how to diagnose and resolve issues you might encounter when Azure App Serv
## Overview
-In this article, you'll learn how to troubleshoot the following issues, as described in more detail in Architecture Center: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation.md#potential-issues)
+In this article, you'll learn how to troubleshoot the following issues, as described in more detail in Architecture Center: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation#potential-issues)
-* [Incorrect absolute URLs](/azure/architecture/best-practices/host-name-preservation.md#incorrect-absolute-urls)
-* [Incorrect redirect URLs](/azure/architecture/best-practices/host-name-preservation.md#incorrect-redirect-urls)
+* [Incorrect absolute URLs](/azure/architecture/best-practices/host-name-preservation#incorrect-absolute-urls)
+* [Incorrect redirect URLs](/azure/architecture/best-practices/host-name-preservation#incorrect-redirect-urls)
* the app service URL is exposed in the browser when there's a redirection * an example of this: an OIDC authentication flow is broken because of a redirect with wrong hostname; this includes the use of [App Service Authentication and Authorization](../app-service/overview-authentication-authorization.md)
-* [Broken cookies](/azure/architecture/best-practices/host-name-preservation.md#broken-cookies)
+* [Broken cookies](/azure/architecture/best-practices/host-name-preservation#broken-cookies)
* cookies are not propagated between the browser and the App Service * an example of this: the app service ARRAffinity cookie domain is set to the app service host name and is tied to "example.azurewebsites.net", instead of the original host. As a result, session affinity is broken.
attestation Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-Bicep.md
+
+ Title: Create an Azure Attestation certificate by using Bicep
+description: Learn how to create an Azure Attestation certificate by using Bicep.
++++++ Last updated : 03/08/2022++
+# Quickstart: Create an Azure Attestation provider with a Bicep file
+
+[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying a Bicep file to create a Microsoft Azure Attestation policy.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/attestation-provider-create/).
++
+Azure resources defined in the Bicep file:
+
+- [Microsoft.Attestation/attestationProviders](/azure/templates/microsoft.attestation/attestationproviders?tabs=bicep)
+
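
A minimal sketch of what the Bicep file defines might look like the following; the provider name, default location, and API version here are assumptions, so use the quickstart template linked above as the authoritative source:

```bicep
// Illustrative sketch only: name and API version are assumptions
param attestationProviderName string = 'myattestationprovider'
param location string = resourceGroup().location

resource attestationProvider 'Microsoft.Attestation/attestationProviders@2021-06-01' = {
  name: attestationProviderName
  location: location
  properties: {}
}
```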
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to verify that the resource group and attestation provider resource were created.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+Other Azure Attestation quickstarts and tutorials build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you may wish to leave these resources in place.
+
+When no longer needed, delete the resource group, which deletes the Attestation resource. To delete the resource group by using Azure CLI or Azure PowerShell:
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created an attestation resource using a Bicep file, and validated the deployment. To learn more about Azure Attestation, see [Overview of Azure Attestation](overview.md).
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
The Hybrid Runbook Worker feature supports the following distributions. All oper
* Oracle Linux 6, 7, and 8 * Red Hat Enterprise Linux Server 5, 6, 7, and 8 * Debian GNU/Linux 6, 7, and 8
-* Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS, 18.04, and 20.04 LTS
-* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
+* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
+* Ubuntu
+
+ **Linux OS** | **Name** |
+ | |
+ 20.04 LTS | Focal Fossa
+ 18.04 LTS | Bionic Beaver
+ 16.04 LTS | Xenial Xerus
+ 14.04 LTS | Trusty Tahr
> [!IMPORTANT] > Before enabling the Update Management feature, which depends on the system Hybrid Runbook Worker role, confirm the distributions it supports [here](update-management/operating-system-requirements.md).
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
To schedule a new update deployment, perform the following steps. Depending on t
12. Use the **Maintenance window (minutes)** field to specify the amount of time allowed for updates to install. Consider the following details when specifying a maintenance window: * Maintenance windows control how many updates are installed.
+ * If the next step in the update process is to install a Service Pack, there must be 20 minutes left in the maintenance window, or that update will be skipped.
+ * If the next step in the update process is to install any other kind of update besides Service Pack, there must be 15 minutes left in the maintenance window, or that update will be skipped.
+ * If the next step in the update process is a reboot, there must be 10 minutes left in the maintenance window, or the reboot will be skipped.
* Update Management doesn't stop installing new updates if the end of a maintenance window is approaching. * Update Management doesn't terminate in-progress updates if the maintenance window is exceeded. Any remaining updates to be installed are not attempted. If this is consistently happening, you should reevaluate the duration of your maintenance window. * If the maintenance window is exceeded on Windows, it's often because a service pack update is taking a long time to install.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
You can deploy and install software updates on machines that require the updates
The scheduled deployment defines which target machines receive the applicable updates. It does so either by explicitly specifying certain machines or by selecting a [computer group](../../azure-monitor/logs/computer-groups.md) that's based on log searches of a specific set of machines (or based on an [Azure query](query-logs.md) that dynamically selects Azure VMs based on specified criteria). These groups differ from [scope configuration](../../azure-monitor/insights/solution-targeting.md), which is used to control the targeting of machines that receive the configuration to enable Update Management. This prevents them from performing and reporting update compliance, and from installing approved required updates.
-While defining a deployment, you also specify a schedule to approve and set a time period during which updates can be installed. This period is called the maintenance window. A 20-minute span of the maintenance window is reserved for reboots, assuming one is needed and you selected the appropriate reboot option. If patching takes longer than expected and there's less than 20 minutes in the maintenance window, a reboot won't occur.
+While defining a deployment, you also specify a schedule to approve and set a time period during which updates can be installed. This period is called the maintenance window. A 10-minute span of the maintenance window is reserved for reboots, assuming one is needed and you selected the appropriate reboot option. If patching takes longer than expected and there's less than 10 minutes in the maintenance window, a reboot won't occur.
After an update package is scheduled for deployment, it takes 2 to 3 hours for the update to show up for Linux machines for assessment. For Windows machines, it takes 12 to 15 hours for the update to show up for assessment after it's been released. Before and after update installation, a scan for update compliance is performed and the log data results is forwarded to the workspace.
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2| |ARM API version|2021-11-01| |`arcdata` Azure CLI extension version| 1.3.0|
-|Arc enabled Kubernetes helm chart extension version|1.1.19091004|
-|Arc Data extension for Azure Data Studio|1.0|
+|Arc enabled Kubernetes helm chart extension version|1.1.19211001|
+|Arc Data extension for Azure Data Studio|1.1.0|
## March 08, 2022
azure-arc Deploy Azure Iot Edge Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/deploy-azure-iot-edge-workloads.md
- Title: "Deploy Azure IoT Edge workloads"--
-#
Previously updated : 03/03/2021-
-description: "Deploy Azure IoT Edge workloads"
-keywords: "Kubernetes, Arc, Azure, K8s, containers"
---
-# Deploy Azure IoT Edge workloads
-
-## Overview
-
-Azure Arc and Azure IoT Edge easily complement each other's capabilities.
-
-Azure Arc provides mechanisms for cluster operators to configure the foundational components of a cluster, and apply and enforce cluster policies.
-
-Azure IoT Edge allows application operators to remotely deploy and manage the workloads at scale with convenient cloud ingestion and bi-directional communication primitives.
-
-The diagram below illustrates Azure Arc and Azure IoT Edge's relationship:
-
-![IoT Arc configuration](./media/edge-arc.png)
-
-## Pre-requisites
-
-* [Register an IoT Edge device](../../iot-edge/quickstart-linux.md#register-an-iot-edge-device) and [deploy the simulated temperature sensor module](../../iot-edge/quickstart-linux.md#deploy-a-module). Note the device's connection string for the *values.yaml* mentioned below.
-
-* Use [IoT Edge's support for Kubernetes](https://aka.ms/edgek8sdoc) to deploy it via Azure Arc's Flux operator.
-
-* Download the [*values.yaml*](https://github.com/Azure/iotedge/blob/preview/iiot/kubernetes/charts/edge-kubernetes/values.yaml) file for IoT Edge Helm chart and replace the `deviceConnectionString` placeholder at the end of the file with the connection string you noted earlier. Set any other supported chart installation options as needed. Create a namespace for the IoT Edge workload and generate a secret in it:
-
- ```
- $ kubectl create ns iotedge
-
- $ kubectl create secret generic dcs --from-file=fully-qualified-path-to-values.yaml --namespace iotedge
- ```
-
- You can also set up remotely using the [cluster config example](./tutorial-use-gitops-connected-cluster.md).
-
-## Connect a cluster
-
-Use the `az` Azure CLI `connectedk8s` extension to connect a Kubernetes cluster to Azure Arc:
-
- ```
- az connectedk8s connect --name AzureArcIotEdge --resource-group AzureArcTest
- ```
-
-## Create a configuration for IoT Edge
-
-The [example Git repo](https://github.com/veyalla/edgearc) points to the IoT Edge Helm chart and references the secret created in the pre-requisites section.
-
-Use the `az` Azure CLI `k8s-configuration` extension to create a configuration that links the connected cluster to the Git repo:
-
- ```
- az k8s-configuration create --name iotedge --cluster-name AzureArcIotEdge --resource-group AzureArcTest --operator-instance-name iotedge --operator-namespace azure-arc-iot-edge --enable-helm-operator --helm-operator-chart-version 0.6.0 --helm-operator-chart-values "--set helm.versions=v3" --repository-url "git://github.com/veyalla/edgearc.git" --cluster-scoped
- ```
-
-In a few minutes, you should see the IoT Edge workload modules deployed into your cluster's `iotedge` namespace.
-
-View the `SimulatedTemperatureSensor` pod logs in that namespace to see the sample values being generated. You can also watch the messages arrive at your IoT hub by using the [Azure IoT Hub Toolkit extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
-
-## Cleanup
-
-Remove the configuration using:
-
-```
-az k8s-configuration delete -g AzureArcTest --cluster-name AzureArcIotEdge --name iotedge
-```
-
-## Next steps
-
-Learn how to [use Azure Policy to govern cluster configuration](./use-azure-policy.md).
azure-cache-for-redis Cache Dotnet Core Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-dotnet-core-quickstart.md
ms.devlang: csharp Previously updated : 06/18/2020
-#Customer intent: As a .NET Core developer, new to Azure Cache for Redis, I want to create a new .NET Core app that uses Azure Cache for Redis.
Last updated : 03/25/2022+ # Quickstart: Use Azure Cache for Redis in .NET Core
In this quickstart, you incorporate Azure Cache for Redis into a .NET Core app t
## Skip to the code on GitHub
-If you want to skip straight to the code, see the [.NET Core quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet-core) on GitHub.
+Clone the repo [https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet-core](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet-core) on GitHub.
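
For example, you can clone it locally with Git (assuming Git is installed):

```dos
git clone https://github.com/Azure-Samples/azure-cache-redis-samples.git
```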
## Prerequisites
If you want to skip straight to the code, see the [.NET Core quickstart](https:/
- [.NET Core SDK](https://dotnet.microsoft.com/download) ## Create a cache [!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)]
-Make a note of the **HOST NAME** and the **Primary** access key. You will use these values later to construct the *CacheConnection* secret.
---
-## Create a console app
-
-Open a new command window and execute the following command to create a new .NET Core console app:
-
-```
-dotnet new console -o Redistest
-```
-
-In your command window, change to the new *Redistest* project directory.
---
-## Add Secret Manager to the project
-
-In this section, you will add the [Secret Manager tool](/aspnet/core/security/app-secrets) to your project. The Secret Manager tool stores sensitive data for development work outside of your project tree. This approach helps prevent the accidental sharing of app secrets within source code.
-
-Open your *Redistest.csproj* file. Add a `DotNetCliToolReference` element to include *Microsoft.Extensions.SecretManager.Tools*. Also add a `UserSecretsId` element as shown below, and save the file.
-
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>net5.0</TargetFramework>
- <UserSecretsId>Redistest</UserSecretsId>
- </PropertyGroup>
- <ItemGroup>
- <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
- </ItemGroup>
-</Project>
-```
+Make a note of the **HOST NAME** and the **Primary** access key. You'll use these values later to construct the *CacheConnection* secret.
-Execute the following command to add the *Microsoft.Extensions.Configuration.UserSecrets* package to the project:
-
-```
-dotnet add package Microsoft.Extensions.Configuration.UserSecrets
-```
-
-Execute the following command to restore your packages:
-
-```
-dotnet restore
-```
+## Add a local secret for the connection string
In your command window, execute the following command to store a new secret named *CacheConnection*, after replacing the placeholders (including angle brackets) for your cache name and primary access key:
-```
+```dos
dotnet user-secrets set CacheConnection "<cache name>.redis.cache.windows.net,abortConnect=false,ssl=true,allowAdmin=true,password=<primary-access-key>" ```
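The quickstart sample reads this secret back through the .NET configuration system before it connects. The exact code lives in the sample repo; a minimal sketch of that pattern, assuming the *Microsoft.Extensions.Configuration.UserSecrets* package and a `<UserSecretsId>` element in the project file, looks like this:

```csharp
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        // Minimal sketch (not the sample's exact code): build configuration from
        // user secrets and read the CacheConnection value back.
        IConfigurationRoot configuration = new ConfigurationBuilder()
            .AddUserSecrets<Program>()
            .Build();

        string connectionString = configuration["CacheConnection"];
        System.Console.WriteLine(connectionString != null ? "Secret found." : "Secret missing.");
    }
}
```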
-Add the following `using` statement to *Program.cs*:
+## Connect to the cache with RedisConnection
-```csharp
-using Microsoft.Extensions.Configuration;
-```
-
-Add the following members to the `Program` class in *Program.cs*. This code initializes a configuration to access the user secret for the Azure Cache for Redis connection string.
+The connection to your cache is managed by the `RedisConnection` class. The connection is first made in this statement from `Program.cs`:
```csharp
-private static IConfigurationRoot Configuration { get; set; }
-const string SecretName = "CacheConnection";
+ _redisConnection = await RedisConnection.InitializeAsync(connectionString: configuration["CacheConnection"].ToString());
-private static void InitializeConfiguration()
-{
- var builder = new ConfigurationBuilder()
- .AddUserSecrets<Program>();
-
- Configuration = builder.Build();
-}
```
-## Configure the cache client
-
-In this section, you will configure the console application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
-
-In your command window, execute the following command in the *Redistest* project directory:
-
-```
-dotnet add package StackExchange.Redis
-```
-
-Once the installation is completed, the *StackExchange.Redis* cache client is available to use with your project.
--
-## Connect to the cache
-
-Add the following `using` statement to *Program.cs*:
+In `RedisConnection.cs`, you can see that the `StackExchange.Redis` namespace has been added to the code. The `RedisConnection` class needs it.
```csharp using StackExchange.Redis;
-```
-
-The connection to the Azure Cache for Redis is managed by the `ConnectionMultiplexer` class. This class should be shared and reused throughout your client application. Do not create a new connection for each operation.
-
-In *Program.cs*, add the following members to the `Program` class of your console application:
-
-```csharp
-private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
-public static ConnectionMultiplexer Connection
-{
- get
- {
- return lazyConnection.Value;
- }
-}
-
-private static Lazy<ConnectionMultiplexer> CreateConnection()
-{
- return new Lazy<ConnectionMultiplexer>(() =>
- {
- string cacheConnection = Configuration[SecretName];
- return ConnectionMultiplexer.Connect(cacheConnection);
- });
-}
```
+The `RedisConnection` code ensures that there's always a healthy connection to the cache by managing the `ConnectionMultiplexer` instance from `StackExchange.Redis`. The `RedisConnection` class re-creates the connection when a connection is lost and can't reconnect automatically.
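The full `RedisConnection` class ships with the sample repo linked below. As a rough illustration of the pattern (hold a single `ConnectionMultiplexer`, funnel every call through a retry helper, and rebuild the multiplexer when it can't recover), a much-simplified sketch might look like the following. This is an approximation, not the sample's actual code:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

// Much-simplified sketch of the wrapper pattern; the RedisConnection class in the
// sample repo is more thorough (reconnect throttling, retry limits, and so on).
public class RedisConnection
{
    private readonly string _connectionString;
    private ConnectionMultiplexer _connection;
    private IDatabase _database;
    private readonly SemaphoreSlim _reconnectLock = new SemaphoreSlim(1, 1);

    private RedisConnection(string connectionString) => _connectionString = connectionString;

    public static async Task<RedisConnection> InitializeAsync(string connectionString)
    {
        var redisConnection = new RedisConnection(connectionString);
        await redisConnection.ForceReconnectAsync();
        return redisConnection;
    }

    // Run a cache operation; if the multiplexer looks unhealthy, rebuild it and retry once.
    public async Task<T> BasicRetryAsync<T>(Func<IDatabase, Task<T>> operation)
    {
        try
        {
            return await operation(_database);
        }
        catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException || ex is ObjectDisposedException)
        {
            await ForceReconnectAsync();
            return await operation(_database);
        }
    }

    private async Task ForceReconnectAsync()
    {
        await _reconnectLock.WaitAsync();
        try
        {
            ConnectionMultiplexer oldConnection = _connection;
            _connection = await ConnectionMultiplexer.ConnectAsync(_connectionString);
            _database = _connection.GetDatabase();

            if (oldConnection != null)
            {
                await oldConnection.CloseAsync();   // discard the old, broken connection
            }
        }
        finally
        {
            _reconnectLock.Release();
        }
    }
}
```

With that shape, calls such as `BasicRetryAsync(async db => await db.StringGetAsync(key))` in the snippets below go through one shared multiplexer and survive transient connection drops.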
-This approach to sharing a `ConnectionMultiplexer` instance in your application uses a static property that returns a connected instance. The code provides a thread-safe way to initialize only a single connected `ConnectionMultiplexer` instance. `abortConnect` is set to false, which means that the call succeeds even if a connection to the Azure Cache for Redis is not established. One key feature of `ConnectionMultiplexer` is that it automatically restores connectivity to the cache once the network issue or other causes are resolved.
-
-The value of the *CacheConnection* secret is accessed using the Secret Manager configuration provider and used as the password parameter.
-
-## Handle RedisConnectionException and SocketException by reconnecting
-
-A recommended best practice when calling methods on `ConnectionMultiplexer` is to attempt to resolve `RedisConnectionException` and `SocketException` exceptions automatically by closing and reestablishing the connection.
-
-Add the following `using` statements to *Program.cs*:
+For more information, see [StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/) and the code in a [GitHub repo](https://github.com/StackExchange/StackExchange.Redis).
-```csharp
-using System.Net.Sockets;
-using System.Threading;
-```
-
-In *Program.cs*, add the following members to the `Program` class:
-
-```csharp
-private static IConfigurationRoot Configuration { get; set; }
-private static long _lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
-private static DateTimeOffset _firstErrorTime = DateTimeOffset.MinValue;
-private static DateTimeOffset _previousErrorTime = DateTimeOffset.MinValue;
-private static SemaphoreSlim _reconnectSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static SemaphoreSlim _initSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static ConnectionMultiplexer _connection;
-private static bool _didInitialize = false;
-// In general, let StackExchange.Redis handle most reconnects,
-// so limit the frequency of how often ForceReconnect() will
-// actually reconnect.
-public static TimeSpan ReconnectMinInterval => TimeSpan.FromSeconds(60);
-// If errors continue for longer than the below threshold, then the
-// multiplexer seems to not be reconnecting, so ForceReconnect() will
-// re-create the multiplexer.
-public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
-public static TimeSpan RestartConnectionTimeout => TimeSpan.FromSeconds(15);
-public static int RetryMaxAttempts => 5;
-
-public static ConnectionMultiplexer Connection { get { return _connection; } }
-private static async Task InitializeAsync()
-{
- if (_didInitialize)
- {
- throw new InvalidOperationException("Cannot initialize more than once.");
- }
- var builder = new ConfigurationBuilder()
- .AddUserSecrets<Program>();
- Configuration = builder.Build();
- _connection = await CreateConnectionAsync();
- _didInitialize = true;
-}
-// This method may return null if it fails to acquire the semaphore in time.
-// Use the return value to update the "connection" field
-private static async Task<ConnectionMultiplexer> CreateConnectionAsync()
-{
- if (_connection != null)
- {
- // If we already have a good connection, let's re-use it
- return _connection;
- }
- try
- {
- await _initSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // We failed to enter the semaphore in the given amount of time. Connection will either be null, or have a value that was created by another thread.
- return _connection;
- }
- // We entered the semaphore successfully.
- try
- {
- if (_connection != null)
- {
- // Another thread must have finished creating a new connection while we were waiting to enter the semaphore. Let's use it
- return _connection;
- }
- // Otherwise, we really need to create a new connection.
- string cacheConnection = Configuration["CacheConnection"].ToString();
- return await ConnectionMultiplexer.ConnectAsync(cacheConnection);
- }
- finally
- {
- _initSemaphore.Release();
- }
-}
-private static async Task CloseConnectionAsync(ConnectionMultiplexer oldConnection)
-{
- if (oldConnection == null)
- {
- return;
- }
- try
- {
- await oldConnection.CloseAsync();
- }
- catch (Exception)
- {
- // Ignore any errors from the oldConnection
- }
-}
-/// <summary>
-/// Force a new ConnectionMultiplexer to be created.
-/// NOTES:
-/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnectAsync().
-/// 2. Call ForceReconnectAsync() for RedisConnectionExceptions and RedisSocketExceptions. You can also call it for RedisTimeoutExceptions,
-/// but only if you're using generous ReconnectMinInterval and ReconnectErrorThreshold. Otherwise, establishing new connections can cause
-/// a cascade failure on a server that's timing out because it's already overloaded.
-/// 3. The code will:
-/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
-/// b. not reconnect more frequently than configured in "ReconnectMinInterval"
-/// </summary>
-public static async Task ForceReconnectAsync()
-{
- var utcNow = DateTimeOffset.UtcNow;
- long previousTicks = Interlocked.Read(ref _lastReconnectTicks);
- var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
- TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- // If multiple threads call ForceReconnectAsync at the same time, we only want to honor one of them.
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return;
- }
- try
- {
- await _reconnectSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // If we fail to enter the semaphore, then it is possible that another thread has already done so.
- // ForceReconnectAsync() can be retried while connectivity problems persist.
- return;
- }
- try
- {
- utcNow = DateTimeOffset.UtcNow;
- elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- if (_firstErrorTime == DateTimeOffset.MinValue)
- {
- // We haven't seen an error since last reconnect, so set initial values.
- _firstErrorTime = utcNow;
- _previousErrorTime = utcNow;
- return;
- }
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return; // Some other thread made it through the check and the lock, so nothing to do.
- }
- TimeSpan elapsedSinceFirstError = utcNow - _firstErrorTime;
- TimeSpan elapsedSinceMostRecentError = utcNow - _previousErrorTime;
- bool shouldReconnect =
- elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
- && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
- // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
- _previousErrorTime = utcNow;
- if (!shouldReconnect)
- {
- return;
- }
- _firstErrorTime = DateTimeOffset.MinValue;
- _previousErrorTime = DateTimeOffset.MinValue;
- ConnectionMultiplexer oldConnection = _connection;
- await CloseConnectionAsync(oldConnection);
- _connection = null;
- _connection = await CreateConnectionAsync();
- Interlocked.Exchange(ref _lastReconnectTicks, utcNow.UtcTicks);
- }
- finally
- {
- _reconnectSemaphore.Release();
- }
-}
-// In real applications, consider using a framework such as
-// Polly to make it easier to customize the retry approach.
-private static async Task<T> BasicRetryAsync<T>(Func<T> func)
-{
- int reconnectRetry = 0;
- int disposedRetry = 0;
- while (true)
- {
- try
- {
- return func();
- }
- catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
- {
- reconnectRetry++;
- if (reconnectRetry > RetryMaxAttempts)
- throw;
- await ForceReconnectAsync();
- }
- catch (ObjectDisposedException)
- {
- disposedRetry++;
- if (disposedRetry > RetryMaxAttempts)
- throw;
- }
- }
-}
-public static Task<IDatabase> GetDatabaseAsync()
-{
- return BasicRetryAsync(() => Connection.GetDatabase());
-}
-public static Task<System.Net.EndPoint[]> GetEndPointsAsync()
-{
- return BasicRetryAsync(() => Connection.GetEndPoints());
-}
-public static Task<IServer> GetServerAsync(string host, int port)
-{
- return BasicRetryAsync(() => Connection.GetServer(host, port));
-}
-```
+<!-- :::code language="csharp" source="~/samples-cache/quickstart/dotnet-core/RedisConnection.cs"::: -->
## Executing cache commands
-In *Program.cs*, add the following code for the `Main` procedure of the `Program` class for your console application:
-
+In `Program.cs`, you can see the following code for the `RunRedisCommandsAsync` method in the `Program` class for the console application:
```csharp
-static void Main(string[] args)
-{
- InitializeConfiguration();
-
- IDatabase cache = GetDatabase();
-
- // Perform cache operations using the cache object...
-
- // Simple PING command
- string cacheCommand = "PING";
- Console.WriteLine("\nCache command : " + cacheCommand);
- Console.WriteLine("Cache response : " + cache.Execute(cacheCommand).ToString());
-
- // Simple get and put of integral data types into the cache
- cacheCommand = "GET Message";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringGet()");
- Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString());
-
- cacheCommand = "SET Message \"Hello! The cache is working from a .NET Core console app!\"";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringSet()");
- Console.WriteLine("Cache response : " + cache.StringSet("Message", "Hello! The cache is working from a .NET Core console app!").ToString());
-
- // Demonstrate "SET Message" executed as expected...
- cacheCommand = "GET Message";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringGet()");
- Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString());
-
- // Get the client list, useful to see if connection list is growing...
- // Note that this requires allowAdmin=true in the connection string
- cacheCommand = "CLIENT LIST";
- Console.WriteLine("\nCache command : " + cacheCommand);
- var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
- IServer server = GetServer(endpoint.Host, endpoint.Port);
- ClientInfo[] clients = server.ClientList();
-
- Console.WriteLine("Cache response :");
- foreach (ClientInfo client in clients)
+private static async Task RunRedisCommandsAsync(string prefix)
{
- Console.WriteLine(client.Raw);
- }
+ // Simple PING command
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: PING");
+ RedisResult pingResult = await _redisConnection.BasicRetryAsync(async (db) => await db.ExecuteAsync("PING"));
+ Console.WriteLine($"{prefix}: Cache response: {pingResult}");
- CloseConnection(lazyConnection);
-}
-```
+ // Simple get and put of integral data types into the cache
+ string key = "Message";
+ string value = "Hello! The cache is working from a .NET console app!";
-Save *Program.cs*.
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: GET {key} via StringGetAsync()");
+ RedisValue getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync(key));
+ Console.WriteLine($"{prefix}: Cache response: {getMessageResult}");
-Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: SET {key} \"{value}\" via StringSetAsync()");
+ bool stringSetResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringSetAsync(key, value));
+ Console.WriteLine($"{prefix}: Cache response: {stringSetResult}");
-Cache items can be stored and retrieved by using the `StringSet` and `StringGet` methods.
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: GET {key} via StringGetAsync()");
+ getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync(key));
+ Console.WriteLine($"{prefix}: Cache response: {getMessageResult}");
-Redis stores most data as Redis strings, but these strings can contain many types of data, including serialized binary data, which can be used when storing .NET objects in the cache.
+ // Store serialized object to cache
+ Employee e007 = new Employee("007", "Davide Columbo", 100);
+ stringSetResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringSetAsync("e007", JsonSerializer.Serialize(e007)));
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache response from storing serialized Employee object: {stringSetResult}");
-Execute the following command in your command window to build the app:
+ // Retrieve serialized object from cache
+ getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync("e007"));
+ Employee e007FromCache = JsonSerializer.Deserialize<Employee>(getMessageResult);
+ Console.WriteLine($"{prefix}: Deserialized Employee .NET object:{Environment.NewLine}");
+ Console.WriteLine($"{prefix}: Employee.Name : {e007FromCache.Name}");
+ Console.WriteLine($"{prefix}: Employee.Id : {e007FromCache.Id}");
+ Console.WriteLine($"{prefix}: Employee.Age : {e007FromCache.Age}{Environment.NewLine}");
+ }
-```
-dotnet build
```
-Then run the app with the following command:
-
-```
-dotnet run
-```
+Cache items can be stored and retrieved by using the `StringSetAsync` and `StringGetAsync` methods.
-In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+In the example, you can see the `Message` key is set to a value, and then the app updated that cached value. The app also executed the `PING` command.
-![Console app partial](./media/cache-dotnet-core-quickstart/cache-console-app-partial.png)
+### Work with .NET objects in the cache
+The Redis server stores most data as strings, but these strings can contain many types of data, including serialized binary data, which can be used when storing .NET objects in the cache.
-## Work with .NET objects in the cache
+Azure Cache for Redis can cache both .NET objects and primitive data types, but before a .NET object can be cached it must be serialized.
-Azure Cache for Redis can cache both .NET objects and primitive data types, but before a .NET object can be cached it must be serialized. This .NET object serialization is the responsibility of the application developer, and gives the developer flexibility in the choice of the serializer.
-
-One simple way to serialize objects is to use the `JsonConvert` serialization methods in [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) and serialize to and from JSON. In this section, you will add a .NET object to the cache.
-
-Execute the following command to add the *Newtonsoft.json* package to the app:
-
-```
-dotnet add package Newtonsoft.json
-```
-
-Add the following `using` statement to the top of *Program.cs*:
-
-```csharp
-using Newtonsoft.Json;
-```
+This .NET object serialization is the responsibility of the application developer, and gives the developer flexibility in the choice of the serializer.
-Add the following `Employee` class definition to *Program.cs*:
+The following `Employee` class was defined in *Program.cs* so that the sample could also show how to get and set a serialized object:
```csharp class Employee
-{
- public string Id { get; set; }
- public string Name { get; set; }
- public int Age { get; set; }
-
- public Employee(string employeeId, string name, int age)
{
- Id = employeeId;
- Name = name;
- Age = age;
+ public string Id { get; set; }
+ public string Name { get; set; }
+ public int Age { get; set; }
+
+ public Employee(string id, string name, int age)
+ {
+ Id = id;
+ Name = name;
+ Age = age;
+ }
}
-}
```
-At the bottom of `Main()` procedure in *Program.cs*, and before the call to `CloseConnection()`, add the following lines of code to cache and retrieve a serialized .NET object:
+## Run the sample
-```csharp
- // Store .NET object to cache
- Employee e007 = new Employee("007", "Davide Columbo", 100);
- Console.WriteLine("Cache response from storing Employee .NET object : " +
- cache.StringSet("e007", JsonConvert.SerializeObject(e007)));
-
- // Retrieve .NET object from cache
- Employee e007FromCache = JsonConvert.DeserializeObject<Employee>(cache.StringGet("e007"));
- Console.WriteLine("Deserialized Employee .NET object :\n");
- Console.WriteLine("\tEmployee.Name : " + e007FromCache.Name);
- Console.WriteLine("\tEmployee.Id : " + e007FromCache.Id);
- Console.WriteLine("\tEmployee.Age : " + e007FromCache.Age + "\n");
-```
+If you have opened any files, save them and build the app with the following command:
-Save *Program.cs* and rebuild the app with the following command:
-
-```
+```dos
dotnet build ``` Run the app with the following command to test serialization of .NET objects:
-```
+```dos
dotnet run ```
-![Console app completed](./media/cache-dotnet-core-quickstart/cache-console-app-complete.png)
- ## Clean up resources
-If you will be continuing to the next tutorial, you can keep the resources created in this quickstart and reuse them.
+If you continue to use this quickstart, you can keep the resources you created and reuse them.
-Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
> [!IMPORTANT] > Deleting a resource group is irreversible: the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group. >
+### To delete a resource group
-Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
+1. In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
-![Delete](./media/cache-dotnet-core-quickstart/cache-delete-resource-group.png)
+ :::image type="content" source="media/cache-dotnet-core-quickstart/cache-delete-resource-group.png" alt-text="Delete":::
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
+1. You'll be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted. --
-<a name="next-steps"></a>
- ## Next steps
-In this quickstart, you learned how to use Azure Cache for Redis from a .NET Core application. Continue to the next quickstart to use Azure Cache for Redis with an ASP.NET web app.
-
-> [!div class="nextstepaction"]
-> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
-
-Want to optimize and save on your cloud spending?
-
-> [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Connection resilience](cache-best-practices-connection.md)
+- [Best Practices Development](cache-best-practices-development.md)
azure-cache-for-redis Cache Dotnet How To Use Azure Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-dotnet-how-to-use-azure-redis-cache.md
ms.devlang: csharp Previously updated : 06/18/2020
-#Customer intent: As a .NET developer, new to Azure Cache for Redis, I want to create a new .NET app that uses Azure Cache for Redis.
Last updated : 03/25/2022+ # Quickstart: Use Azure Cache for Redis in .NET Framework
In this quickstart, you incorporate Azure Cache for Redis into a .NET Framework
## Skip to the code on GitHub
-If you want to skip straight to the code, see the [.NET Framework quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet) on GitHub.
+Clone the repo [https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/dotnet) on GitHub.
## Prerequisites
If you want to skip straight to the code, see the [.NET Framework quickstart](ht
- [.NET Framework 4 or higher](https://dotnet.microsoft.com/download/dotnet-framework), which is required by the StackExchange.Redis client. ## Create a cache+ [!INCLUDE [redis-cache-create](includes/redis-cache-create.md)] [!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)]
-Create a file on your computer named *CacheSecrets.config* and place it in a location where it won't be checked in with the source code of your sample application. For this quickstart, the *CacheSecrets.config* file is located here, *C:\AppSecrets\CacheSecrets.config*.
-
-Edit the *CacheSecrets.config* file and add the following contents:
-
-```xml
-<appSettings>
- <add key="CacheConnection" value="<host-name>,abortConnect=false,ssl=true,allowAdmin=true,password=<access-key>"/>
-</appSettings>
-```
-
-Replace `<host-name>` with your cache host name.
+1. Create a file on your computer named *CacheSecrets.config* and place it at *C:\AppSecrets\CacheSecrets.config*.
-Replace `<access-key>` with the primary key for your cache.
+1. Edit the *CacheSecrets.config* file and add the following contents:
+ ```xml
+ <appSettings>
+ <add key="CacheConnection" value="<host-name>,abortConnect=false,ssl=true,allowAdmin=true,password=<access-key>"/>
+ </appSettings>
+ ```
-## Create a console app
+1. Replace `<host-name>` with your cache host name.
-In Visual Studio, select **File** > **New** > **Project**.
+1. Replace `<access-key>` with the primary key for your cache.
-Select **Console App (.NET Framework)**, and **Next** to configure your app. Type a **Project name**, verify that **.NET Framework 4.6.1** or higher is selected, and then select **Create** to create a new console application.
-
-<a name="configure-the-cache-clients"></a>
+1. Save the file.
## Configure the cache client
-In this section, you will configure the console application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
-
-In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
-
-```powershell
-Install-Package StackExchange.Redis
-```
+In this section, you prepare the console application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
-Once the installation is completed, the *StackExchange.Redis* cache client is available to use with your project.
+1. In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
+ ```powershell
+ Install-Package StackExchange.Redis
+ ```
+
+1. Once the installation is completed, the *StackExchange.Redis* cache client is available to use with your project.
-## Connect to the cache
+## Reference the secrets file in App.config
-In Visual Studio, open your *App.config* file and update it to include an `appSettings` `file` attribute that references the *CacheSecrets.config* file.
+In Visual Studio, open your *App.config* file to verify it contains an `appSettings` `file` attribute that references the *CacheSecrets.config* file.
```xml <?xml version="1.0" encoding="utf-8" ?>
In Visual Studio, open your *App.config* file and update it to include an `appSe
</configuration> ```
-In Solution Explorer, right-click **References** and select **Add a reference**. Add a reference to the **System.Configuration** assembly.
+Never store credentials in source code. To keep this sample simple, we use an external secrets config file. A better approach would be to use [Azure Key Vault with certificates](/rest/api/keyvault/certificate-scenarios).
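For example, if the connection string were kept in Key Vault as a secret named *CacheConnection*, reading it might look like the following sketch. This isn't part of the quickstart; it assumes the *Azure.Security.KeyVault.Secrets* and *Azure.Identity* packages and an identity that has access to the vault:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultSketch
{
    // Hypothetical alternative, not part of this quickstart: read the cache
    // connection string from a Key Vault secret named "CacheConnection".
    static async Task<string> GetCacheConnectionAsync()
    {
        var client = new SecretClient(
            new Uri("https://<your-key-vault-name>.vault.azure.net/"),
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync("CacheConnection");
        return secret.Value;
    }
}
```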
-Add the following `using` statements to *Program.cs*:
+## Connect to the cache with RedisConnection
-```csharp
-using StackExchange.Redis;
-using System.Configuration;
-```
-
-The connection to the Azure Cache for Redis is managed by the `ConnectionMultiplexer` class. This class should be shared and reused throughout your client application. Do not create a new connection for each operation.
-
-Never store credentials in source code. To keep this sample simple, IΓÇÖm only using an external secrets config file. A better approach would be to use [Azure Key Vault with certificates](/rest/api/keyvault/certificate-scenarios).
-
-In *Program.cs*, add the following members to the `Program` class of your console application:
+The connection to your cache is managed by the `RedisConnection` class. The connection is first made in this statement from `Program.cs`:
```csharp
-private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
+ _redisConnection = await RedisConnection.InitializeAsync(connectionString: ConfigurationManager.AppSettings["CacheConnection"].ToString());
-public static ConnectionMultiplexer Connection
-{
- get
- {
- return lazyConnection.Value;
- }
-}
-private static Lazy<ConnectionMultiplexer> CreateConnection()
-{
- return new Lazy<ConnectionMultiplexer>(() =>
- {
- string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
- return ConnectionMultiplexer.Connect(cacheConnection);
- });
-}
```
-This approach to sharing a `ConnectionMultiplexer` instance in your application uses a static property that returns a connected instance. The code provides a thread-safe way to initialize only a single connected `ConnectionMultiplexer` instance. `abortConnect` is set to false, which means that the call succeeds even if a connection to the Azure Cache for Redis is not established. One key feature of `ConnectionMultiplexer` is that it automatically restores connectivity to the cache once the network issue or other causes are resolved.
- The value of the *CacheConnection* appSetting is used to reference the cache connection string from the Azure portal as the password parameter.
-## Handle RedisConnectionException and SocketException by reconnecting
-
-A recommended best practice when calling methods on `ConnectionMultiplexer` is to attempt to resolve `RedisConnectionException` and `SocketException` exceptions automatically by closing and reestablishing the connection.
-
-Add the following `using` statements to *Program.cs*:
+In `RedisConnection.cs`, you can see that the `StackExchange.Redis` namespace is imported with the `using` keyword. The `RedisConnection` class needs it.
```csharp
-using System.Net.Sockets;
-using System.Threading;
+using StackExchange.Redis;
```
-In *Program.cs*, add the following members to the `Program` class:
+The `RedisConnection` code ensures that there's always a healthy connection to the cache by managing the `ConnectionMultiplexer` instance from `StackExchange.Redis`. The `RedisConnection` class re-creates the connection when a connection is lost and can't reconnect automatically.
-```csharp
-private static long _lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
-private static DateTimeOffset _firstErrorTime = DateTimeOffset.MinValue;
-private static DateTimeOffset _previousErrorTime = DateTimeOffset.MinValue;
-private static SemaphoreSlim _reconnectSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static SemaphoreSlim _initSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static ConnectionMultiplexer _connection;
-private static bool _didInitialize = false;
-// In general, let StackExchange.Redis handle most reconnects,
-// so limit the frequency of how often ForceReconnect() will
-// actually reconnect.
-public static TimeSpan ReconnectMinInterval => TimeSpan.FromSeconds(60);
-// If errors continue for longer than the below threshold, then the
-// multiplexer seems to not be reconnecting, so ForceReconnect() will
-// re-create the multiplexer.
-public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
-public static TimeSpan RestartConnectionTimeout => TimeSpan.FromSeconds(15);
-public static int RetryMaxAttempts => 5;
-public static ConnectionMultiplexer Connection { get { return _connection; } }
-private static async Task InitializeAsync()
-{
- if (_didInitialize)
- {
- throw new InvalidOperationException("Cannot initialize more than once.");
- }
- _connection = await CreateConnectionAsync();
- _didInitialize = true;
-}
-// This method may return null if it fails to acquire the semaphore in time.
-// Use the return value to update the "connection" field
-private static async Task<ConnectionMultiplexer> CreateConnectionAsync()
-{
- if (_connection != null)
- {
- // If we already have a good connection, let's re-use it
- return _connection;
- }
- try
- {
- await _initSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // We failed to enter the semaphore in the given amount of time. Connection will either be null, or have a value that was created by another thread.
- return _connection;
- }
- // We entered the semaphore successfully.
- try
- {
- if (_connection != null)
- {
- // Another thread must have finished creating a new connection while we were waiting to enter the semaphore. Let's use it
- return _connection;
- }
- // Otherwise, we really need to create a new connection.
- string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
- return await ConnectionMultiplexer.ConnectAsync(cacheConnection);
- }
- finally
- {
- _initSemaphore.Release();
- }
-}
-private static async Task CloseConnectionAsync(ConnectionMultiplexer oldConnection)
-{
- if (oldConnection == null)
- {
- return;
- }
- try
- {
- await oldConnection.CloseAsync();
- }
- catch (Exception)
- {
- // Ignore any errors from the oldConnection
- }
-}
-/// <summary>
-/// Force a new ConnectionMultiplexer to be created.
-/// NOTES:
-/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnectAsync().
-/// 2. Call ForceReconnectAsync() for RedisConnectionExceptions and RedisSocketExceptions. You can also call it for RedisTimeoutExceptions,
-/// but only if you're using generous ReconnectMinInterval and ReconnectErrorThreshold. Otherwise, establishing new connections can cause
-/// a cascade failure on a server that's timing out because it's already overloaded.
-/// 3. The code will:
-/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
-/// b. not reconnect more frequently than configured in "ReconnectMinInterval"
-/// </summary>
-public static async Task ForceReconnectAsync()
-{
- var utcNow = DateTimeOffset.UtcNow;
- long previousTicks = Interlocked.Read(ref _lastReconnectTicks);
- var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
- TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- // If multiple threads call ForceReconnectAsync at the same time, we only want to honor one of them.
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return;
- }
- try
- {
- await _reconnectSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // If we fail to enter the semaphore, then it is possible that another thread has already done so.
- // ForceReconnectAsync() can be retried while connectivity problems persist.
- return;
- }
- try
- {
- utcNow = DateTimeOffset.UtcNow;
- elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- if (_firstErrorTime == DateTimeOffset.MinValue)
- {
- // We haven't seen an error since last reconnect, so set initial values.
- _firstErrorTime = utcNow;
- _previousErrorTime = utcNow;
- return;
- }
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return; // Some other thread made it through the check and the lock, so nothing to do.
- }
- TimeSpan elapsedSinceFirstError = utcNow - _firstErrorTime;
- TimeSpan elapsedSinceMostRecentError = utcNow - _previousErrorTime;
- bool shouldReconnect =
- elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
- && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
- // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
- _previousErrorTime = utcNow;
- if (!shouldReconnect)
- {
- return;
- }
- _firstErrorTime = DateTimeOffset.MinValue;
- _previousErrorTime = DateTimeOffset.MinValue;
- ConnectionMultiplexer oldConnection = _connection;
- await CloseConnectionAsync(oldConnection);
- _connection = null;
- _connection = await CreateConnectionAsync();
- Interlocked.Exchange(ref _lastReconnectTicks, utcNow.UtcTicks);
- }
- finally
- {
- _reconnectSemaphore.Release();
- }
-}
-// In real applications, consider using a framework such as
-// Polly to make it easier to customize the retry approach.
-private static async Task<T> BasicRetryAsync<T>(Func<T> func)
-{
- int reconnectRetry = 0;
- int disposedRetry = 0;
- while (true)
- {
- try
- {
- return func();
- }
- catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
- {
- reconnectRetry++;
- if (reconnectRetry > RetryMaxAttempts)
- throw;
- await ForceReconnectAsync();
- }
- catch (ObjectDisposedException)
- {
- disposedRetry++;
- if (disposedRetry > RetryMaxAttempts)
- throw;
- }
- }
-}
-public static Task<IDatabase> GetDatabaseAsync()
-{
- return BasicRetryAsync(() => Connection.GetDatabase());
-}
-public static Task<System.Net.EndPoint[]> GetEndPointsAsync()
-{
- return BasicRetryAsync(() => Connection.GetEndPoints());
-}
-public static Task<IServer> GetServerAsync(string host, int port)
-{
- return BasicRetryAsync(() => Connection.GetServer(host, port));
-}
-```
+For more information, see [StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/) and the code in a [GitHub repo](https://github.com/StackExchange/StackExchange.Redis).
+
+<!-- :::code language="csharp" source="~/samples-cache/quickstart/dotnet/Redistest/RedisConnection.cs"::: -->
## Executing cache commands
-Add the following code for the `Main` procedure of the `Program` class for your console application:
+In `Program.cs`, you can see the following code for the `RunRedisCommandsAsync` method in the `Program` class for the console application:
```csharp
-static void Main(string[] args)
-{
- IDatabase cache = GetDatabase();
-
- // Perform cache operations using the cache object...
-
- // Simple PING command
- string cacheCommand = "PING";
- Console.WriteLine("\nCache command : " + cacheCommand);
- Console.WriteLine("Cache response : " + cache.Execute(cacheCommand).ToString());
-
- // Simple get and put of integral data types into the cache
- cacheCommand = "GET Message";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringGet()");
- Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString());
-
- cacheCommand = "SET Message \"Hello! The cache is working from a .NET console app!\"";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringSet()");
- Console.WriteLine("Cache response : " + cache.StringSet("Message", "Hello! The cache is working from a .NET console app!").ToString());
-
- // Demonstrate "SET Message" executed as expected...
- cacheCommand = "GET Message";
- Console.WriteLine("\nCache command : " + cacheCommand + " or StringGet()");
- Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString());
-
- // Get the client list, useful to see if connection list is growing...
- // Note that this requires allowAdmin=true in the connection string
- cacheCommand = "CLIENT LIST";
- Console.WriteLine("\nCache command : " + cacheCommand);
- var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
- IServer server = GetServer(endpoint.Host, endpoint.Port);
- ClientInfo[] clients = server.ClientList();
-
- Console.WriteLine("Cache response :");
- foreach (ClientInfo client in clients)
+private static async Task RunRedisCommandsAsync(string prefix)
{
- Console.WriteLine(client.Raw);
- }
+ // Simple PING command
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: PING");
+ RedisResult pingResult = await _redisConnection.BasicRetryAsync(async (db) => await db.ExecuteAsync("PING"));
+ Console.WriteLine($"{prefix}: Cache response: {pingResult}");
- CloseConnection(lazyConnection);
-}
-```
+ // Simple get and put of integral data types into the cache
+ string key = "Message";
+ string value = "Hello! The cache is working from a .NET console app!";
-Azure Cache for Redis has a configurable number of databases (default of 16) that can be used to logically separate the data within an Azure Cache for Redis. The code connects to the default database, DB 0. For more information, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-) and [Default Redis server configuration](cache-configure.md#default-redis-server-configuration).
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: GET {key} via StringGetAsync()");
+ RedisValue getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync(key));
+ Console.WriteLine($"{prefix}: Cache response: {getMessageResult}");
-Cache items can be stored and retrieved by using the `StringSet` and `StringGet` methods.
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: SET {key} \"{value}\" via StringSetAsync()");
+ bool stringSetResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringSetAsync(key, value));
+ Console.WriteLine($"{prefix}: Cache response: {stringSetResult}");
-Redis stores most data as Redis strings, but these strings can contain many types of data, including serialized binary data, which can be used when storing .NET objects in the cache.
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache command: GET {key} via StringGetAsync()");
+ getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync(key));
+ Console.WriteLine($"{prefix}: Cache response: {getMessageResult}");
-Press **Ctrl+F5** to build and run the console app.
+ // Store serialized object to cache
+ Employee e007 = new Employee("007", "Davide Columbo", 100);
+ stringSetResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringSetAsync("e007", JsonSerializer.Serialize(e007)));
+ Console.WriteLine($"{Environment.NewLine}{prefix}: Cache response from storing serialized Employee object: {stringSetResult}");
-In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+ // Retrieve serialized object from cache
+ getMessageResult = await _redisConnection.BasicRetryAsync(async (db) => await db.StringGetAsync("e007"));
+ Employee e007FromCache = JsonSerializer.Deserialize<Employee>(getMessageResult);
+ Console.WriteLine($"{prefix}: Deserialized Employee .NET object:{Environment.NewLine}");
+ Console.WriteLine($"{prefix}: Employee.Name : {e007FromCache.Name}");
+ Console.WriteLine($"{prefix}: Employee.Id : {e007FromCache.Id}");
+ Console.WriteLine($"{prefix}: Employee.Age : {e007FromCache.Age}{Environment.NewLine}");
+ }
-![Console app partial](./media/cache-dotnet-how-to-use-azure-redis-cache/cache-console-app-partial.png)
+```
-## Work with .NET objects in the cache
+Cache items can be stored and retrieved by using the `StringSetAsync` and `StringGetAsync` methods.
-Azure Cache for Redis can cache both .NET objects and primitive data types, but before a .NET object can be cached it must be serialized. This .NET object serialization is the responsibility of the application developer, and gives the developer flexibility in the choice of the serializer.
+In the example, you can see the `Message` key is set to a value, and then the app updated that cached value. The app also executed the `PING` command.
-One simple way to serialize objects is to use the `JsonConvert` serialization methods in [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) and serialize to and from JSON. In this section, you will add a .NET object to the cache.
+### Work with .NET objects in the cache
-In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
+The Redis server stores most data as strings, but these strings can contain many types of data, including serialized binary data, which can be used when storing .NET objects in the cache.
-```powershell
-Install-Package Newtonsoft.Json
-```
+Azure Cache for Redis can cache both .NET objects and primitive data types, but before a .NET object can be cached it must be serialized.
-Add the following `using` statement to the top of *Program.cs*:
+This .NET object serialization is the responsibility of the application developer, and gives the developer flexibility in the choice of the serializer.
-```csharp
-using Newtonsoft.Json;
-```
+One simple way to serialize objects is to use the `JsonSerializer` methods in `System.Text.Json`; a minimal round trip is sketched after the `Employee` class below.
+
+Add the `System.Text.Json` package to your project in Visual Studio:
-Add the following `Employee` class definition to *Program.cs*:
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console**.
+
+1. Then, run the following command from the Package Manager Console window.
+ ```powershell
+    Install-Package System.Text.Json
+ ```
+
+<!-- :::image type="content" source="media/cache-dotnet-how-to-use-azure-redis-cache/cache-console-app-partial.png" alt-text="Console app partial"::: -->
+
+The following `Employee` class was defined in *Program.cs* so that the sample could also show how to get and set a serialized object:
```csharp class Employee
class Employee
} ```
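Assuming the `Employee` class above, a minimal `System.Text.Json` round trip (not taken verbatim from the sample) looks like this:

```csharp
using System;
using System.Text.Json;

static class SerializationSketch
{
    // Minimal sketch: serialize an Employee to JSON before caching it, then
    // deserialize the JSON read back from the cache.
    public static void RoundTrip()
    {
        Employee source = new Employee("007", "Davide Columbo", 100);

        string json = JsonSerializer.Serialize(source);
        Employee fromJson = JsonSerializer.Deserialize<Employee>(json);

        Console.WriteLine(fromJson.Name);   // Davide Columbo
    }
}
```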
-At the bottom of `Main()` procedure in *Program.cs*, and before the call to `CloseConnection()`, add the following lines of code to cache and retrieve a serialized .NET object:
-
-```csharp
- // Store .NET object to cache
- Employee e007 = new Employee("007", "Davide Columbo", 100);
- Console.WriteLine("Cache response from storing Employee .NET object : " +
- cache.StringSet("e007", JsonConvert.SerializeObject(e007)));
-
- // Retrieve .NET object from cache
- Employee e007FromCache = JsonConvert.DeserializeObject<Employee>(cache.StringGet("e007"));
- Console.WriteLine("Deserialized Employee .NET object :\n");
- Console.WriteLine("\tEmployee.Name : " + e007FromCache.Name);
- Console.WriteLine("\tEmployee.Id : " + e007FromCache.Id);
- Console.WriteLine("\tEmployee.Age : " + e007FromCache.Age + "\n");
-```
-
-Press **Ctrl+F5** to build and run the console app to test serialization of .NET objects.
+## Run the sample
-![Console app completed](./media/cache-dotnet-how-to-use-azure-redis-cache/cache-console-app-complete.png)
+Press **Ctrl+F5** to build and run the console app to test serialization of .NET objects.
## Clean up resources
-If you will be continuing to the next tutorial, you can keep the resources created in this quickstart and reuse them.
+If you continue to use this quickstart, you can keep the resources created and reuse them.
-Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
> [!IMPORTANT] > Deleting a resource group is irreversible: the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
Sign in to the [Azure portal](https://portal.azure.com) and select **Resource gr
In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
-![Delete](./media/cache-dotnet-how-to-use-azure-redis-cache/cache-delete-resource-group.png)
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
+You are asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted. --
-<a name="next-steps"></a>
- ## Next steps
-In this quickstart, you learned how to use Azure Cache for Redis from a .NET application. Continue to the next quickstart to use Azure Cache for Redis with an ASP.NET web app.
-
-> [!div class="nextstepaction"]
-> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
-
-Want to optimize and save on your cloud spending?
-
-> [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Connection resilience](cache-best-practices-connection.md)
+- [Best Practices Development](cache-best-practices-development.md)
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
Previously updated : 02/08/2021 Last updated : 02/28/2022 # Configure Redis clustering for a Premium Azure Cache for Redis instance
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
:::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
-2. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
+1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
:::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
-3. On the **New Redis Cache** page, configure the settings for your new premium cache.
+1. On the **New Redis Cache** page, configure the settings for your new premium cache.
| Setting | Suggested value | Description | | | - | -- |
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
| **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | | **Cache type** | Drop-down and select a premium cache to configure premium features. For details, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-4. Select the **Networking** tab or select the **Networking** button at the bottom of the page.
+1. Select the **Networking** tab or select the **Networking** button at the bottom of the page.
-5. In the **Networking** tab, select your connectivity method. For premium cache instances, you can connect either publicly, via Public IP addresses or service endpoints, or privately, using a private endpoint.
+1. In the **Networking** tab, select your connectivity method. For premium cache instances, you can connect either publicly, via Public IP addresses or service endpoints, or privately, using a private endpoint.
-6. Select the **Next: Advanced** tab or select the **Next: Advanced** button on the bottom of the page.
+1. Select the **Next: Advanced** tab or select the **Next: Advanced** button on the bottom of the page.
-7. In the **Advanced** tab for a premium cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**.
+1. In the **Advanced** tab for a premium cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**.
:::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering.png" alt-text="Clustering toggle.":::
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#enable-cache-diagnostics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
-8. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
+1. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
-9. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource.
+1. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource.
-10. Select **Review + create**. You're taken to the Review + create tab where Azure validates your configuration.
+1. Select **Review + create**. You're taken to the Review + create tab where Azure validates your configuration.
-11. After the green Validation passed message appears, select **Create**.
+1. After the green Validation passed message appears, select **Create**.
It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
It takes a while for the cache to create. You can monitor progress on the Azure
For sample code on working with clustering with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
-<a name="cluster-size"></a>
- ## Change the cluster size on a running premium cache To change the cluster size on a running premium cache with clustering enabled, select **Cluster Size** from the **Resource menu**.
Many clients support Redis clustering but not all. Check the documentation for t
The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' and 'CROSSSLOT'. When you attempt to use a client that doesn't support clustering with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or your application can just break if you're doing cross-slot multi-key requests (see the hash-tag sketch below). > [!NOTE]
-> If you're using StackExchange.Redis as your client, ensure you're using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#move-exceptions).
+> If you're using StackExchange.Redis as your client, ensure you're using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do).
> ### How do I connect to my cache when clustering is enabled?
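As an illustration that isn't part of this article: with StackExchange.Redis you connect to a clustered cache the same way as to a non-clustered one, and Redis hash tags can keep related keys on the same slot so multi-key commands don't fail with CROSSSLOT errors. A minimal sketch, with placeholders for your cache name and access key:

```csharp
using StackExchange.Redis;

class ClusterHashTagSketch
{
    static void Main()
    {
        // Illustration only: a clustered cache is reached through the same endpoint;
        // StackExchange.Redis routes each key to the right shard automatically.
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net,ssl=true,password=<access-key>");
        IDatabase db = connection.GetDatabase();

        // Both keys share the {user:1001} hash tag, so they hash to the same slot.
        // That keeps multi-key commands such as KeyDelete from raising CROSSSLOT errors.
        db.StringSet("{user:1001}:name", "Ada");
        db.StringSet("{user:1001}:email", "ada@example.com");
        db.KeyDelete(new RedisKey[] { "{user:1001}:name", "{user:1001}:email" });
    }
}
```

Whether you need hash tags depends on your data model; single-key operations work on a clustered cache without any changes.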
Clustering is only available for premium caches.
* **Redis Output Cache provider** - no changes required. * **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details).
-<a name="move-exceptions"></a>
- ### I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?
-If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-clients).
+If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-client).
## Next steps
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
The following list contains answers to commonly asked questions about Azure Cache for Redis scaling.
- You can't scale from a **Premium** cache down to a **Basic** or **Standard** pricing tier. - You can scale from one **Premium** cache pricing tier to another. - You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation.-- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#cluster-size). If your cache was created without clustering enabled, you can configure clustering at a later time.
+- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#set-up-clustering). If your cache was created without clustering enabled, you can configure clustering at a later time.
For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
azure-cache-for-redis Cache Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-java-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Java'
-description: In this quickstart, you will create a new Java app that uses Azure Cache for Redis
+description: In this quickstart, you'll create a new Java app that uses Azure Cache for Redis
Last updated 05/22/2020
ms.devlang: java
-#Customer intent: As a Java developer, new to Azure Cache for Redis, I want to create a new Java app that uses Azure Cache for Redis.
+ # Quickstart: Use Azure Cache for Redis in Java
-In this quickstart, you incorporate Azure Cache for Redis into a Java app using the [Jedis](https://github.com/xetorthio/jedis) Redis client to have access to a secure, dedicated cache that is accessible from any application within Azure.
+In this quickstart, you incorporate Azure Cache for Redis into a Java app using the [Jedis](https://github.com/xetorthio/jedis) Redis client. Your cache is a secure, dedicated cache that is accessible from any application within Azure.
## Skip to the code on GitHub
-If you want to skip straight to the code, see the [Java quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/java) on GitHub.
+Clone the repo [Java quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/java) on GitHub.
## Prerequisites
If you want to skip straight to the code, see the [Java quickstart](https://gith
## Setting up the working environment
-Depending on your operating system, add environment variables for your **Host name** and **Primary access key**. Open a command prompt, or a terminal window, and set up the following values:
+Depending on your operating system, add environment variables for your **Host name** and **Primary access key** that you noted above. Open a command prompt, or a terminal window, and set up the following values:
-```CMD
+```dos
set REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net set REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY> ```
Replace the placeholders with the following values:
- `<YOUR_HOST_NAME>`: The DNS host name, obtained from the *Properties* section of your Azure Cache for Redis resource in the Azure portal. - `<YOUR_PRIMARY_ACCESS_KEY>`: The primary access key, obtained from the *Access keys* section of your Azure Cache for Redis resource in the Azure portal.
-## Create a new Java app
-
-Using Maven, generate a new quickstart app:
+## Understanding the Java sample
-```CMD
-mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.3 -DgroupId=example.demo -DartifactId=redistest -Dversion=1.0
-```
+In this sample, you use Maven to run the quickstart app.
-Change to the new *redistest* project directory.
+1. Change to the new *redistest* project directory.
-Open the *pom.xml* file and add a dependency for [Jedis](https://github.com/xetorthio/jedis):
+1. Open the *pom.xml* file. In the file, you'll see a dependency for [Jedis](https://github.com/xetorthio/jedis):
-```xml
+ ```xml
<dependency>
- <groupId>redis.clients</groupId>
- <artifactId>jedis</artifactId>
- <version>3.2.0</version>
- <type>jar</type>
- <scope>compile</scope>
+ <groupId>redis.clients</groupId>
+ <artifactId>jedis</artifactId>
+ <version>4.1.0</version>
+ <type>jar</type>
+ <scope>compile</scope>
</dependency>
-```
-
-Save the *pom.xml* file.
-
-Open *App.java* and replace the code with the following code:
-
-```java
-package example.demo;
-
-import redis.clients.jedis.Jedis;
-import redis.clients.jedis.JedisShardInfo;
-
-/**
- * Redis test
- *
- */
-public class App
-{
- public static void main( String[] args )
+ ```
+
+1. Close the *pom.xml* file.
+
+1. Open *App.java* and review the following code:
+
+ ```java
+ package example.demo;
+
+ import redis.clients.jedis.DefaultJedisClientConfig;
+ import redis.clients.jedis.Jedis;
+
+ /**
+ * Redis test
+ *
+ */
+ public class App
    {
- boolean useSsl = true;
- String cacheHostname = System.getenv("REDISCACHEHOSTNAME");
- String cachekey = System.getenv("REDISCACHEKEY");
-
- // Connect to the Azure Cache for Redis over the TLS/SSL port using the key.
- JedisShardInfo shardInfo = new JedisShardInfo(cacheHostname, 6380, useSsl);
- shardInfo.setPassword(cachekey); /* Use your access key. */
- Jedis jedis = new Jedis(shardInfo);
-
- // Perform cache operations using the cache connection object...
-
- // Simple PING command
- System.out.println( "\nCache Command : Ping" );
- System.out.println( "Cache Response : " + jedis.ping());
-
- // Simple get and put of integral data types into the cache
- System.out.println( "\nCache Command : GET Message" );
- System.out.println( "Cache Response : " + jedis.get("Message"));
-
- System.out.println( "\nCache Command : SET Message" );
- System.out.println( "Cache Response : " + jedis.set("Message", "Hello! The cache is working from Java!"));
-
- // Demonstrate "SET Message" executed as expected...
- System.out.println( "\nCache Command : GET Message" );
- System.out.println( "Cache Response : " + jedis.get("Message"));
-
- // Get the client list, useful to see if connection list is growing...
- System.out.println( "\nCache Command : CLIENT LIST" );
- System.out.println( "Cache Response : " + jedis.clientList());
-
- jedis.close();
+ public static void main( String[] args )
+ {
+
+ boolean useSsl = true;
+ String cacheHostname = System.getenv("REDISCACHEHOSTNAME");
+ String cachekey = System.getenv("REDISCACHEKEY");
+
+ // Connect to the Azure Cache for Redis over the TLS/SSL port using the key.
+ Jedis jedis = new Jedis(cacheHostname, 6380, DefaultJedisClientConfig.builder()
+ .password(cachekey)
+ .ssl(useSsl)
+ .build());
+
+ // Perform cache operations using the cache connection object...
+
+ // Simple PING command
+ System.out.println( "\nCache Command : Ping" );
+ System.out.println( "Cache Response : " + jedis.ping());
+
+ // Simple get and put of integral data types into the cache
+ System.out.println( "\nCache Command : GET Message" );
+ System.out.println( "Cache Response : " + jedis.get("Message"));
+
+ System.out.println( "\nCache Command : SET Message" );
+ System.out.println( "Cache Response : " + jedis.set("Message", "Hello! The cache is working from Java!"));
+
+ // Demonstrate "SET Message" executed as expected...
+ System.out.println( "\nCache Command : GET Message" );
+ System.out.println( "Cache Response : " + jedis.get("Message"));
+
+ // Get the client list, useful to see if connection list is growing...
+ System.out.println( "\nCache Command : CLIENT LIST" );
+ System.out.println( "Cache Response : " + jedis.clientList());
+
+ jedis.close();
+ }
}
-}
-```
+ ```
-This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed.
+ This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed.
-Save *App.java*.
+1. Close *App.java*.
## Build and run the app
-Execute the following Maven command to build and run the app:
+1. First, if you haven't already, you must set the environment variables as noted above.
-```CMD
-mvn compile
-mvn exec:java -D exec.mainClass=example.demo.App
-```
+ ```dos
+ set REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net
+ set REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY>
+ ```
+
+1. Execute the following Maven command to build and run the app:
+
+ ```dos
+ mvn compile
+ mvn exec:java -D exec.mainClass=example.demo.App
+ ```
-In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+In the example below, you see the `Message` key previously had a cached value. The value was updated to a new value using `jedis.set`. The app also executed the `PING` and `CLIENT LIST` commands.
-![Azure Cache for Redis app completed](./media/cache-java-get-started/azure-cache-redis-complete.png)
## Clean up resources
-If you will be continuing to the next tutorial, you can keep the resources created in this quickstart and reuse them.
+If you continue to use the quickstart code, you can keep the resources created in this quickstart and reuse them.
-Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
> [!IMPORTANT] > Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
Otherwise, if you are finished with the quickstart sample application, you can d
1. In the **Filter by name** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
- ![Azure resource group deleted](./media/cache-java-get-started/azure-cache-redis-delete-resource-group.png)
+ :::image type="content" source="./media/cache-java-get-started/azure-cache-redis-delete-resource-group.png" alt-text="Azure resource group deleted":::
-1. You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
+1. You'll be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
After a few moments, the resource group and all of its contained resources are d
In this quickstart, you learned how to use Azure Cache for Redis from a Java application. Continue to the next quickstart to use Azure Cache for Redis with an ASP.NET web app.
-> [!div class="nextstepaction"]
-> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
+- [Development](cache-best-practices-development.md)
+- [Connection resilience](cache-best-practices-connection.md)
azure-cache-for-redis Cache Web App Aspnet Core Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-aspnet-core-howto.md
ms.devlang: csharp Previously updated : 03/31/2021
-#Customer intent: As an ASP.NET Core developer, new to Azure Cache for Redis, I want to create a new ASP.NET Core web app that uses Azure Cache for Redis.
Last updated : 03/25/2022+
-# Quickstart: Use Azure Cache for Redis with an ASP.NET Core web app
+
+# Quickstart: Use Azure Cache for Redis with an ASP.NET Core web app
In this quickstart, you incorporate Azure Cache for Redis into an ASP.NET Core web application that connects to Azure Cache for Redis to store and retrieve data from the cache. ## Skip to the code on GitHub
-If you want to skip straight to the code, see the [ASP.NET Core quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet-core) on GitHub.
+Clone the repo [https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet-core](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet-core) on GitHub.
## Prerequisites
If you want to skip straight to the code, see the [ASP.NET Core quickstart](http
[!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)]
-Make a note of the **HOST NAME** and the **Primary** access key. You will use these values later to construct the *CacheConnection* secret.
-
-## Create an ASP.NET Core web app
-
-Open a new command window and execute the following command to create a new ASP.NET Core Web App (Model-View-Controller):
-
-```dotnetcli
-dotnet new mvc -o ContosoTeamStats
-```
-
-In your command window, change to the new *ContosoTeamStats* project directory.
-
+Make a note of the **HOST NAME** and the **Primary** access key. You use these values later to construct the *CacheConnection* secret.
-Execute the following command to add the *Microsoft.Extensions.Configuration.UserSecrets* package to the project:
+## Add a local secret for the connection string
-```dotnetcli
-dotnet add package Microsoft.Extensions.Configuration.UserSecrets
-```
-
-Execute the following command to restore your packages:
-
-```dotnetcli
-dotnet restore
-```
+In your command window, execute the following command to store a new secret named *CacheConnection*, after replacing the placeholders, including angle brackets, for your cache name and primary access key:
-In your command window, execute the following command to store a new secret named *CacheConnection*, after replacing the placeholders (including angle brackets) for your cache name and primary access key:
-
-```dotnetcli
+```dos
dotnet user-secrets set CacheConnection "<cache name>.redis.cache.windows.net,abortConnect=false,ssl=true,allowAdmin=true,password=<primary-access-key>" ```
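In the Development environment, Secret Manager values are included in the app's configuration automatically, so the secret can be read through `IConfiguration`. The following is a minimal sketch of that lookup (the controller and method names are illustrative, not the sample's exact code):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

// Illustrative sketch only: reads the CacheConnection secret through the app's
// configuration. In Development, user secrets are part of IConfiguration by default.
public class CacheSettingsController : Controller
{
    private readonly IConfiguration _configuration;

    public CacheSettingsController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    private string GetCacheConnectionString()
    {
        // "CacheConnection" matches the secret name set with `dotnet user-secrets set`.
        return _configuration["CacheConnection"];
    }
}
```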
-## Configure the cache client
-
-In this section, you will configure the application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
+## Connect to the cache with RedisConnection
-In your command window, execute the following command in the *ContosoTeamStats* project directory:
+The connection to your cache is managed by the `RedisConnection` class. The connection is made in this statement in `HomeController.cs` in the *Controllers* folder:
-```dotnetcli
-dotnet add package StackExchange.Redis
+```csharp
+_redisConnection = await _redisConnectionFactory;
```
-Once the installation is completed, the *StackExchange.Redis* cache client is available to use with your project.
-
-## Update the HomeController and Layout
-
-Add the following `using` statements to *Controllers\HomeController.cs*:
+In `RedisConnection.cs`, you see the `StackExchange.Redis` namespace has been added to the code. This is needed for the `RedisConnection` class.
```csharp
-using System.Net.Sockets;
-using System.Text;
-using System.Threading;
-
-using Microsoft.Extensions.Configuration;
using StackExchange.Redis;
```
-Replace:
+The `RedisConnection` code ensures that there is always a healthy connection to the cache by managing the `ConnectionMultiplexer` instance from `StackExchange.Redis`. The `RedisConnection` class recreates the connection when a connection is lost and can't reconnect automatically.
-```csharp
-private readonly ILogger<HomeController> _logger;
+For more information, see [StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/) and the code in a [GitHub repo](https://github.com/StackExchange/StackExchange.Redis).
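The core of that behavior is a retry loop that recreates the `ConnectionMultiplexer` when StackExchange.Redis can't recover on its own. The class below is a simplified, hedged sketch of that idea; it is not the sample's `RedisConnection` class, which adds throttling and thread safety around reconnects.

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;
using StackExchange.Redis;

// Simplified sketch of the reconnect-and-retry idea behind RedisConnection.
// The real sample also throttles reconnects and guards them with semaphores.
public class SimpleRedisConnection
{
    private readonly string _connectionString;
    private ConnectionMultiplexer _connection;
    private const int RetryMaxAttempts = 5;

    public SimpleRedisConnection(string connectionString) =>
        _connectionString = connectionString;

    private async Task<ConnectionMultiplexer> GetConnectionAsync()
    {
        if (_connection == null || !_connection.IsConnected)
        {
            _connection = await ConnectionMultiplexer.ConnectAsync(_connectionString);
        }
        return _connection;
    }

    // Retries a cache operation, recreating the connection when it's lost and
    // StackExchange.Redis can't reconnect automatically.
    public async Task<T> BasicRetryAsync<T>(Func<IDatabase, T> operation)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                IDatabase db = (await GetConnectionAsync()).GetDatabase();
                return operation(db);
            }
            catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
            {
                if (attempt >= RetryMaxAttempts) throw;
                _connection?.Dispose();   // drop the broken multiplexer so it gets recreated
                _connection = null;
            }
        }
    }
}
```

A caller then wraps each cache operation, for example `await conn.BasicRetryAsync(db => db.StringGet("Message"))`, rather than holding on to a raw `IDatabase` reference.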
-public HomeController(ILogger<HomeController> logger)
-{
- _logger = logger;
-}
-```
+<!-- :::code language="csharp" source="~/samples-cache/quickstart/aspnet-core/ContosoTeamStats/RedisConnection.cs"::: -->
-with:
+## Layout views in the sample
-```csharp
-public ActionResult RedisCache()
-{
- ViewBag.Message = "A simple example with Azure Cache for Redis on ASP.NET Core.";
- IDatabase cache = GetDatabase();
- // Perform cache operations using the cache object...
- // Simple PING command
- ViewBag.command1 = "PING";
- ViewBag.command1Result = cache.Execute(ViewBag.command1).ToString();
- // Simple get and put of integral data types into the cache
- ViewBag.command2 = "GET Message";
- ViewBag.command2Result = cache.StringGet("Message").ToString();
- ViewBag.command3 = "SET Message \"Hello! The cache is working from ASP.NET Core!\"";
- ViewBag.command3Result = cache.StringSet("Message", "Hello! The cache is working from ASP.NET Core!").ToString();
- // Demonstrate "SET Message" executed as expected...
- ViewBag.command4 = "GET Message";
- ViewBag.command4Result = cache.StringGet("Message").ToString();
- // Get the client list, useful to see if connection list is growing...
- // Note that this requires allowAdmin=true in the connection string
- ViewBag.command5 = "CLIENT LIST";
- StringBuilder sb = new StringBuilder();
-
- var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
- IServer server = GetServer(endpoint.Host, endpoint.Port);
- ClientInfo[] clients = server.ClientList();
- sb.AppendLine("Cache response :");
- foreach (ClientInfo client in clients)
- {
- sb.AppendLine(client.Raw);
- }
- ViewBag.command5Result = sb.ToString();
- return View();
-}
-
-private const string SecretName = "CacheConnection";
-private static long _lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
-private static DateTimeOffset _firstErrorTime = DateTimeOffset.MinValue;
-private static DateTimeOffset _previousErrorTime = DateTimeOffset.MinValue;
-private static SemaphoreSlim _reconnectSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static SemaphoreSlim _initSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-private static ConnectionMultiplexer _connection;
-private static bool _didInitialize = false;
-// In general, let StackExchange.Redis handle most reconnects,
-// so limit the frequency of how often ForceReconnect() will
-// actually reconnect.
-public static TimeSpan ReconnectMinInterval => TimeSpan.FromSeconds(60);
-// If errors continue for longer than the below threshold, then the
-// multiplexer seems to not be reconnecting, so ForceReconnect() will
-// re-create the multiplexer.
-public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
-public static TimeSpan RestartConnectionTimeout => TimeSpan.FromSeconds(15);
-public static int RetryMaxAttempts => 5;
-public static ConnectionMultiplexer Connection { get { return _connection; } }
-private static async Task InitializeAsync()
-{
- if (_didInitialize)
- {
- throw new InvalidOperationException("Cannot initialize more than once.");
- }
- _connection = await CreateConnectionAsync();
- _didInitialize = true;
-}
-// This method may return null if it fails to acquire the semaphore in time.
-// Use the return value to update the "connection" field
-private static async Task<ConnectionMultiplexer> CreateConnectionAsync()
-{
- if (_connection != null)
- {
- // If we already have a good connection, let's re-use it
- return _connection;
- }
- try
- {
- await _initSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // We failed to enter the semaphore in the given amount of time. Connection will either be null, or have a value that was created by another thread.
- return _connection;
- }
- // We entered the semaphore successfully.
- try
- {
- if (_connection != null)
- {
- // Another thread must have finished creating a new connection while we were waiting to enter the semaphore. Let's use it
- return _connection;
- }
- // Otherwise, we really need to create a new connection.
- string cacheConnection = Configuration[SecretName];
- return await ConnectionMultiplexer.ConnectAsync(cacheConnection);
- }
- finally
- {
- _initSemaphore.Release();
- }
-}
-private static async Task CloseConnectionAsync(ConnectionMultiplexer oldConnection)
-{
- if (oldConnection == null)
- {
- return;
- }
- try
- {
- await oldConnection.CloseAsync();
- }
- catch (Exception)
- {
- // Ignore any errors from the oldConnection
- }
-}
-/// <summary>
-/// Force a new ConnectionMultiplexer to be created.
-/// NOTES:
-/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnectAsync().
-/// 2. Call ForceReconnectAsync() for RedisConnectionExceptions and RedisSocketExceptions. You can also call it for RedisTimeoutExceptions,
-/// but only if you're using generous ReconnectMinInterval and ReconnectErrorThreshold. Otherwise, establishing new connections can cause
-/// a cascade failure on a server that's timing out because it's already overloaded.
-/// 3. The code will:
-/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
-/// b. not reconnect more frequently than configured in "ReconnectMinInterval"
-/// </summary>
-public static async Task ForceReconnectAsync()
-{
- var utcNow = DateTimeOffset.UtcNow;
- long previousTicks = Interlocked.Read(ref _lastReconnectTicks);
- var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
- TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- // If multiple threads call ForceReconnectAsync at the same time, we only want to honor one of them.
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return;
- }
- try
- {
- await _reconnectSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // If we fail to enter the semaphore, then it is possible that another thread has already done so.
- // ForceReconnectAsync() can be retried while connectivity problems persist.
- return;
- }
- try
- {
- utcNow = DateTimeOffset.UtcNow;
- elapsedSinceLastReconnect = utcNow - previousReconnectTime;
- if (_firstErrorTime == DateTimeOffset.MinValue)
- {
- // We haven't seen an error since last reconnect, so set initial values.
- _firstErrorTime = utcNow;
- _previousErrorTime = utcNow;
- return;
- }
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return; // Some other thread made it through the check and the lock, so nothing to do.
- }
- TimeSpan elapsedSinceFirstError = utcNow - _firstErrorTime;
- TimeSpan elapsedSinceMostRecentError = utcNow - _previousErrorTime;
- bool shouldReconnect =
- elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
- && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
- // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
- _previousErrorTime = utcNow;
- if (!shouldReconnect)
- {
- return;
- }
- _firstErrorTime = DateTimeOffset.MinValue;
- _previousErrorTime = DateTimeOffset.MinValue;
- ConnectionMultiplexer oldConnection = _connection;
- await CloseConnectionAsync(oldConnection);
- _connection = null;
- _connection = await CreateConnectionAsync();
- Interlocked.Exchange(ref _lastReconnectTicks, utcNow.UtcTicks);
- }
- finally
- {
- _reconnectSemaphore.Release();
- }
-}
-// In real applications, consider using a framework such as
-// Polly to make it easier to customize the retry approach.
-private static async Task<T> BasicRetryAsync<T>(Func<T> func)
-{
- int reconnectRetry = 0;
- int disposedRetry = 0;
- while (true)
- {
- try
- {
- return func();
- }
- catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
- {
- reconnectRetry++;
- if (reconnectRetry > RetryMaxAttempts)
- throw;
- await ForceReconnectAsync();
- }
- catch (ObjectDisposedException)
- {
- disposedRetry++;
- if (disposedRetry > RetryMaxAttempts)
- throw;
- }
- }
-}
-public static Task<IDatabase> GetDatabaseAsync()
-{
- return BasicRetryAsync(() => Connection.GetDatabase());
-}
-public static Task<System.Net.EndPoint[]> GetEndPointsAsync()
-{
- return BasicRetryAsync(() => Connection.GetEndPoints());
-}
-public static Task<IServer> GetServerAsync(string host, int port)
-{
- return BasicRetryAsync(() => Connection.GetServer(host, port));
-}
-```
+The home page layout for this sample is stored in the *_Layout.cshtml* file. From this page, you start the actual cache testing by selecting **Azure Cache for Redis Test**.
-Open *Views\Shared\\_Layout.cshtml*.
+1. Open *Views\Shared\\_Layout.cshtml*.
-Replace:
+1. In `<div class="navbar-header">`, you should see the following:
-```cshtml
-<a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="Index">ContosoTeamStats</a>
-```
+ ```html
+ <a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="RedisCache">Azure Cache for Redis Test</a>
+ ```
-with:
+### Showing data from the cache
-```cshtml
-<a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="RedisCache">Azure Cache for Redis Test</a>
-```
+From the home page, you select **Azure Cache for Redis Test** to see the sample output.
-## Add a new RedisCache view and update the styles
-
-Create a new file *Views\Home\RedisCache.cshtml* with the following content:
-
-```cshtml
-@{
- ViewBag.Title = "Azure Cache for Redis Test";
-}
-
-<h2>@ViewBag.Title.</h2>
-<h3>@ViewBag.Message</h3>
-<br /><br />
-<table border="1" cellpadding="10" class="redis-results">
- <tr>
- <th>Command</th>
- <th>Result</th>
- </tr>
- <tr>
- <td>@ViewBag.command1</td>
- <td><pre>@ViewBag.command1Result</pre></td>
- </tr>
- <tr>
- <td>@ViewBag.command2</td>
- <td><pre>@ViewBag.command2Result</pre></td>
- </tr>
- <tr>
- <td>@ViewBag.command3</td>
- <td><pre>@ViewBag.command3Result</pre></td>
- </tr>
- <tr>
- <td>@ViewBag.command4</td>
- <td><pre>@ViewBag.command4Result</pre></td>
- </tr>
- <tr>
- <td>@ViewBag.command5</td>
- <td><pre>@ViewBag.command5Result</pre></td>
- </tr>
-</table>
-```
-
-Add the following lines to *wwwroot\css\site.css*:
-
-```css
-.redis-results pre {
- white-space: pre-wrap;
-}
-```
+1. In **Solution Explorer**, expand the **Views** folder, and then open the **Home** folder.
-## Run the app locally
+1. You should see this code in the *RedisCache.cshtml* file.
-Execute the following command in your command window to build the app:
+ ```csharp
+ @{
+ ViewBag.Title = "Azure Cache for Redis Test";
+ }
-```dotnetcli
-dotnet build
-```
+ <h2>@ViewBag.Title.</h2>
+ <h3>@ViewBag.Message</h3>
+ <br /><br />
+ <table border="1" cellpadding="10">
+ <tr>
+ <th>Command</th>
+ <th>Result</th>
+ </tr>
+ <tr>
+ <td>@ViewBag.command1</td>
+ <td><pre>@ViewBag.command1Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command2</td>
+ <td><pre>@ViewBag.command2Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command3</td>
+ <td><pre>@ViewBag.command3Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command4</td>
+ <td><pre>@ViewBag.command4Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command5</td>
+ <td><pre>@ViewBag.command5Result</pre></td>
+ </tr>
+ </table>
+ ```
-Then run the app with the following command:
+## Run the app locally
-```dotnetcli
-dotnet run
-```
+1. Execute the following command in your command window to build the app:
-Browse to `https://localhost:5001` in your web browser.
+ ```dos
+ dotnet build
+ ```
+
+1. Then run the app with the following command:
-Select **Azure Cache for Redis Test** in the navigation bar of the web page to test cache access.
+ ```dos
+ dotnet run
+ ```
+
+1. Browse to `https://localhost:5001` in your web browser.
-In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+1. Select **Azure Cache for Redis Test** in the navigation bar of the web page to test cache access.
-![Simple test completed local](./media/cache-web-app-aspnet-core-howto/cache-simple-test-complete-local.png)
## Clean up resources
-If you're continuing to the next tutorial, you can keep the resources that you created in this quickstart and reuse them.
+If you continue to use this quickstart, you can keep the resources you created and reuse them.
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources that you created in this quickstart to avoid charges.
Otherwise, if you're finished with the quickstart sample application, you can de
1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource groups**.
-2. In the **Filter by name...** box, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group, in the results list, select **...**, and then select **Delete resource group**.
+1. In the **Filter by name...** box, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group, in the results list, select **...**, and then select **Delete resource group**.
- ![Delete](./media/cache-web-app-howto/cache-delete-resource-group.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-delete-resource-group.png" alt-text="Delete":::
-You're asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and then select **Delete**.
+1. You're asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and then select **Delete**.
After a few moments, the resource group and all of its resources are deleted. ## Next steps-
-For information on deploying to Azure, see:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](../app-service/tutorial-dotnetcore-sqldb-app.md)
-
-For information about storing the cache connection secret in Azure Key Vault, see:
-
-> [!div class="nextstepaction"]
-> [Azure Key Vault configuration provider in ASP.NET Core](/aspnet/core/security/key-vault-configuration)
-
-Want to scale your cache from a lower tier to a higher tier?
-
-> [!div class="nextstepaction"]
-> [How to Scale Azure Cache for Redis](./cache-how-to-scale.md)
-
-Want to optimize and save on your cloud spending?
-
-> [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Connection resilience](cache-best-practices-connection.md)
+- [Best Practices Development](cache-best-practices-development.md)
azure-cache-for-redis Cache Web App Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-howto.md
description: In this quickstart, you learn how to create an ASP.NET web app with
Previously updated : 09/29/2020 Last updated : 03/25/2022
-#Customer intent: As an ASP.NET developer, new to Azure Cache for Redis, I want to create a new ASP.NET web app that uses Azure Cache for Redis.
+ + # Quickstart: Use Azure Cache for Redis with an ASP.NET web app In this quickstart, you use Visual Studio 2019 to create an ASP.NET web application that connects to Azure Cache for Redis to store and retrieve data from the cache. You then deploy the app to Azure App Service. ## Skip to the code on GitHub
-If you want to skip straight to the code, see the [ASP.NET quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet) on GitHub.
+Clone the repo [https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet) on GitHub.
## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet) - [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** and **Azure development** workloads.
-## Create the Visual Studio project
-
-1. Open Visual Studio, and then select **File** > **New** > **Project**.
-
-2. In the **Create a new project** dialog box, take the following steps:
-
- ![Create project](./media/cache-web-app-howto/cache-create-project.png)
-
- a. In the search box, enter _C# ASP.NET Web Application_.
-
- b. Select **ASP.NET Web Application (.NET Framework)**.
-
- c. Select **Next**.
-
-3. In the **Project name** box, give the project a name. For this example, we used **ContosoTeamStats**.
-
-4. Verify that **.NET Framework 4.6.1** or higher is selected.
-
-5. Select **Create**.
-
-6. Select **MVC** as the project type.
-
-7. Make sure that **No Authentication** is specified for the **Authentication** settings. Depending on your version of Visual Studio, the default **Authentication** setting might be something else. To change it, select **Change Authentication** and then **No Authentication**.
-
-8. Select **Create** to create the project.
- ## Create a cache Next, you create the cache for the app.
Next, you create the cache for the app.
## Update the MVC application
-In this section, you update the application to support a new view that displays a simple test against Azure Cache for Redis.
+In this section, you can see an MVC application that presents a view that displays a simple test against Azure Cache for Redis.
-- [Update the web.config file with an app setting for the cache](#update-the-webconfig-file-with-an-app-setting-for-the-cache)-- Configure the application to use the StackExchange.Redis client-- Update the HomeController and Layout-- Add a new RedisCache view
+### How the web.config file connects to the cache
-### Update the web.config file with an app setting for the cache
-
-When you run the application locally, the information in *CacheSecrets.config* is used to connect to your Azure Cache for Redis instance. Later you deploy this application to Azure. At that time, you configure an app setting in Azure that the application uses to retrieve the cache connection information instead of this file.
+When you run the application locally, the information in *CacheSecrets.config* is used to connect to your Azure Cache for Redis instance. Later, you can deploy this application to Azure. At that time, you configure an app setting in Azure that the application uses to retrieve the cache connection information instead of this file.
Because the file *CacheSecrets.config* isn't deployed to Azure with your application, you only use it while testing the application locally. Keep this information as secure as possible to prevent malicious access to your cache data. #### To update the *web.config* file
-1. In **Solution Explorer**, double-click the *web.config* file to open it.
+1. In **Solution Explorer**, open the *web.config* file.
- ![Web.config](./media/cache-web-app-howto/cache-web-config.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-web-config.png" alt-text="Web.config":::
-2. In the *web.config* file, find the `<appSetting>` element. Then add the following `file` attribute. If you used a different file name or location, substitute those values for the ones that are shown in the example.
+1. In the *web.config* file, you can see how to set the `<appSettings>` element for running the application locally.
-- Before: `<appSettings>`-- After: `<appSettings file="C:\AppSecrets\CacheSecrets.config">`
+ `<appSettings file="C:\AppSecrets\CacheSecrets.config">`
The ASP.NET runtime merges the contents of the external file with the markup in the `<appSettings>` element. The runtime ignores the file attribute if the specified file can't be found. Your secrets (the connection string to your cache) aren't included as part of the source code for the application. When you deploy your web app to Azure, the *CacheSecrets.config* file isn't deployed.
-### To configure the application to use StackExchange.Redis
+## Install StackExchange.Redis
+
+Your solution needs the `StackExchange.Redis` package to run. Install it with this procedure:
1. To configure the app to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) NuGet package for Visual Studio, select **Tools > NuGet Package Manager > Package Manager Console**.
-2. Run the following command from the `Package Manager Console` window:
+1. Run the following command from the `Package Manager Console` window:
```powershell Install-Package StackExchange.Redis ```
-3. The NuGet package downloads and adds the required assembly references for your client application to access Azure Cache for Redis with the StackExchange.Azure Cache for Redis client. If you prefer to use a strong-named version of the `StackExchange.Redis` client library, install the `StackExchange.Redis` package.
+1. The NuGet package downloads and adds the required assembly references for your client application to access Azure Cache for Redis with the `StackExchange.Redis` client.
-### To update the HomeController and Layout
+<!--
-1. In **Solution Explorer**, expand the **Controllers** folder, and then open the *HomeController.cs* file.
+Philo - Isn't this superfluous now?
-2. Add the following `using` statements at the top of the file.
+1. If you prefer to use a strong-named version of the `StackExchange.Redis` client library, install the `StackExchange.Redis` package.
+ -->
- ```csharp
- using StackExchange.Redis;
- using System.Configuration;
- using System.Net.Sockets;
- using System.Text;
- using System.Threading;
- ```
+## Connect to the cache with RedisConnection
-3. Add the following members to the `HomeController` class to support a new `RedisCache` action that runs some commands against the new cache.
+The connection to your cache is managed by the `RedisConnection` class. The connection is first made in this statement from `ContosoTeamStats/Controllers/HomeController.cs`:
- ```csharp
- public async Task<ActionResult> RedisCache()
- {
- ViewBag.Message = "A simple example with Azure Cache for Redis on ASP.NET.";
-
- if (Connection == null)
- {
- await InitializeAsync();
- }
-
- IDatabase cache = await GetDatabaseAsync();
-
- // Perform cache operations using the cache object...
-
- // Simple PING command
- ViewBag.command1 = "PING";
- ViewBag.command1Result = cache.Execute(ViewBag.command1).ToString();
-
- // Simple get and put of integral data types into the cache
- ViewBag.command2 = "GET Message";
- ViewBag.command2Result = cache.StringGet("Message").ToString();
-
- ViewBag.command3 = "SET Message \"Hello! The cache is working from ASP.NET!\"";
- ViewBag.command3Result = cache.StringSet("Message", "Hello! The cache is working from ASP.NET!").ToString();
-
- // Demonstrate "SET Message" executed as expected...
- ViewBag.command4 = "GET Message";
- ViewBag.command4Result = cache.StringGet("Message").ToString();
-
- // Get the client list, useful to see if connection list is growing...
- // Note that this requires allowAdmin=true in the connection string
- ViewBag.command5 = "CLIENT LIST";
- StringBuilder sb = new StringBuilder();
- var endpoint = (System.Net.DnsEndPoint)(await GetEndPointsAsync())[0];
- IServer server = await GetServerAsync(endpoint.Host, endpoint.Port);
- ClientInfo[] clients = await server.ClientListAsync();
-
- sb.AppendLine("Cache response :");
- foreach (ClientInfo client in clients)
- {
- sb.AppendLine(client.Raw);
- }
-
- ViewBag.command5Result = sb.ToString();
-
- return View();
- }
-
- private static long _lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
- private static DateTimeOffset _firstErrorTime = DateTimeOffset.MinValue;
- private static DateTimeOffset _previousErrorTime = DateTimeOffset.MinValue;
-
- private static SemaphoreSlim _reconnectSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
- private static SemaphoreSlim _initSemaphore = new SemaphoreSlim(initialCount: 1, maxCount: 1);
-
- private static ConnectionMultiplexer _connection;
- private static bool _didInitialize = false;
-
- // In general, let StackExchange.Redis handle most reconnects,
- // so limit the frequency of how often ForceReconnect() will
- // actually reconnect.
- public static TimeSpan ReconnectMinInterval => TimeSpan.FromSeconds(60);
-
- // If errors continue for longer than the below threshold, then the
- // multiplexer seems to not be reconnecting, so ForceReconnect() will
- // re-create the multiplexer.
- public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
-
- public static TimeSpan RestartConnectionTimeout => TimeSpan.FromSeconds(15);
-
- public static int RetryMaxAttempts => 5;
-
- public static ConnectionMultiplexer Connection { get { return _connection; } }
-
- public static async Task InitializeAsync()
- {
- if (_didInitialize)
- {
- throw new InvalidOperationException("Cannot initialize more than once.");
- }
-
- _connection = await CreateConnectionAsync();
- _didInitialize = true;
- }
-
- // This method may return null if it fails to acquire the semaphore in time.
- // Use the return value to update the "connection" field
- private static async Task<ConnectionMultiplexer> CreateConnectionAsync()
- {
- if (_connection != null)
- {
- // If we already have a good connection, let's re-use it
- return _connection;
- }
-
- try
- {
- await _initSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // We failed to enter the semaphore in the given amount of time. Connection will either be null, or have a value that was created by another thread.
- return _connection;
- }
-
- // We entered the semaphore successfully.
- try
- {
- if (_connection != null)
- {
- // Another thread must have finished creating a new connection while we were waiting to enter the semaphore. Let's use it
- return _connection;
- }
-
- // Otherwise, we really need to create a new connection.
- string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
- return await ConnectionMultiplexer.ConnectAsync(cacheConnection);
- }
- finally
- {
- _initSemaphore.Release();
- }
- }
-
- private static async Task CloseConnectionAsync(ConnectionMultiplexer oldConnection)
- {
- if (oldConnection == null)
- {
- return;
- }
- try
- {
- await oldConnection.CloseAsync();
- }
- catch (Exception)
- {
- // Ignore any errors from the oldConnection
- }
- }
-
- /// <summary>
- /// Force a new ConnectionMultiplexer to be created.
- /// NOTES:
- /// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnectAsync().
- /// 2. Call ForceReconnectAsync() for RedisConnectionExceptions and RedisSocketExceptions. You can also call it for RedisTimeoutExceptions,
- /// but only if you're using generous ReconnectMinInterval and ReconnectErrorThreshold. Otherwise, establishing new connections can cause
- /// a cascade failure on a server that's timing out because it's already overloaded.
- /// 3. The code will:
- /// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
- /// b. not reconnect more frequently than configured in "ReconnectMinInterval"
- /// </summary>
- public static async Task ForceReconnectAsync()
- {
- var utcNow = DateTimeOffset.UtcNow;
- long previousTicks = Interlocked.Read(ref _lastReconnectTicks);
- var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
- TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
-
- // If multiple threads call ForceReconnectAsync at the same time, we only want to honor one of them.
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return;
- }
-
- try
- {
- await _reconnectSemaphore.WaitAsync(RestartConnectionTimeout);
- }
- catch
- {
- // If we fail to enter the semaphore, then it is possible that another thread has already done so.
- // ForceReconnectAsync() can be retried while connectivity problems persist.
- return;
- }
-
- try
- {
- utcNow = DateTimeOffset.UtcNow;
- elapsedSinceLastReconnect = utcNow - previousReconnectTime;
-
- if (_firstErrorTime == DateTimeOffset.MinValue)
- {
- // We haven't seen an error since last reconnect, so set initial values.
- _firstErrorTime = utcNow;
- _previousErrorTime = utcNow;
- return;
- }
-
- if (elapsedSinceLastReconnect < ReconnectMinInterval)
- {
- return; // Some other thread made it through the check and the lock, so nothing to do.
- }
-
- TimeSpan elapsedSinceFirstError = utcNow - _firstErrorTime;
- TimeSpan elapsedSinceMostRecentError = utcNow - _previousErrorTime;
-
- bool shouldReconnect =
- elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
- && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
-
- // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
- _previousErrorTime = utcNow;
-
- if (!shouldReconnect)
- {
- return;
- }
-
- _firstErrorTime = DateTimeOffset.MinValue;
- _previousErrorTime = DateTimeOffset.MinValue;
-
- ConnectionMultiplexer oldConnection = _connection;
- await CloseConnectionAsync(oldConnection);
- _connection = null;
- _connection = await CreateConnectionAsync();
- Interlocked.Exchange(ref _lastReconnectTicks, utcNow.UtcTicks);
- }
- finally
- {
- _reconnectSemaphore.Release();
- }
- }
-
- // In real applications, consider using a framework such as
- // Polly to make it easier to customize the retry approach.
- private static async Task<T> BasicRetryAsync<T>(Func<T> func)
- {
- int reconnectRetry = 0;
- int disposedRetry = 0;
-
- while (true)
- {
- try
- {
- return func();
- }
- catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
- {
- reconnectRetry++;
- if (reconnectRetry > RetryMaxAttempts)
- throw;
- await ForceReconnectAsync();
- }
- catch (ObjectDisposedException)
- {
- disposedRetry++;
- if (disposedRetry > RetryMaxAttempts)
- throw;
- }
- }
- }
-
- public static Task<IDatabase> GetDatabaseAsync()
- {
- return BasicRetryAsync(() => Connection.GetDatabase());
- }
-
- public static Task<System.Net.EndPoint[]> GetEndPointsAsync()
- {
- return BasicRetryAsync(() => Connection.GetEndPoints());
- }
-
- public static Task<IServer> GetServerAsync(string host, int port)
- {
- return BasicRetryAsync(() => Connection.GetServer(host, port));
- }
- ```
+```csharp
+    private static Task<RedisConnection> _redisConnectionFactory = RedisConnection.InitializeAsync(connectionString: ConfigurationManager.AppSettings["CacheConnection"].ToString());
-4. In **Solution Explorer**, expand the **Views** > **Shared** folder. Then open the *_Layout.cshtml* file.
+```
- Replace:
+The value of the *CacheConnection* app setting is read with `ConfigurationManager.AppSettings` and is used as the cache connection string, which includes the access key as its password.
- ```csharp
- @Html.ActionLink("Application name", "Index", "Home", new { area = "" }, new { @class = "navbar-brand" })
- ```
+In `RedisConnection.cs`, you see the `StackExchange.Redis` namespace has been added to the code. This is needed for the `RedisConnection` class.
+
+```csharp
+using StackExchange.Redis;
+```
+
+The `RedisConnection` code ensures that there is always a healthy connection to the cache by managing the `ConnectionMultiplexer` instance from `StackExchange.Redis`. The `RedisConnection` class recreates the connection when a connection is lost and can't reconnect automatically.
- with:
+For more information, see [StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/) and the code in a [GitHub repo](https://github.com/StackExchange/StackExchange.Redis).
+
+<!-- :::code language="csharp" source="~/samples-cache/quickstart/aspnet/ContosoTeamStats/RedisConnection.cs "::: -->
+
+## Layout views in the sample
+
+The home page layout for this sample is stored in the *_Layout.cshtml* file. From this page, you start the actual cache testing by selecting **Azure Cache for Redis Test**.
+
+1. In **Solution Explorer**, expand the **Views** > **Shared** folder. Then open the *_Layout.cshtml* file.
+
+1. You see the following line in `<div class="navbar-header">`.
```csharp @Html.ActionLink("Azure Cache for Redis Test", "RedisCache", "Home", new { area = "" }, new { @class = "navbar-brand" }) ```
-### To add a new RedisCache view
+ :::image type="content" source="media/cache-web-app-howto/cache-welcome-page.png" alt-text="screenshot of welcome page":::
+
+### Showing data from the cache
-1. In **Solution Explorer**, expand the **Views** folder, and then right-click the **Home** folder. Choose **Add** > **View...**.
+From the home page, you select **Azure Cache for Redis Test** to see the sample output.
-2. In the **Add View** dialog box, enter **RedisCache** for the View Name. Then select **Add**.
+1. In **Solution Explorer**, expand the **Views** folder, and then open the **Home** folder.
-3. Replace the code in the *RedisCache.cshtml* file with the following code:
+1. You should see this code in the *RedisCache.cshtml* file.
```csharp @{
By default, the project is configured to host the app locally in [IIS Express](/
1. In Visual Studio, select **Debug** > **Start Debugging** to build and start the app locally for testing and debugging.
-2. In the browser, select **Azure Cache for Redis Test** on the navigation bar.
+1. In the browser, select **Azure Cache for Redis Test** on the navigation bar.
-3. In the following example, the `Message` key previously had a cached value, which was set by using the Azure Cache for Redis console in the portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+1. In the following example, the `Message` key previously had a cached value, which was set by using the Azure Cache for Redis console in the portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
- ![Simple test completed local](./media/cache-web-app-howto/cache-simple-test-complete-local.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-simple-test-complete-local.png" alt-text="Screenshot of simple test completed local":::
## Publish and run in Azure
After you successfully test the app locally, you can deploy the app to Azure and
1. In Visual Studio, right-click the project node in Solution Explorer. Then select **Publish**.
- ![Publish](./media/cache-web-app-howto/cache-publish-app.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-publish-app.png" alt-text="Publish":::
-2. Select **Microsoft Azure App Service**, select **Create New**, and then select **Publish**.
+1. Select **Microsoft Azure App Service**, select **Create New**, and then select **Publish**.
- ![Publish to App Service](./media/cache-web-app-howto/cache-publish-to-app-service.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-publish-to-app-service.png" alt-text="Publish to App Service":::
-3. In the **Create App Service** dialog box, make the following changes:
+1. In the **Create App Service** dialog box, make the following changes:
| Setting | Recommended value | Description | | - | :: | -- |
After you successfully test the app locally, you can deploy the app to Azure and
| **Resource group** | Use the same resource group where you created the cache (for example, *TestResourceGroup*). | The resource group helps you manage all resources as a group. Later, when you want to delete the app, you can just delete the group. | | **App Service plan** | Select **New**, and then create a new App Service plan named *TestingPlan*. <br />Use the same **Location** you used when creating your cache. <br />Choose **Free** for the size. | An App Service plan defines a set of compute resources for a web app to run with. |
- ![App Service dialog box](./media/cache-web-app-howto/cache-create-app-service-dialog.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-create-app-service-dialog.png" alt-text="App Service dialog box":::
-4. After you configure the App Service hosting settings, select **Create**.
+1. After you configure the App Service hosting settings, select **Create**.
-5. Monitor the **Output** window in Visual Studio to see the publishing status. After the app has been published, the URL for the app is logged:
+1. Monitor the **Output** window in Visual Studio to see the publishing status. After the app has been published, the URL for the app is logged:
- ![Publishing output](./media/cache-web-app-howto/cache-publishing-output.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-publishing-output.png" alt-text="Publishing output":::
### Add the app setting for the cache
After the new app has been published, add a new app setting. This setting is use
1. Type the app name in the search bar at the top of the Azure portal to find the new app you created.
- ![Find app](./media/cache-web-app-howto/cache-find-app-service.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-find-app-service.png" alt-text="Find app":::
2. Add a new app setting named **CacheConnection** for the app to use to connect to the cache. Use the same value you configured for `CacheConnection` in your *CacheSecrets.config* file. The value contains the cache host name and access key.
- ![Add app setting](./media/cache-web-app-howto/cache-add-app-setting.png)
+ :::image type="content" source="media/cache-web-app-howto/cache-add-app-setting.png" alt-text="Add app setting":::
### Run the app in Azure
-In your browser, go to the URL for the app. The URL appears in the results of the publishing operation in the Visual Studio output window. It's also provided in the Azure portal on the overview page of the app you created.
+1. In your browser, go to the URL for the app. The URL appears in the results of the publishing operation in the Visual Studio output window. It's also provided in the Azure portal on the overview page of the app you created.
-Select **Azure Cache for Redis Test** on the navigation bar to test cache access.
-
-![Simple test completed Azure](./media/cache-web-app-howto/cache-simple-test-complete-azure.png)
+1. Select **Azure Cache for Redis Test** on the navigation bar to test cache access as you did with the local version.
## Clean up resources
-If you're continuing to the next tutorial, you can keep the resources that you created in this quickstart and reuse them.
+If you continue to use this quickstart, you can keep the resources you created and reuse them.
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources that you created in this quickstart to avoid charges.
Otherwise, if you're finished with the quickstart sample application, you can de
1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource groups**.
-2. In the **Filter by name...** box, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group, in the results list, select **...**, and then select **Delete resource group**.
+1. In the **Filter by name...** box, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group, in the results list, select **...**, and then select **Delete resource group**.
- ![Delete](./media/cache-web-app-howto/cache-delete-resource-group.png)
+ :::image type="content" source="media/cache-dotnet-core-quickstart/cache-delete-resource-group.png" alt-text="Delete":::
-You're asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and then select **Delete**.
+1. You're asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and then select **Delete**.
After a few moments, the resource group and all of its resources are deleted. ## Next steps
-In the next tutorial, you use Azure Cache for Redis in a more realistic scenario to improve performance of an app. You update this application to cache leaderboard results using the cache-aside pattern with ASP.NET and a database.
-
-> [!div class="nextstepaction"]
-> [Create a cache-aside leaderboard on ASP.NET](cache-web-app-cache-aside-leaderboard.md)
+- [Connection resilience](cache-best-practices-connection.md)
+- [Best Practices Development](cache-best-practices-development.md)
azure-functions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/troubleshoot.md
Learn more about monitoring Azure Functions and logic apps:
* [Monitor logic apps](../../logic-apps/monitor-logic-apps.md).
-* If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+* If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is also available for this preview version.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||Gallery URL|gallery.azure.com|gallery.azure.us|| ||Microsoft Azure portal|portal.azure.com|portal.azure.us|| ||Microsoft Intune|enterpriseregistration.windows.net|enterpriseregistration.microsoftonline.us|Enterprise registration|
-|||manage.microsoft.com|\manage.microsoft.us|Enterprise enrollment|
+|||manage.microsoft.com|manage.microsoft.us|Enterprise enrollment|
|**Migration**|Azure Site Recovery|hypervrecoverymanager.windowsazure.com|hypervrecoverymanager.windowsazure.us|Site Recovery service| |||backup.windowsazure.com|backup.windowsazure.us|Protection service| |||blob.core.windows.net|blob.core.usgovcloudapi.net|Storing VM snapshots|
The following Azure Cost Management + Billing **features are not currently avail
This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-### [Media Services](/media-services/)
+### [Media Services](/azure/media-services/)
-For Azure Media Services v3 feature variations in Azure Government, see [Azure Media Services v3 clouds and regions availability](/media-services/latest/azure-clouds-regions#us-government-cloud).
+For Azure Media Services v3 feature variations in Azure Government, see [Azure Media Services v3 clouds and regions availability](/azure/media-services/latest/azure-clouds-regions#us-government-cloud).
## Migration
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** |
-| [Media Services](/media-services/) | &#x2705; | &#x2705; |
+| [Media Services](/azure/media-services/) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; | | [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Media Services](/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Media Services](/azure/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Azure portal](../../azure-portal/index.yml) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[FourPoints Technology](https://www.4points.com)| |[For The Record LTD](https://www.fortherecord.com/)| |[Fujitsu America Inc.](https://www.fujitsu.com/us/)|
-|[Futurez Consulting, LLC](https://futurezconsulting.com/)|
|[General Dynamics Information Technology](https://gdit.com/)| |[Giga-Green Technologies](https://giga-green.com)| |[Gimmal](https://www.gimmal.com/)|
azure-maps Tutorial Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md
To update the `occupied` state of the unit with feature `id` "UNIT26":
## Additional information
-* For information on how to retrieve the state of a feature using its feature `id`, see [Feature State - Get States](/rest/api/maps/v2/feature-state/get-states).
- * For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset](/rest/api/maps/v2/feature-state/delete-stateset) . * For information on using the Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature-state) to apply styles that are based on the dynamic properties of indoor map data features, see how to article [Implement dynamic styling for Creator indoor maps](indoor-map-dynamic-styling.md).
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
Title: Troubleshooting Azure autoscale
-description: Tracking down problems with Azure autoscaling used in Service Fabric, Virtual Machines, Web Apps, and cloud services.
+ Title: Troubleshooting Azure Monitor autoscale
+description: Tracking down problems with Azure Monitor autoscaling used in Service Fabric, Virtual Machines, Web Apps, and cloud services.
Last updated 11/4/2019
-# Troubleshooting Azure autoscale
+# Troubleshooting Azure Monitor autoscale
Azure Monitor autoscale helps you to have the right amount of resources running to handle the load on your application. It enables you to add resources to handle increases in load and also save money by removing resources that are sitting idle. You can scale based on a schedule, fixed date-time, or resource metric you choose. For more information, see [Autoscale Overview](autoscale-overview.md).
Create alert rules to get notified of autoscale actions or failures. You can als
For more information, see [autoscale resource logs](autoscale-resource-log-schema.md) ## Next steps
-Read information on [autoscale best practices](autoscale-best-practices.md).
+Read information on [autoscale best practices](autoscale-best-practices.md).
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
If you would rather integrate with an existing workspace, perform the following
## Enable using Terraform
-1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://www.terraform.io/docs/providers/azurerm/d/kubernetes_cluster.html#addon_profile)
+1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster)
``` addon_profile {
If you would rather integrate with an existing workspace, perform the following
} ```
-2. Add the [azurerm_log_analytics_solution](https://www.terraform.io/docs/providers/azurerm/r/log_analytics_solution.html) following the steps in the Terraform documentation.
+2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) following the steps in the Terraform documentation.
+
+3. Metrics aren't collected by default through Terraform. After onboarding, there's an additional step: assign the Monitoring Metrics Publisher role, which is required to [enable the metrics](./container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli), as sketched below.
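+
+    For reference, here's a minimal Azure CLI sketch of that role assignment. It's an illustration only; both values are placeholders for your environment, and the linked article has the exact steps.
+
+    ```bash
+    # Grant the cluster's omsagent identity permission to publish metrics to Azure Monitor.
+    # <omsagent-identity-client-id> and <aks-cluster-resource-id> are placeholders.
+    az role assignment create \
+      --assignee <omsagent-identity-client-id> \
+      --scope <aks-cluster-resource-id> \
+      --role "Monitoring Metrics Publisher"
+    ```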
## Enable from Azure Monitor in the portal
azure-monitor Data Ingestion From File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-from-file.md
- Title: Ingest data from a file using Data Collection Rules (DCR)
-description: Learn how to ingest data from a file into a Log Analytics workspace from files using DCR.
--- Previously updated : 03/21/2022--
-# Customer intent: As a DevOps specialist, I want to ingest external data from a file into a workspace.
-
-# Collect and ingest data from a file using Data Collection Rules (DCR) (Preview)
-
-If you want to collect, log files from your systems using agents, you can use a Data Collection Rules.
-
-You can define how Azure Monitor transforms and stores data ingested into your workspace by setting [Data Collection Rules (DCR)](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview). Using DCR lets you ingest data quickly from different log formats.
-
-This tutorial explains how to ingest data from a file into a Log Analytics workspace using DCR.
-
->[!NOTE]
-> * To use this method, you need to make use of MMA agent. We recommend using AMA that has more native integration with Custom Logs v2 (currently in preview)
-> * Use [Custom Logs v2](https://docs.microsoft.com/azure/azure-monitor/logs/custom-logs-overview) that allows transformations and exports
-
-## Prerequisites
-
-To complete this tutorial, you need a [Log Analytics workspace](quick-create-workspace.md).
-
-## Create a custom log table
-
->[!TIP]
-> * If you already have a custom log table, you can skip this step and go and set a DCR.
-
-Before you can send data to the workspace, you need to create the custom table that the data will be sent to:
-
-1. Go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace.
-1. Select **Custom Log** > **Add custom log**.
-1. Upload a sample log file.
-1. Select a record delimiter.
-1. Set a collection path:
- 1. Select Windows or Linux to specify which path format you're adding.
- 1. Set the path on to the custom log file on your machine.
-1. Specify a name for the table. Azure Monitor automatically adds the *_CL* (custom log) suffix to the table name.
-1. Select **Create**.
-## Create a Data Collection Rule (DCR)
-1. Make sure the name of the stream is `Custom-{TableName}`.
-
- For example:
-
- ```json
- {
- "properties": {
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "/subscriptions/<SubscriptionID/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<DCRName>",
- "workspaceId": "WorkspaceID",
- "name": "MyLogFolder"
- }
- ]
- },
- "dataFlows": [
- {
- "streams": [
- "Custom-DataPullerE2E_CL"
- ],
- "destinations": [
- "MyLogFolder"
- ],
- "transformKql": "source",
- "outputStream": "Custom-DataPullerE2E_CL"
- }
- ]
- },
- "location": "eastus2euap",
- "id": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>",
- "name": "<DCRName>",
- "type": "Microsoft.Insights/dataCollectionRules"
- }
- ```
-
-1. Set the Data Collection Rule to be the default on the workspace. Use the following API command:
-
- ```json
- PUT https://management.azure.com/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>?api-version=2015-11-01-preview
- {
- "properties": {
- "defaultDataCollectionRuleResourceId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.Insights/dataCollectionRules/<DCRName>"
- },
- "location": "eastus2euap",
- "type": "Microsoft.OperationalInsights/workspaces"
- }
- ```
-
-1. Set the table as file-based custom log ingestion via DCR eligible, use the Custom log definition API.
-
- 1. First run the following Get command:
-
- ```json
- GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/MyLogFolder/logsettings/customlogs/definitions/DataPullerE2E_CL?api-version=2020-08-01
- ```
-
- 1. Copy the response and send a PUT request:
-
- ```JSON
- {
- "Name": "DataPullerE2E_CL",
- "Description": "custom log to test Data puller E2E",
- "Inputs": [
- {
- "Location": {
- "FileSystemLocations": {
- "WindowsFileTypeLogPaths": [
- "C:\\MyLogFolder\\*.txt",
- "C:\\MyLogFolder\\MyLogFolder.txt"
- ]
- }
- },
- "RecordDelimiter": {
- "RegexDelimiter": {
- "Pattern": \\n,
- "MatchIndex": 0,
- "NumberedGroup": null
- }
- }
- }
- ],
- "Properties": [
- {
- "Name": "TimeGenerated",
- "Type": "DateTime",
- "Extraction": {
- "DateTimeExtraction": {}
- }
- }
- ],
- "SetDataCollectionRuleBased": true
- }
- ```
-
- >[!Note]
- > * The `SetDataCollectionRuleBased` flag, from the last API command, enables the table as data puller.
- > * Once you switch to DCREnabled mode, data will stop flowing unless you have DCR configured.
-
- * To validate that the value is updated, run:
- ```json
- GET https://management.azure.com/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/microsoft.operationalinsights/workspaces/MyLogFolder/datasources?api-version=2020-08-01&$filter=(kind%20eq%20'CustomLog')
- ```
azure-monitor Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingestion-time-transformations.md
Last updated 01/19/2022
-# Ingestion-time transformations in Azure Monitor Logs (preview)
+# Tutorial: Ingestion-time transformations in Azure Monitor Logs (preview)
[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested.in [!INCLUDE [Sign up for preview](../../../includes/azure-monitor-custom-logs-signup.md)]
Use ingestion-time transformation for the following scenarios:
**Simplify query requirements.** You may have a table with valuable data buried in a particular column or data that needs some type of conversion each time it's queried. Create a transformation that parses this data into a custom column so that queries don't need to parse it. Remove extra data from the column that isn't required to decrease ingestion and retention costs. ## Supported workflows
-Ingestion-time transformation is applied to any workflow that doesn't currently use a [data collection rule](../essentials/data-collection-rule-overview.md) sending data to a [supported table](tables-feature-support.md). The workflows that currently use data collection rules are as follows. Any transformation on a workspace will be ignored for these workloads.
+Ingestion-time transformation is applied to any workflow that doesn't currently use a [data collection rule](../essentials/data-collection-rule-overview.md) to send data to a [supported table](tables-feature-support.md). Any transformation on a workspace will be ignored for these workflows.
+
+The workflows that currently use data collection rules are as follows:
- [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) - [Custom logs](../logs/custom-logs-overview.md)
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
Previously updated : 10/11/2021 Last updated : 04/08/2021 # Security FAQs for Azure NetApp Files
However, you cannot create Azure policies (custom naming policies) on the Azure
Deletion of an Azure NetApp Files volume is performed programmatically with immediate effect. The delete operation includes deleting keys used for encrypting data at rest. There is no option for any scenario to recover a deleted volume once the delete operation is executed successfully (via interfaces such as the Azure portal and the API.)
+## How are the Active Directory Connector credentials stored on the Azure NetApp Files service?
+
+The AD Connector credentials are stored in the Azure NetApp Files control plane database in an encrypted format. The encryption algorithm used is AES-256 (one-way).
## Next steps
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/03/2022 Last updated : 04/08/2022 # Add module settings in the Bicep config file
For a template spec, use:
module stgModule 'ts/CoreSpecs:storage:v1' = { ```
+An alias has been predefined for the [public module registry](./modules.md#path-to-module). To reference a public module, you can use the format:
+
+```bicep
+br/public:<file>:<tag>
+```
+
+You can override the public module registry alias definition in the bicepconfig.json file:
+
+```json
+{
+ "moduleAliases": {
+ "br": {
+ "public": {
+ "registry": "<your_module_registry>",
+ "modulePath": "<optional_module_path>"
+ }
+ }
+ }
+}
+```
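+
+With an override in place, module references keep using the `br/public:` form; only the target registry changes. Here's a hedged sketch, where the module path and tag are illustrative rather than actual published modules:
+
+```bicep
+// With the override above, this reference resolves to
+// <your_module_registry>/<optional_module_path>/storage/storage-account:1.0.1
+// instead of the default public registry.
+module storageMod 'br/public:storage/storage-account:1.0.1' = {
+  name: 'storageDeployment'
+  // params for the module would go here
+}
+```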
+ ## Credentials for publishing/restoring modules To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, see [Add credential precedence to Bicep config](bicep-config.md#credential-precedence).
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
Title: Bicep modules description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 02/01/2022 Last updated : 04/08/2022 # Bicep modules Bicep enables you to organize deployments into modules. A module is just a Bicep file that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments.
-To share modules with other people in your organization, create a [template spec](../templates/template-specs.md) or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
+To share modules with other people in your organization, create a [template spec](../templates/template-specs.md), [public registry](https://github.com/Azure/bicep-registry-modules), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
> [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two:
For example, to deploy a file that is up one level in the directory from your ma
### File in registry
+#### Public module registry
+
+The public module registry is hosted in the Microsoft Container Registry (MCR). The source code and the modules are stored in [GitHub](https://github.com/azure/bicep-registry-modules). The [README file](https://github.com/azure/bicep-registry-modules#readme) in the GitHub repo lists the available modules and their latest versions:
+
+![Bicep public module registry modules](./media/modules/bicep-public-module-registry-modules.png)
+
+Select a module's version list to see all of its available versions. You can also select **Code** to see the module source code and open the README files.
+
+Only a few modules are published currently; more are coming. If you'd like to contribute to the registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
+
+To link to a public registry module, specify the module path with the following syntax:
+
+```bicep
+module <symbolic-name> 'br/public:<file-path>:<tag>' = {}
+```
+
+- **br/public** is the alias for the public module registry.
+- **file path** can contain segments separated by the `/` character.
+- **tag** is used for specifying a version for the module.
+
+For example:
++
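+The following is illustrative only; the module path, tag, and parameter are assumptions, so check the registry README for the modules and versions that are actually published:
+
+```bicep
+// Reference a module from the public registry through the br/public alias.
+// 'samples/hello-world', '1.0.1', and the 'name' parameter are illustrative values.
+module helloWorld 'br/public:samples/hello-world:1.0.1' = {
+  name: 'helloWorldDeployment'
+  params: {
+    name: 'Bicep user'
+  }
+}
+```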
+> [!NOTE]
+> **br/public** is the alias for the public registry. It can also be written as
+>
+> ```bicep
+> module <symbolic-name> 'br:mcr.microsoft.com/bicep/<file-path>:<tag>' = {}
+> ```
+>
+> For more information, see the sections on aliases and configuring aliases later in this article.
+
+#### Private module registry
+ If you've [published a module to a registry](bicep-cli.md#publish), you can link to that module. Provide the name for the Azure container registry and a path to the module. Specify the module path with the following syntax: ```bicep
The full path for a module in a registry can be long. Instead of providing the f
::: code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/modules/alias-definition.bicep" highlight="1" :::
+An alias for the public module registry has been predefined:
++
+You can override the public alias in the bicepconfig.json file.
+ ### File in template spec After creating a [template spec](../bicep/template-specs.md), you can link to that template spec in a module. Specify the template spec in the following format:
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 02/21/2022 Last updated : 04/01/2022 # Create private registry for Bicep modules
-To share [modules](modules.md) within your organization, you can create a private module registry. You publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files.
+To share [modules](modules.md) within your organization, you can create a private module registry. You publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files. To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0** or later.
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Title: Publish modules to private module registry description: Publish Bicep modules to private module registry and use the modules. Previously updated : 01/04/2022 Last updated : 04/01/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry.
# Quickstart: Publish Bicep modules to private module registry
-Learn how to publish Bicep modules to private modules registry, and how to call the modules from your Bicep files. Private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md).
+Learn how to publish Bicep modules to private modules registry, and how to call the modules from your Bicep files. Private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md). To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
## Prerequisites
azure-signalr Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-metrics.md
+
+ Title: Metrics in Azure SignalR Service
+description: Metrics in Azure SignalR Service.
+++ Last updated : 04/08/2022++
+# Metrics in Azure SignalR Service
+
+Azure SignalR Service has built-in metrics, and you can set up [alerts](../azure-monitor/alerts/alerts-overview.md) and [autoscale](./signalr-howto-scale-autoscale.md) based on these metrics.
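+
+For example, here's a minimal Azure CLI sketch of creating a metric alert. The resource names are placeholders, and the `ConnectionQuotaUtilization` metric name is an assumption to verify against the metrics table below:
+
+```bash
+# Sketch: alert when connection quota utilization averages above 80% (placeholder names).
+signalrId=$(az signalr show --name <signalr-name> --resource-group <resource-group> --query id -o tsv)
+az monitor metrics alert create \
+  --name signalr-connection-quota-alert \
+  --resource-group <resource-group> \
+  --scopes $signalrId \
+  --condition "avg ConnectionQuotaUtilization > 80"
+```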
+
+## Understand metrics
+
+Metrics provide information about how the service is running. The available metrics are:
+
+> [!CAUTION]
+> The aggregation type "Count" isn't meaningful for any of these metrics. Don't use it.
+
+|Metric|Unit|Recommended Aggregation Type|Description|Dimensions|
+||||||
+|Connection Close Count|Count|Sum|The count of connections closed for various reasons.|Endpoint, ConnectionCloseCategory|
+|Connection Count|Count|Max / Avg|The number of connections.|Endpoint|
+|Connection Open Count|Count|Sum|The count of new connections opened.|Endpoint|
+|Connection Quota Utilization|Percent|Max / Avg|The percentage of connected connections relative to the connection quota.|No Dimensions|
+|Inbound Traffic|Bytes|Sum|The inbound traffic of the service.|No Dimensions|
+|Message Count|Count|Sum|The total number of messages.|No Dimensions|
+|Outbound Traffic|Bytes|Sum|The outbound traffic of the service.|No Dimensions|
+|System Errors|Percent|Avg|The percentage of system errors.|No Dimensions|
+|User Errors|Percent|Avg|The percentage of user errors.|No Dimensions|
+
+### Understand Dimensions
+
+Dimensions of a metric are name/value pairs that carry extra data to describe the metric value.
+
+The following dimensions are available in some metrics:
+
+* Endpoint: Describes the type of connection. Dimension values: Client, Server, LiveTrace.
+* ConnectionCloseCategory: Describes why a connection was closed. Dimension values:
+  - Normal: Normal closure.
+  - Throttled: Message count/rate or connection throttling occurred. Check the current Connection Count and Message Count usage against your resource limits.
+  - PingTimeout: The connection ping timed out.
+  - NoAvailableServerConnection: The client connection can't be established (it won't even pass the handshake) because no server connection is available.
+  - InvokeUpstreamFailed: The upstream invocation failed.
+  - SlowClient: Too many messages are queued up at the service side waiting to be sent.
+  - HandshakeError: The connection terminated during the handshake phase, which can happen when the remote party closes the WebSocket connection without completing the close handshake. It's mostly caused by a network issue. Otherwise, check whether the client is able to create a WebSocket connection, for example because of browser settings.
+  - ServerConnectionNotFound: The target hub server isn't available. This is by design; no action is needed, and reconnection should happen after this drop.
+  - ServerConnectionClosed: The client connection was aborted because the corresponding server connection was dropped. When the app server uses the Azure SignalR Service SDK, it initiates server connections to the remote Azure SignalR Service in the background. Each client connection to the service is associated with one of the server connections to route traffic between the client and the app server. Once a server connection is closed, all the client connections it serves are closed with the ServerConnectionDropped message.
+  - ServiceTransientError: Internal server error.
+  - BadRequest: Caused by an invalid hub name, wrong payload, and so on.
+  - ClosedByAppServer: The app server asked the service to close the client.
+  - ServiceReload: Triggered when a connection is dropped due to an internal service component reload. This event doesn't indicate a malfunction and is part of normal service operation.
+  - ServiceModeSwitched: The connection closed after the service mode switched, for example from Serverless mode to Default mode.
+  - Unauthorized: The connection is unauthorized.
+
+Learn more about [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+
+### Understand the minimum grain of message count
+
+The minimum granularity of the message count shown in the metric is 1, which represents 2 KB of outbound data traffic. If a user sends only a small amount of data, such as a few bytes during a sampling period, the message count is 0. For example, roughly 100 KB of outbound traffic in a sampling period corresponds to a message count of about 50.
+
+To track small amounts of traffic, use the *Outbound Traffic* metric, which is counted in bytes.
+
+### Understand System Errors and User Errors
+
+The error metrics show the percentage of failed operations. Operations include connecting, sending messages, and so on. The difference between System Errors and User Errors is that the former are failures caused by internal service errors, while the latter are caused by users. Normally, System Errors should be very low and near zero.
+
+> [!IMPORTANT]
+> In some cases, the User Errors value can be consistently high, especially in serverless scenarios. In some browsers, the SignalR client doesn't close gracefully when the user closes the web page. The service eventually closes the connection because of a timeout, and the timeout closure is counted as a User Error.
+
+## Related resources
+
+- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr)
azure-signalr Signalr Howto Scale Multi Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-multi-instances.md
ms.devlang: csharp Previously updated : 03/27/2019 Last updated : 04/08/2022 # How to scale SignalR Service with multiple instances?
-The latest SignalR Service SDK supports multiple endpoints for SignalR Service instances. You can use this feature to scale the concurrent connections, or use it for cross-region messaging.
+SignalR Service SDK supports multiple endpoints for SignalR Service instances. You can use this feature to scale the concurrent connections, or use it for cross-region messaging.
## For ASP.NET Core
app.MapAzureSignalR(GetType().FullName, hub, options => {
}); ```
+## Service Endpoint Metrics
+
+To enable an advanced router, the SignalR server SDK provides multiple metrics to help the server make smart decisions. The properties are under `ServiceEndpoint.EndpointMetrics`.
+
+| Metric Name | Description |
+| -- | -- |
+| `ClientConnectionCount` | Total concurrent connected client connection count on all hubs for the service endpoint |
+| `ServerConnectionCount` | Total concurrent connected server connection count on all hubs for the service endpoint |
+| `ConnectionCapacity` | Total connection quota for the service endpoint, including client and server connections |
+
+Below is an example of customizing the router according to `ClientConnectionCount`.
+
+```cs
+private class CustomRouter : EndpointRouterDecorator
+{
+ public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
+ {
+ return endpoints.OrderBy(x => x.EndpointMetrics.ClientConnectionCount).FirstOrDefault(x => x.Online) // Get the available endpoint with minimal clients load
+ ?? base.GetNegotiateEndpoint(context, endpoints); // Or fallback to the default behavior to randomly select one from primary endpoints, or fallback to secondary when no primary ones are online
+ }
+}
+```
+
+## Dynamic Scale ServiceEndpoints
+
+From SDK version 1.5.0, dynamic scaling of ServiceEndpoints is enabled for the ASP.NET Core version first, so you don't have to restart the app server when you need to add or remove a ServiceEndpoint. Because ASP.NET Core supports default configuration like `appsettings.json` with `reloadOnChange: true`, you don't need to change any code; it's supported natively. If you'd like to add customized configuration and work with hot reload, refer to [the ASP.NET Core configuration documentation](https://docs.microsoft.com/aspnet/core/fundamentals/configuration/?view=aspnetcore-3.1).
+
+> [!NOTE]
+>
+> Because the time to set up connections between server/service and client/service can differ, there is a staging period that waits for server connections to be ready before opening the new ServiceEndpoint to clients, so that no messages are lost during the scale process. Usually it takes seconds to complete, and you'll see a log like `Succeed in adding endpoint: '{endpoint}'` that indicates the process is complete. In some unexpected cases, such as a cross-region network issue or configuration inconsistencies across app servers, the staging period may not finish correctly. Because little can be done in these cases, the scale is promoted as it is. It's suggested to restart the app server if you find the scaling process isn't working correctly.
+>
+> The default timeout period for the scale is 5 minutes, and it can be customized by changing the value of `ServiceOptions.ServiceScaleTimeout`. If you have a lot of app servers, it's suggested to extend the value a little.
++ ## Configuration in cross-region scenarios The `ServiceEndpoint` object has an `EndpointType` property with value `primary` or `secondary`.
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Title: Use Java to create a chat room with Azure Functions and SignalR Service
description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using Java. Previously updated : 06/09/2021 Last updated : 04/04/2022 ms.devlang: java
# Quickstart: Use Java to create an App showing GitHub star count with Azure Functions and SignalR Service
-Azure SignalR Service lets you easily add real-time functionality to your application and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with Java to broadcast messages to clients.
+In this article, you'll use Azure SignalR Service, Azure Functions, and Java to build a serverless application to broadcast messages to clients.
> [!NOTE]
-> You can get all codes mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/java)
+> The code in this article is available on [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/java).
## Prerequisites - A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- An Azure account with an active subscription. If you don't already have an account, [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing). Used to run Azure Function apps locally.-
- > [!NOTE]
- > The required SignalR Service bindings in Java are only supported in Azure Function Core Tools version 2.4.419 (host version 2.0.12332) or above.
-
- > [!NOTE]
- > To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. However, no knowledge of .NET is required to build JavaScript Azure Function apps.
+
+ - The required SignalR Service bindings in Java are only supported in Azure Function Core Tools version 2.4.419 (host version 2.0.12332) or above.
+ - To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. However, no knowledge of .NET is required to build Java Azure Function apps.
- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 11-- [Apache Maven](https://maven.apache.org), version 3.0 or above-
-> [!NOTE]
-> This quickstart can be run on macOS, Windows, or Linux.
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
+- [Apache Maven](https://maven.apache.org), version 3.0 or above.
-## Log in to Azure
+This quickstart can be run on macOS, Windows, or Linux.
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
+## Create an Azure SignalR Service instance
[!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
+## Configure and run the Azure Function app
+Make sure you have Azure Function Core Tools, Java (version 11 in the sample), and Maven installed.
-## Configure and run the Azure Function app
+1. Initialize the project using Maven:
-1. Make sure you have Azure Function Core Tools, Java (version 11 in the sample) and maven installed.
-
```bash mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=11 ```
- Maven asks you for values needed to finish generating the project. You can provide the following values.
+ Maven asks you for values needed to finish generating the project. Provide the following values:
| Prompt | Value | Description | | | -- | -- | | **groupId** | `com.signalr` | A value that uniquely identifies your project across all projects, following the [package naming rules](https://docs.oracle.com/javase/specs/jls/se6/html/packages.html#7.7) for Java. | | **artifactId** | `java` | A value that is the name of the jar, without a version number. | | **version** | `1.0-SNAPSHOT` | Choose the default value. |
- | **package** | `com.signalr` | A value that is the Java package for the generated function code. Use the default. |
+ | **package** | `com.signalr` | A value that is the Java package for the generated function code. Use the default. |
-2. After you initialize a project. Go to the folder `src/main/java/com/signalr` and copy the following codes to `Function.java`
+1. Go to the folder `src/main/java/com/signalr` and copy the following code to *Function.java*:
```java package com.signalr;
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} ```
-3. Some dependencies need to be added. So open the `pom.xml` and add some dependency that used in codes.
+1. Some dependencies need to be added. Open *pom.xml* and add the following dependencies used in the code:
```xml <dependency>
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
</dependency> ```
-4. The client interface of this sample is a web page. Considered we read HTML content from `content/https://docsupdatetracker.net/index.html` in `index` function, create a new file `content/https://docsupdatetracker.net/index.html` in `resources` directory. Your directory tree should look like this.
-
- ```
- FunctionsProject
- | - src
- | | - main
- | | | - java
- | | | | - com
- | | | | | - signalr
- | | | | | | - Function.java
- | | | - resources
- | | | | - content
- | | | | | - https://docsupdatetracker.net/index.html
- | - pom.xml
- | - host.json
- | - local.settings.json
+1. The client interface for this sample is a web page. The `index` function reads HTML content from *content/index.html*, so create a new file *content/index.html* in the `resources` directory. Your directory tree should look like this:
+
+    ```
+    FunctionsProject
+ | - src
+ | | - main
+ | | | - java
+ | | | | - com
+ | | | | | - signalr
+ | | | | | | - Function.java
+ | | | - resources
+ | | | | - content
+    | | | | | - index.html
+ | - pom.xml
+ | - host.json
+ | - local.settings.json
```
- Open the `https://docsupdatetracker.net/index.html` and copy the following content.
+1. Open *index.html* and copy the following content:
```html <html> <body>
- <h1>Azure SignalR Serverless Sample</h1>
- <div id="messages"></div>
- <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
- <script>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
let messages = document.querySelector('#messages'); const apiBaseUrl = window.location.origin; const connection = new signalR.HubConnectionBuilder() .withUrl(apiBaseUrl + '/api') .configureLogging(signalR.LogLevel.Information) .build();
- connection.on('newMessage', (message) => {
+ connection.on('newMessage', (message) => {
document.getElementById("messages").innerHTML = message;
- });
+ });
- connection.start()
+ connection.start()
.catch(console.error);
- </script>
+ </script>
</body> </html> ```
-5. It's almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
+1. You're almost done now. The last step is to set the SignalR Service connection string in the Azure Function app settings.
- 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+ 1. Search for the Azure SignalR instance you deployed earlier using the **Search** box in Azure portal. Select the instance to open it.
![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png) 1. Select **Keys** to view the connection strings for the SignalR Service instance.
-
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
- 1. Copy the primary connection string. And execute the command below.
-
+ 1. Copy the primary connection string, and then run the following command:
+ ```bash func settings add AzureSignalRConnectionString "<signalr-connection-string>" # Also we need to set AzureWebJobsStorage as Azure Function's requirement func settings add AzureWebJobsStorage "UseDevelopmentStorage=true" ```
-
-6. Run the Azure Function in local:
+
+1. Run the Azure Function in local:
```bash mvn clean package mvn azure-functions:run ```
- After Azure Function running locally. Use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. And if you star or unstar in the GitHub, you will get a star count refreshing every few seconds.
+   After the Azure Function is running locally, go to `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repo in GitHub, the star count refreshes every few seconds.
> [!NOTE] > SignalR binding needs Azure Storage, but you can use local storage emulator when the Function is running locally. > If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
- [!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Next steps
-In this quickstart, you built and ran a real-time serverless application in local. Learn more how to use SignalR Service bindings for Azure Functions.
-Next, learn more about how to bi-directional communicating between clients and Azure Function with SignalR Service.
+In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
> [!div class="nextstepaction"] > [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Title: Use JavaScript to create a chat room with Azure Functions and SignalR Ser
description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using JavaScript. Previously updated : 06/09/2021 Last updated : 04/04/2022 ms.devlang: javascript
# Quickstart: Use JavaScript to create an App showing GitHub star count with Azure Functions and SignalR Service
-Azure SignalR Service lets you easily add real-time functionality to your application and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with JavaScript to broadcast messages to clients.
+ In this article, you'll use Azure SignalR Service, Azure Functions, and JavaScript to build a serverless application to broadcast messages to clients.
> [!NOTE]
-> You can get all codes mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/javascript)
+> You can get all code mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/javascript).
## Prerequisites -- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing), version 2 or above. Used to run Azure Function apps locally. - [Node.js](https://nodejs.org/en/download/), version 10.x
- > [!NOTE]
- > The examples should work with other versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) for more information.
+The examples should work with other versions of Node.js. For more information, see the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-> [!NOTE]
-> This quickstart can be run on macOS, Windows, or Linux.
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
-
-## Log in to Azure
-
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
+This quickstart can be run on macOS, Windows, or Linux.
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
+## Create an Azure SignalR Service instance
[!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
-- ## Setup and run the Azure Function locally
-1. Make sure you have Azure Function Core Tools installed. And create an empty directory and navigate to the directory with command line.
+Make sure you have Azure Functions Core Tools installed.
+
+1. Using the command line, create an empty directory and then change to it. Initialize a new project:
```bash # Initialize a function project func init --worker-runtime javascript ```
-2. After you initialize a project, you need to create functions. In this sample, we need to create 3 functions.
+2. After you initialize a project, you need to create functions. In this sample, we'll create three functions:
- 1. Run the following command to create a `index` function, which will host a web page for client.
+ 1. Run the following command to create a `index` function, which will host a web page for clients.
```bash func new -n index -t HttpTrigger ```
- Open `index/function.json` and copy the following json codes:
+
+ Open *index/function.json* and copy the following json code:
```json {
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} ```
- Open `index/index.js` and copy the following codes.
+ Open *index/index.js* and copy the following code:
```javascript var fs = require('fs').promises
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} } ```
-
- 2. Create a `negotiate` function for clients to get access token.
-
+
+ 2. Create a `negotiate` function for clients to get an access token.
+ ```bash func new -n negotiate -t SignalRNegotiateHTTPTrigger ```
-
- Open `negotiate/function.json` and copy the following json codes:
-
+
+ Open *negotiate/function.json* and copy the following json code:
+ ```json { "disabled": false,
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
] } ```
-
- 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use time trigger to broadcast messages periodically.
-
+
+ 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use a time trigger to broadcast messages periodically.
+
```bash func new -n broadcast -t TimerTrigger ```
-
- Open `broadcast/function.json` and copy the following codes.
-
+
+ Open *broadcast/function.json* and copy the following code:
+
```json { "bindings": [
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
] } ```
-
- Open `broadcast/index.js` and copy the following codes.
-
+
+ Open *broadcast/index.js* and copy the following code:
+
```javascript var https = require('https');
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} ```
-3. The client interface of this sample is a web page. Considered we read HTML content from `content/https://docsupdatetracker.net/index.html` in `index` function, create a new file `https://docsupdatetracker.net/index.html` in `content` directory under your project root folder. And copy the following content.
+3. The client interface of this sample is a web page. The `index` function reads HTML content from *content/index.html*, so create a new file named *index.html* in the `content` directory under your project root folder. Copy the following code:
```html <html>
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
</html> ```
-4. It's almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
+4. You're almost done now. The last step is to set the SignalR Service connection string in the Azure Function app settings.
- 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+ 1. In the Azure portal, find the SignalR instance you deployed earlier by typing its name in the **Search** box. Select the instance to open it.
![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png) 1. Select **Keys** to view the connection strings for the SignalR Service instance.
-
+
![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png) 1. Copy the primary connection string. And execute the command below.
-
+
```bash func settings add AzureSignalRConnectionString "<signalr-connection-string>" ```
-
-5. Run the Azure Function in local:
+
+5. Run the Azure Function locally:
```bash func start ```
- After Azure Function running locally. Use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. And if you star or unstar in the GitHub, you will get a star count refreshing every few seconds.
+   After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repo in GitHub, you'll see the star count refresh every few seconds.
> [!NOTE]
- > SignalR binding needs Azure Storage, but you can use local storage emulator when the Function is running locally.
- > If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
+   > SignalR binding needs Azure Storage, but you can use the local storage emulator when the function is running locally.
+   > If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp) [!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
- ## Next steps
-In this quickstart, you built and ran a real-time serverless application in local. Learn more how to use SignalR Service bindings for Azure Functions.
-Next, learn more about how to bi-directional communicating between clients and Azure Function with SignalR Service.
+In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
> [!div class="nextstepaction"] > [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
Previously updated : 04/23/2020 Last updated : 04/06/2022 # Connect to Azure SQL Database with Azure AD Multi-Factor Authentication [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)] This article provides a C# program that connects to Azure SQL Database. The program uses interactive mode authentication, which supports [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
-For more information about Multi-Factor Authentication support for SQL tools, see [Azure Active Directory support in SQL Server Data Tools (SSDT)](/sql/ssdt/azure-active-directory).
+For more information about Multi-Factor Authentication support for SQL tools, see [Using multi-factor Azure Active Directory authentication](/azure/azure-sql/database/authentication-mfa-ssms-overview).
## Multi-Factor Authentication for Azure SQL Database
-Starting in .NET Framework version 4.7.2, the enum [`SqlAuthenticationMethod`](/dotnet/api/system.data.sqlclient.sqlauthenticationmethod) has a new value: `ActiveDirectoryInteractive`. In a client C# program, the enum value directs the system to use the Azure Active Directory (Azure AD) interactive mode that supports Multi-Factor Authentication to connect to Azure SQL Database. The user who runs the program sees the following dialog boxes:
+`Active Directory Interactive` authentication supports multi-factor authentication using [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) to connect to Azure SQL data sources. In a client C# program, this authentication mode directs the system to use the Azure Active Directory (Azure AD) interactive mode that supports Multi-Factor Authentication to connect to Azure SQL Database. The user who runs the program sees the following dialog boxes:
* A dialog box that displays an Azure AD user name and asks for the user's password.
- If the user's domain is federated with Azure AD, this dialog box doesn't appear, because no password is needed.
+ If the user's domain is federated with Azure AD, the dialog box doesn't appear, because no password is needed.
- If the Azure AD policy imposes Multi-Factor Authentication on the user, the next two dialog boxes are displayed.
+   If the Azure AD policy imposes Multi-Factor Authentication on the user, a dialog box to sign in to your account is displayed.
* The first time a user goes through Multi-Factor Authentication, the system displays a dialog box that asks for a mobile phone number to send text messages to. Each message provides the *verification code* that the user must enter in the next dialog box.
For screenshots of these dialog boxes, see [Configure multi-factor authenticatio
> > You can also search directly with the [optional ?term=&lt;search value&gt; parameter](/dotnet/api/?term=SqlAuthenticationMethod).
-## Configure your C# application in the Azure portal
+## Prerequisite
Before you begin, you should have a [logical SQL server](logical-servers.md) created and available.
-### Register your app and set permissions
-
-To use Azure AD authentication, your C# program has to register as an Azure AD application. To register an app, you need to be either an Azure AD admin or a user assigned the Azure AD *Application Developer* role. For more information about assigning roles, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
-
-Completing an app registration generates and displays an **application ID**. Your program has to include this ID to connect.
-
-To register and set necessary permissions for your application:
-
-1. In the Azure portal, select **Azure Active Directory** > **App registrations** > **New registration**.
-
- ![App registration](./media/active-directory-interactive-connect-azure-sql-db/image1.png)
-
- After the app registration is created, the **application ID** value is generated and displayed.
-
- ![App ID displayed](./media/active-directory-interactive-connect-azure-sql-db/image2.png)
-
-2. Select **API permissions** > **Add a permission**.
-
- ![Permissions settings for registered app](./media/active-directory-interactive-connect-azure-sql-db/sshot-registered-app-settings-required-permissions-add-api-access-c32.png)
-
-3. Select **APIs my organization uses** > type **Azure SQL Database** into the search > and select **Azure SQL Database**.
-
- ![Add access to API for Azure SQL Database](./media/active-directory-interactive-connect-azure-sql-db/sshot-registered-app-settings-required-permissions-add-api-access-Azure-sql-db-d11.png)
-
-4. Select **Delegated permissions** > **user_impersonation** > **Add permissions**.
-
- ![Delegate permissions to API for Azure SQL Database](./media/active-directory-interactive-connect-azure-sql-db/sshot-add-api-access-azure-sql-db-delegated-permissions-checkbox-e14.png)
- ### Set an Azure AD admin for your server
-For your C# program to run, a [logical SQL server](logical-servers.md) admin needs to assign an Azure AD admin for your server.
+For the C# example to run, a [logical SQL server](logical-servers.md) admin needs to assign an Azure AD admin for your server.
On the **SQL server** page, select **Active Directory admin** > **Set admin**. For more information about Azure AD admins and users for Azure SQL Database, see the screenshots in [Configure and manage Azure Active Directory authentication with SQL Database](authentication-aad-configure.md#provision-azure-ad-admin-sql-database).
-### Add a non-admin user to a specific database (optional)
+## Microsoft.Data.SqlClient
-An Azure AD admin for a [logical SQL server](logical-servers.md) can run the C# example program. An Azure AD user can run the program if they are in the database. An Azure AD SQL admin or an Azure AD user who exists already in the database and has the `ALTER ANY USER` permission on the database can add a user.
-
-You can add a user to the database with the SQL [`Create User`](/sql/t-sql/statements/create-user-transact-sql) command. An example is `CREATE USER [<username>] FROM EXTERNAL PROVIDER`.
-
-For more information, see [Use Azure Active Directory Authentication for authentication with SQL Database, Managed Instance, or Azure Synapse Analytics](authentication-aad-overview.md).
-
-## New authentication enum value
-
-The C# example relies on the [`System.Data.SqlClient`](/dotnet/api/system.data.sqlclient) namespace. Of special interest for Multi-Factor Authentication is the enum `SqlAuthenticationMethod`, which has the following values:
-
-* `SqlAuthenticationMethod.ActiveDirectoryInteractive`
-
- Use this value with an Azure AD user name to implement Multi-Factor Authentication. This value is the focus of the present article. It produces an interactive experience by displaying dialog boxes for the user password, and then for Multi-Factor Authentication validation if Multi-Factor Authentication is imposed on this user. This value is available starting with .NET Framework version 4.7.2.
-
-* `SqlAuthenticationMethod.ActiveDirectoryIntegrated`
-
- Use this value for a *federated* account. For a federated account, the user name is known to the Windows domain. This authentication method doesn't support Multi-Factor Authentication.
-
-* `SqlAuthenticationMethod.ActiveDirectoryPassword`
-
- Use this value for authentication that requires an Azure AD user name and password. Azure SQL Database does the authentication. This method doesn't support Multi-Factor Authentication.
+The C# example relies on the [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) namespace. For more information, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
> [!NOTE]
-> If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
-
-## Set C# parameter values from the Azure portal
-
-For the C# program to successfully run, you need to assign proper values to static fields. Shown here are fields with example values. Also shown are the Azure portal locations where you can obtain the needed values.
-
-| Static field name | Example value | Where in Azure portal |
-| :- | : | :-- |
-| Az_SQLDB_svrName | "my-sqldb-svr.database.windows.net" | **SQL servers** > **Filter by name** |
-| AzureAD_UserID | "auser\@abc.onmicrosoft.com" | **Azure Active Directory** > **User** > **New guest user** |
-| Initial_DatabaseName | "myDatabase" | **SQL servers** > **SQL databases** |
-| ClientApplicationID | "a94f9c62-97fe-4d19-b06d-111111111111" | **Azure Active Directory** > **App registrations** > **Search by name** > **Application ID** |
-| RedirectUri | new Uri("https://mywebserver.com/") | **Azure Active Directory** > **App registrations** > **Search by name** > *[Your-App-registration]* > **Settings** > **RedirectURIs**<br /><br />For this article, any valid value is fine for RedirectUri, because it isn't used here. |
+> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
## Verify with SQL Server Management Studio
-Before you run the C# program, it's a good idea to check that your setup and configurations are correct in SQL Server Management Studio (SSMS). Any C# program failure can then be narrowed to source code.
+Before you run the C# example, it's a good idea to check that your setup and configurations are correct in [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms). Any C# program failure can then be narrowed to source code.
### Verify server-level firewall IP addresses
-Run SSMS from the same computer, in the same building, where you plan to run the C# program. For this test, any **Authentication** mode is OK. If there's any indication that the server isn't accepting your IP address, see [server-level and database-level firewall rules](firewall-configure.md) for help.
+Run SSMS from the same computer, in the same building, where you plan to run the C# example. For this test, any **Authentication** mode is OK. If there's any indication that the server isn't accepting your IP address, see [server-level and database-level firewall rules](firewall-configure.md) for help.
### Verify Azure Active Directory Multi-Factor Authentication
Run SSMS again, this time with **Authentication** set to **Azure Active Director
For more information, see [Configure Multi-Factor Authentication for SSMS and Azure AD](authentication-mfa-ssms-configure.md). > [!NOTE]
-> If you are a guest user in the database, you also need to provide the Azure AD domain name for the database: Select **Options** > **AD domain name or tenant ID**. To find the domain name in the Azure portal, select **Azure Active Directory** > **Custom domain names**. In the C# example program, providing a domain name is not necessary.
+> If you are a guest user in the database, you also need to provide the Azure AD domain name for the database: Select **Options** > **AD domain name or tenant ID**. If you are running SSMS 18.x or later, the AD domain name or tenant ID is no longer needed for guest users because 18.x or later automatically recognizes it.
+>
+>To find the domain name in the Azure portal, select **Azure Active Directory** > **Custom domain names**. In the C# example program, providing a domain name is not necessary.
## C# code example > [!NOTE] > If you are using .NET Core, you will want to use the [Microsoft.Data.SqlClient](/dotnet/api/microsoft.data.sqlclient) namespace. For more information, see the following [blog](https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/).
-The example C# program relies on the [*Microsoft.IdentityModel.Clients.ActiveDirectory*](/dotnet/api/microsoft.identitymodel.clients.activedirectory) DLL assembly.
-
-To install this package, in Visual Studio, select **Project** > **Manage NuGet Packages**. Search for and install **Microsoft.IdentityModel.Clients.ActiveDirectory**.
- This is an example of C# source code. ```csharp using System;
+using Microsoft.Data.SqlClient;
-// Reference to Azure AD authentication assembly
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-
-using DA = System.Data;
-using SC = System.Data.SqlClient;
-using AD = Microsoft.IdentityModel.Clients.ActiveDirectory;
-using TX = System.Text;
-using TT = System.Threading.Tasks;
-
-namespace ADInteractive5
+public class Program
{
- class Program
+ public static void Main(string[] args)
{
- // ASSIGN YOUR VALUES TO THESE STATIC FIELDS !!
- static public string Az_SQLDB_svrName = "<Your server>";
- static public string AzureAD_UserID = "<Your User ID>";
- static public string Initial_DatabaseName = "<Your Database>";
- // Some scenarios do not need values for the following two fields:
- static public readonly string ClientApplicationID = "<Your App ID>";
- static public readonly Uri RedirectUri = new Uri("<Your URI>");
-
- public static void Main(string[] args)
- {
- var provider = new ActiveDirectoryAuthProvider();
+ // Use your own server, database, and user ID.
+      // Connection string - the user ID is not provided and is requested interactively.
+ string ConnectionString = @"Server=<your server>.database.windows.net; Authentication=Active Directory Interactive; Database=<your database>";
- SC.SqlAuthenticationProvider.SetProvider(
- SC.SqlAuthenticationMethod.ActiveDirectoryInteractive,
- //SC.SqlAuthenticationMethod.ActiveDirectoryIntegrated, // Alternatives.
- //SC.SqlAuthenticationMethod.ActiveDirectoryPassword,
- provider);
- Program.Connection();
- }
+ using (SqlConnection conn = new SqlConnection(ConnectionString))
- public static void Connection()
{
- SC.SqlConnectionStringBuilder builder = new SC.SqlConnectionStringBuilder();
-
- // Program._ static values that you set earlier.
- builder["Data Source"] = Program.Az_SQLDB_svrName;
- builder.UserID = Program.AzureAD_UserID;
- builder["Initial Catalog"] = Program.Initial_DatabaseName;
-
- // This "Password" is not used with .ActiveDirectoryInteractive.
- //builder["Password"] = "<YOUR PASSWORD HERE>";
-
- builder["Connect Timeout"] = 15;
- builder["TrustServerCertificate"] = true;
- builder.Pooling = false;
-
- // Assigned enum value must match the enum given to .SetProvider().
- builder.Authentication = SC.SqlAuthenticationMethod.ActiveDirectoryInteractive;
- SC.SqlConnection sqlConnection = new SC.SqlConnection(builder.ConnectionString);
-
- SC.SqlCommand cmd = new SC.SqlCommand(
- "SELECT '******** MY QUERY RAN SUCCESSFULLY!! ********';",
- sqlConnection);
-
- try
+ conn.Open();
+      Console.WriteLine("Connection succeeded.");
+ using (var cmd = new SqlCommand("SELECT @@Version", conn))
{
- sqlConnection.Open();
- if (sqlConnection.State == DA.ConnectionState.Open)
- {
- var rdr = cmd.ExecuteReader();
- var msg = new TX.StringBuilder();
- while (rdr.Read())
- {
- msg.AppendLine(rdr.GetString(0));
- }
- Console.WriteLine(msg.ToString());
- Console.WriteLine(":Success");
- }
- else
- {
- Console.WriteLine(":Failed");
- }
- sqlConnection.Close();
+ Console.WriteLine("select @@version");
+ var result = cmd.ExecuteScalar();
+ Console.WriteLine(result.ToString());
}
- catch (Exception ex)
- {
- Console.ForegroundColor = ConsoleColor.Red;
- Console.WriteLine("Connection failed with the following exception...");
- Console.WriteLine(ex.ToString());
- Console.ResetColor();
- }
- }
- } // EOClass Program.
-
- /// <summary>
- /// SqlAuthenticationProvider - Is a public class that defines 3 different Azure AD
- /// authentication methods. The methods are supported in the new .NET 4.7.2.
- /// .
- /// 1. Interactive, 2. Integrated, 3. Password
- /// .
- /// All 3 authentication methods are based on the Azure
- /// Active Directory Authentication Library (ADAL) managed library.
- /// </summary>
- public class ActiveDirectoryAuthProvider : SC.SqlAuthenticationProvider
- {
- // Program._ more static values that you set!
- private readonly string _clientId = Program.ClientApplicationID;
- private readonly Uri _redirectUri = Program.RedirectUri;
- public override async TT.Task<SC.SqlAuthenticationToken>
- AcquireTokenAsync(SC.SqlAuthenticationParameters parameters)
- {
- AD.AuthenticationContext authContext =
- new AD.AuthenticationContext(parameters.Authority);
- authContext.CorrelationId = parameters.ConnectionId;
- AD.AuthenticationResult result;
-
- switch (parameters.AuthenticationMethod)
- {
- case SC.SqlAuthenticationMethod.ActiveDirectoryInteractive:
- Console.WriteLine("In method 'AcquireTokenAsync', case_0 == '.ActiveDirectoryInteractive'.");
-
- result = await authContext.AcquireTokenAsync(
- parameters.Resource, // "https://database.windows.net/"
- _clientId,
- _redirectUri,
- new AD.PlatformParameters(AD.PromptBehavior.Auto),
- new AD.UserIdentifier(
- parameters.UserId,
- AD.UserIdentifierType.RequiredDisplayableId));
- break;
-
- case SC.SqlAuthenticationMethod.ActiveDirectoryIntegrated:
- Console.WriteLine("In method 'AcquireTokenAsync', case_1 == '.ActiveDirectoryIntegrated'.");
-
- result = await authContext.AcquireTokenAsync(
- parameters.Resource,
- _clientId,
- new AD.UserCredential());
- break;
-
- case SC.SqlAuthenticationMethod.ActiveDirectoryPassword:
- Console.WriteLine("In method 'AcquireTokenAsync', case_2 == '.ActiveDirectoryPassword'.");
-
- result = await authContext.AcquireTokenAsync(
- parameters.Resource,
- _clientId,
- new AD.UserPasswordCredential(
- parameters.UserId,
- parameters.Password));
- break;
-
- default: throw new InvalidOperationException();
- }
- return new SC.SqlAuthenticationToken(result.AccessToken, result.ExpiresOn);
}
+ Console.ReadKey();
- public override bool IsSupported(SC.SqlAuthenticationMethod authenticationMethod)
- {
- return authenticationMethod == SC.SqlAuthenticationMethod.ActiveDirectoryIntegrated
- || authenticationMethod == SC.SqlAuthenticationMethod.ActiveDirectoryInteractive
- || authenticationMethod == SC.SqlAuthenticationMethod.ActiveDirectoryPassword;
- }
- } // EOClass ActiveDirectoryAuthProvider.
-} // EONamespace. End of entire program source code.
+ }
+}
```
namespace ADInteractive5
This is an example of the C# test output. ```C#
-[C:\Test\VSProj\ADInteractive5\ADInteractive5\bin\Debug\]
->> ADInteractive5.exe
-In method 'AcquireTokenAsync', case_0 == '.ActiveDirectoryInteractive'.
-******** MY QUERY RAN SUCCESSFULLY!! ********
-
-:Success
-
-[C:\Test\VSProj\ADInteractive5\ADInteractive5\bin\Debug\]
->>
+Connection succeeded.
+select @@version
+Microsoft SQL Azure (RTM) - 12.0.2000.8
+ ...
``` ## Next steps
-> [!IMPORTANT]
-> The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is for the Az.Sql module. For these cmdlets, see [AzureRM.Sql](/powershell/module/AzureRM.Sql/). The arguments for the commands in the Az module and in the AzureRm modules are substantially identical.
-
-& [Get-AzSqlServerActiveDirectoryAdministrator](/powershell/module/az.sql/get-azsqlserveractivedirectoryadministrator)
+- [Azure Active Directory server principals](authentication-azure-ad-logins.md)
+- [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md)
+- [Using multi-factor Azure Active Directory authentication](authentication-mfa-ssms-overview.md)
azure-sql Always Encrypted Enclaves Enable Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-enable-sgx.md
Title: "Enable Intel SGX for Always Encrypted"
-description: "Learn how to enable Intel SGX for Always Encrypted with secure enclaves in Azure SQL Database by selecting an SGX-enabled hardware generation."
+description: "Learn how to enable Intel SGX for Always Encrypted with secure enclaves in Azure SQL Database by selecting SGX-enabled hardware."
ms.reviwer: vanto Previously updated : 07/14/2021 Last updated : 04/06/2022 # Enable Intel SGX for Always Encrypted for your Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware generation.
+[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware.
-Configuring the DC-series hardware generation to enable Intel SGX enclaves is the responsibility of the Azure SQL Database administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
+Configuring the DC-series hardware to enable Intel SGX enclaves is the responsibility of the Azure SQL Database administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
> [!NOTE]
-> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
+> Intel SGX is not available in hardware configurations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
> [!IMPORTANT]
-> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
+> Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
-For detailed instructions for how to configure a new or existing database to use a specific hardware generation, see [Selecting a hardware generation](service-tiers-sql-database-vcore.md#selecting-a-hardware-generation).
+For detailed instructions for how to configure a new or existing database to use a specific hardware configuration, see [Hardware configuration](service-tiers-sql-database-vcore.md#hardware-configuration).
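
For illustration, a hedged Transact-SQL sketch of moving an existing database to a DC-series compute size is shown below. The `GP_DC_2` service objective name (General Purpose, DC-series, 2 vCores) is an assumption; confirm the DC-series objectives available in your region and tier in the linked article.

```sql
-- Sketch: move an existing database to a DC-series (Intel SGX capable) compute size.
-- 'GP_DC_2' is an assumed service objective name; adjust it to a DC-series objective
-- that is available in your region and tier.
ALTER DATABASE [your_database]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_DC_2');
```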
## Next steps
azure-sql Always Encrypted Enclaves Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md
ms.reviwer: vanto Previously updated : 07/14/2021 Last updated : 04/06/2022 # Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
To continue to interact with the PowerShell Gallery, run the following command b
## Step 1: Create and configure a server and a DC-series database
+In this step, you will create a new Azure SQL Database logical server and a new database using DC-series hardware, required for Always Encrypted with secure enclaves. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
+In this step, you will create a new Azure SQL Database logical server and a new database using DC-series hardware, required for Always Encrypted with secure enclaves. For more information see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
# [Portal](#tab/azure-portal)
In this step, you will create a new Azure SQL Database logical server and a new
- **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field. - **Location**: Select a location from the dropdown list. > [!IMPORTANT]
- > You need to select a location (an Azure region) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
+ > You need to select a location (an Azure region) that supports both the DC-series hardware and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
Select **OK**. 1. Leave **Want to use SQL elastic pool** set to **No**.
In this step, you will create a new Azure SQL Database logical server and a new
1. Create a new resource group. > [!IMPORTANT]
- > You need to create your resource group in a region (location) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
+ > You need to create your resource group in a region (location) that supports both the DC-series hardware and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
```powershell $resourceGroupName = "<your new resource group name>"
In this step, you'll create and configure an attestation provider in Microsoft A
$attestationProviderName = "<your attestation provider name>" New-AzAttestation -Name $attestationProviderName -ResourceGroupName $resourceGroupName -Location $location ```
-1. Assign yourself to the Attestation Contributor role for the attestaton provider, to ensure you have permissions to configure an attestation policy.
+1. Assign yourself to the Attestation Contributor role for the attestation provider, to ensure you have permissions to configure an attestation policy.
```powershell New-AzRoleAssignment -SignInName $context.Account.Id `
azure-sql Always Encrypted Enclaves Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-enclaves-plan.md
ms.reviwer: vanto Previously updated : 07/14/2021 Last updated : 04/06/2022 # Plan for Intel SGX enclaves and attestation in Azure SQL Database
Last updated 07/14/2021
## Plan for Intel SGX in Azure SQL Database
-Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-sql-database-vcore.md) and the [DC-series](service-tiers-sql-database-vcore.md?#dc-series) hardware generation. Therefore, to ensure you can use Always Encrypted with secure enclaves in your database, you need to either select the DC-series hardware generation when you create the database, or you can update your existing database to use the DC-series hardware generation.
+Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-sql-database-vcore.md) and [DC-series](service-tiers-sql-database-vcore.md?#dc-series) hardware. Therefore, to ensure you can use Always Encrypted with secure enclaves in your database, you need to either select the DC-series hardware when you create the database, or you can update your existing database to use the DC-series hardware.
> [!NOTE]
-> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
+> Intel SGX is not available in hardware other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
> [!IMPORTANT]
-> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
+> Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
## Plan for attestation in Azure SQL Database
-[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using the DC-series hardware generation.
+[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using DC-series hardware.
To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with the Microsoft-provided attestation policy. See [Configure attestation for Always Encrypted using Azure Attestation](always-encrypted-enclaves-configure-attestation.md)
To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database,
Configuring your environment to support Intel SGX enclaves and attestation for Always Encrypted in Azure SQL Database involves setting up components of different types: Microsoft Azure Attestation, Azure SQL Database, and applications that trigger enclave attestation. Configuring components of each type is performed by users assuming one of the below distinct roles: - Attestation administrator - creates an attestation provider in Microsoft Azure Attestation, authors the attestation policy, grants Azure SQL logical server access to the attestation provider, and shares the attestation URL that points to the policy to application administrators.-- Azure SQL Database administrator - enables SGX enclaves in databases by selecting the DC-series hardware generation, and provides the attestation administrator with the identity of the Azure SQL logical server that needs to access the attestation provider.
+- Azure SQL Database administrator - enables SGX enclaves in databases by selecting the DC-series hardware, and provides the attestation administrator with the identity of the Azure SQL logical server that needs to access the attestation provider.
- Application administrator - configures applications with the attestation URL obtained from the attestation administrator. In production environments (handling real sensitive data), it is important your organization adheres to role separation when configuring attestation, where each distinct role is assumed by different people. In particular, if the goal of deploying Always Encrypted in your organization is to reduce the attack surface area by ensuring Azure SQL Database administrators cannot access sensitive data, Azure SQL Database administrators should not control attestation policies.
azure-sql Analyze Prevent Deadlocks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/analyze-prevent-deadlocks.md
+
+ Title: Analyze and prevent deadlocks
+
+description: Learn how to analyze deadlocks and prevent them from reoccurring in Azure SQL Database
+++++++ Last updated : 4/8/2022++
+# Analyze and prevent deadlocks in Azure SQL Database
+
+This article teaches you how to identify deadlocks in Azure SQL Database, use deadlock graphs and Query Store to identify the queries in the deadlock, and plan and test changes to prevent deadlocks from reoccurring.
+
+This article focuses on identifying and analyzing deadlocks due to lock contention. Learn more about other types of deadlocks in [resources that can deadlock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlock_resources).
+
+## How deadlocks occur in Azure SQL Database
+
+Each new database in Azure SQL Database has the [read committed snapshot](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1) (RCSI) database setting enabled by default. [Blocking](understand-resolve-blocking.md) between sessions reading data and sessions writing data is minimized under RCSI, which uses row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure SQL Database because:
+
+- Queries that modify data may block one another.
+- Queries may run under isolation levels that increase blocking. Isolation levels may be specified via client library methods, [query hints](/sql/t-sql/queries/hints-transact-sql-query), or [SET statements](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql) in Transact-SQL.
+- [RCSI may be disabled](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1), causing the database to use shared (S) locks to protect SELECT statements run under the read committed isolation level. This may increase blocking and deadlocks.
+
+### An example deadlock
+
+A deadlock occurs when two or more tasks permanently block one another because each task has a lock on a resource the other task is trying to lock. A deadlock is also called a cyclic dependency: in the case of a two-task deadlock, transaction A has a dependency on transaction B, and transaction B closes the circle by having a dependency on transaction A.
+
+For example:
+
+1. **Session A** begins an explicit transaction and runs an update statement that acquires an update (U) lock on one row on table `SalesLT.Product` that is [converted to an exclusive (X) lock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-modifying-data).
+1. **Session B** runs an update statement that modifies the `SalesLT.ProductDescription` table. The update statement joins to the `SalesLT.Product` table to find the correct rows to update.
+ - **Session B** acquires an update (U) lock on 72 rows on the `SalesLT.ProductDescription` table.
+ - **Session B** needs a shared lock on rows on the table `SalesLT.Product`, including the row that is locked by **Session A**. **Session B** is blocked on `SalesLT.Product`.
+1. **Session A** continues its transaction, and now runs an update against the `SalesLT.ProductDescription` table. **Session A** is blocked by **Session B** on `SalesLT.ProductDescription`.
++
+All transactions in a deadlock will wait indefinitely unless one of the participating transactions is rolled back, for example, because its session was terminated.
+
+The database engine deadlock monitor periodically checks for tasks that are in a deadlock. If the deadlock monitor detects a cyclic dependency, it chooses one of the tasks as a victim and terminates its transaction with error 1205, "Transaction (Process ID *N*) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." Breaking the deadlock in this way allows the other task or tasks in the deadlock to complete their transactions.
+
+>[!NOTE]
+> Learn more about the criteria for choosing a deadlock victim in the [Deadlock process list](#deadlock-process-list) section of this article.
++
+The application with the transaction chosen as the deadlock victim should retry the transaction, which usually completes after the other transaction or transactions involved in the deadlock have finished.
+
+It is a best practice to introduce a short, randomized delay before retry to avoid encountering the same deadlock again. Learn more about how to design [retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
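
As a minimal sketch of that guidance, the following Transact-SQL retries the statement used by **Session A** later in this article when error 1205 is raised. The retry count and delay are illustrative; production code should randomize the delay and tune the count for its workload.

```sql
-- Sketch of retry-on-deadlock logic; the retry count and delay are illustrative.
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
        WHERE Color = 'Red';

        COMMIT TRANSACTION;
        SET @retries = 0;   -- Success: leave the loop.
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:02';   -- Short pause; randomize this in production code.
        END
        ELSE
        BEGIN
            THROW;   -- Not a deadlock, or retries exhausted: rethrow the error.
        END;
    END CATCH
END;
```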
+
+### Default isolation level in Azure SQL Database
+
+New databases in Azure SQL Database enable read committed snapshot (RCSI) by default. RCSI changes the behavior of the [read committed isolation level](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#database-engine-isolation-levels) to use [row-versioning](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#Row_versioning) to provide statement-level consistency without the use of shared (S) locks for SELECT statements.
+
+With RCSI enabled:
+
+- Statements reading data do not block statements modifying data.
+- Statements modifying data do not block statements reading data.
+
+[Snapshot isolation level](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#b-enable-snapshot-isolation-on-a-database) is also enabled by default for new databases in Azure SQL Database. Snapshot isolation is an additional row-based isolation level that provides transaction-level consistency for data and which uses row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their transaction isolation level to `SNAPSHOT`. This may only be done when snapshot isolation is enabled for the database.
+
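For example, a session opts in to snapshot isolation before starting its transaction. This is a minimal sketch against the `AdventureWorksLT` sample tables used later in this article:

```sql
-- Opt this session in to snapshot isolation (snapshot isolation must be
-- enabled for the database, which is the default for new databases).
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

    -- Reads in this transaction see a transactionally consistent, versioned view of the data.
    SELECT ProductID, Name, ListPrice
    FROM SalesLT.Product
    WHERE Color = 'Red';

COMMIT TRANSACTION;
```
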
+You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in Azure SQL Database and run the following query:
+
+```sql
+SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
+FROM sys.databases
+WHERE name = DB_NAME();
+GO
+```
+
+If RCSI is enabled, the `is_read_committed_snapshot_on` column will return the value **1**. If snapshot isolation is enabled, the `snapshot_isolation_state_desc` column will return the value **ON**.
+
+If [RCSI has been disabled](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1) for a database in Azure SQL Database, investigate why RCSI was disabled before re-enabling it. Application code may have been written expecting that queries reading data will be blocked by queries writing data, resulting in incorrect results from race conditions when RCSI is enabled.
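
If you confirm the application behaves correctly under row versioning, you can re-enable RCSI with Transact-SQL. This is a minimal sketch; the `WITH ROLLBACK IMMEDIATE` termination clause rolls back open transactions so the setting can be applied:

```sql
-- Re-enable read committed snapshot for the current database.
ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
```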
+
+### Interpreting deadlock events
+
+A deadlock event is emitted after the deadlock manager in Azure SQL Database detects a deadlock and selects a transaction as the victim. In other words, if you set up alerts for deadlocks, the notification fires after an individual deadlock has been resolved. There is no user action that needs to be taken for that deadlock. Applications should be written to include [retry logic](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors) so that they automatically continue after receiving error 1205, "Transaction (Process ID *N*) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
+
+It's useful to set up alerts, however, as deadlocks may reoccur. Deadlock alerts enable you to investigate if a pattern of repeat deadlocks is happening in your database, in which case you may choose to take action to prevent deadlocks from reoccurring. Learn more about alerting in the [Monitor and alert on deadlocks](#monitor-and-alert-on-deadlocks) section of this article.
+
+### Top methods to prevent deadlocks
+
+The lowest risk approach to preventing deadlocks from reoccurring is generally to [tune nonclustered indexes](#prevent-a-deadlock-from-reoccurring) to optimize queries involved in the deadlock.
+
+- Risk is low for this approach because tuning nonclustered indexes doesn't require changes to the query code itself, which reduces the risk of introducing errors through rewritten Transact-SQL that could return incorrect data to the user.
+- Effective nonclustered index tuning helps queries find the data to read and modify more efficiently. By reducing the amount of data that a query needs to access, the likelihood of blocking is reduced and deadlocks can often be prevented (see the sketch after this list).
+
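As an illustrative sketch of this approach for the example deadlock described earlier, a nonclustered index on `Color` lets the update queries locate qualifying `SalesLT.Product` rows without scanning the whole table. The index name and included columns here are assumptions; design indexes against your own query patterns.

```sql
-- Illustrative nonclustered index so queries filtering on Color can locate
-- SalesLT.Product rows without scanning the table.
CREATE NONCLUSTERED INDEX ix_Product_Color
ON SalesLT.Product (Color)
INCLUDE (ProductModelID, SellEndDate);
```
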
+In some cases, creating or tuning a clustered index can reduce blocking and deadlocks. Because the clustered index is included in all nonclustered index definitions, creating or modifying a clustered index can be an IO intensive and time consuming operation on larger tables with existing nonclustered indexes. Learn more about [Clustered index design guidelines](/sql/relational-databases/sql-server-index-design-guide#Clustered).
+
+When index tuning isn't successful at preventing deadlocks, other methods are available:
+
+- If the deadlock occurs only when a particular plan is chosen for one of the queries involved in the deadlock, [forcing a query plan](/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql) with Query Store may prevent deadlocks from reoccurring (see the sketch after this list).
+- Rewriting Transact-SQL for one or more transactions involved in the deadlock can also help prevent deadlocks. Breaking apart explicit transactions into smaller transactions requires careful coding and testing to ensure data validity when concurrent modifications occur.
+
+Learn more about each of these approaches in the [Prevent a deadlock from reoccurring](#prevent-a-deadlock-from-reoccurring) section of this article.
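For the plan forcing approach, once Query Store identifies the query and the plan that avoids the deadlock, forcing the plan is a single procedure call. The query and plan IDs below are hypothetical placeholders; look up the real values in `sys.query_store_query` and `sys.query_store_plan`.

```sql
-- Force a previously captured plan for a query; 42 and 17 are placeholder IDs.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;
```
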
+
+## Monitor and alert on deadlocks
+
+In this article, we will use the `AdventureWorksLT` sample database to set up alerts for deadlocks, cause an example deadlock, analyze the deadlock graph for the example deadlock, and test changes to prevent the deadlock from reoccurring.
+
+We'll use the [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS) client in this article, as it contains functionality to display deadlock graphs in an interactive visual mode. You can use other clients such as [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) to follow along with the examples, but you may only be able to view deadlock graphs as XML.
++
+### Create the AdventureWorksLT database
+
+To follow along with the examples, create a new database in Azure SQL Database and select **Sample** data as the **Data source**.
+
+For detailed instructions on how to create `AdventureWorksLT` with the Azure portal, Azure CLI, or PowerShell, select the approach of your choice in [Quickstart: Create an Azure SQL Database single database](single-database-create-quickstart.md).
+
+### Set up deadlock alerts in the Azure portal
+
+To set up alerts for deadlock events, follow the steps in the article [Create alerts for Azure SQL Database and Azure Synapse Analytics using the Azure portal](alerts-insights-configure-portal.md).
+
+Select **Deadlocks** as the signal name for the alert. Configure the **Action group** to notify you using the method of your choice, such as the **Email/SMS/Push/Voice** action type.
+
+## Collect deadlock graphs in Azure SQL Database with Extended Events
+
+Deadlock graphs are a rich source of information regarding the processes and locks involved in a deadlock. To collect deadlock graphs with Extended Events (XEvents) in Azure SQL Database, capture the `sqlserver.database_xml_deadlock_report` event.
+
+You can collect deadlock graphs with XEvents using either the [ring buffer target](xevent-code-ring-buffer.md) or an [event file target](xevent-code-event-file.md). Considerations for selecting the appropriate target type are summarized in the following table:
++
+|Approach |Benefits |Considerations |Usage scenarios |
+|||||
+|Ring buffer target | <ul><li>Simple setup with Transact-SQL only.</li></ul> | <ul><li>Event data is cleared when the XEvents session is stopped for any reason, such as taking the database offline or a database failover.</li><li>Database resources are used to maintain data in the ring buffer and to query session data.</li></ul> | <ul><li>Collect sample trace data for testing and learning.</li><li>Create for short term needs if you cannot set up a session using an event file target immediately.</li><li>Use as a "landing pad" for trace data, when you have set up an automated process to persist trace data into a table.</li> </ul> |
+|Event file target | <ul><li>Persists event data to a blob in Azure Storage so data is available even after the session is stopped.</li><li>Event files may be downloaded from the Azure portal or [Azure Storage Explorer](#use-azure-storage-explorer) and analyzed locally, which does not require using database resources to query session data.</li></ul> | <ul><li>Setup is more complex and requires configuration of an Azure Storage container and database scoped credential.</li></ul> | <ul><li>General use when you want event data to persist even after the event session stops.</li><li>You want to run a trace that generates larger amounts of event data than you would like to persist in memory.</li></ul> |
+
+Select the target type you would like to use:
+
+# [Ring buffer target](#tab/ring-buffer)
+
+The ring buffer target is convenient and easy to set up, but has a limited capacity, which can cause older events to be lost. The ring buffer does not persist events to storage and the ring buffer target is cleared when the XEvents session is stopped. This means that any XEvents collected will not be available when the database engine restarts for any reason, such as a failover. The ring buffer target is best suited to learning and short-term needs if you do not have the ability to set up an XEvents session to an event file target immediately.
+
+This sample code creates an XEvents session that captures deadlock graphs in memory using the [ring buffer target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server#ring_buffer-target). The maximum memory allowed for the ring buffer target is 4 MB, and the session will automatically run when the database comes online, such as after a failover.
+
+To create and then start a XEvents session for the `sqlserver.database_xml_deadlock_report` event that writes to the ring buffer target, connect to your database and run the following Transact-SQL:
+
+```sql
+CREATE EVENT SESSION [deadlocks] ON DATABASE
+ADD EVENT sqlserver.database_xml_deadlock_report
+ADD TARGET package0.ring_buffer
+WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
+GO
+
+ALTER EVENT SESSION [deadlocks] ON DATABASE
+ STATE = START;
+GO
+```
+
+# [Event file target](#tab/event-file)
+
+The event file target persists deadlock graphs to files so they are available even after the XEvents session is stopped. The event file target also allows you to capture more deadlock graphs without allocating additional memory for a ring buffer. The event file target is suitable for long term use and for collecting larger amounts of trace data.
+
+To create an XEvents session that writes to an event file target, we will:
+
+1. Configure an Azure Storage container to hold the trace files using the Azure portal.
+1. Create a database scoped credential with Transact-SQL.
+1. Create the XEvents session with Transact-SQL.
+
+### Configure an Azure Storage container
+
+To configure an Azure Storage container, first create or select an existing Azure Storage account, then create the container. Generate a Shared Access Signature (SAS) token for the container. This section describes completing this process in the Azure portal.
+
+> [!NOTE]
+> If you wish to create and configure the Azure Storage blob container with PowerShell, see [Event File target code for extended events in Azure SQL Database](xevent-code-event-file.md). Alternately, you may find it convenient to [Use Azure Storage Explorer](#use-azure-storage-explorer) to create and configure the Azure Storage blob container instead of using the Azure portal.
+
+#### Create or select an Azure Storage account
+
+You can use an existing Azure Storage account or create a new Azure Storage account to host a container for trace files.
+
+To use an existing Azure Storage account:
+1. Navigate to the resource group you want to work with in the Azure portal.
+1. On the **Overview** pane, under **Resources**, set the **Type** dropdown to *Storage account*.
+1. Select the storage account you want to use.
+
+To create a new Azure Storage account, follow the steps in [Create an Azure storage account](/azure/media-services/latest/storage-create-how-to). Complete the process by selecting **Go to resource** in the final step.
+
+#### Create a container
+
+From the storage account page in the Azure portal:
+
+1. Under **Data storage**, select **Containers**.
+1. Select **+ Container** to create a new container. The New container pane will appear.
+1. Enter a name for the container under **Name**.
+1. Select **Create**.
+1. Select the container from the list after it has been created.
+
+#### Create a shared access token
+
+From the container page in the Azure portal:
+
+1. Under **Settings**, select **Shared access tokens**.
+1. Leave the **Signing method** radio button set to the default selection, **Account key**.
+1. Under the **Permissions** dropdown, select the **Read**, **Write**, and **List** permissions.
+1. Set **Start** to the date and time you would like to be able to write trace files. Optionally, configure the time zone in the dropdown below **Start**.
+1. Set **Expiry** to the date and time you would like these permissions to expire. Optionally, configure the time zone in the dropdown below **Expiry**. You are able to set this to a date far in the future, such as ten years, if you wish.
+1. Select **Generate SAS token and URL**. The Blob SAS token and Blob SAS URL will be displayed on the screen.
+1. Copy and preserve the *Blob SAS token* and *Blob SAS URL* values for use in further steps.
+
+### Create a database scoped credential
+
+Connect to your database in Azure SQL Database with SSMS to run the following steps.
+
+To create a database scoped credential, you must first create a [master key](/sql/t-sql/statements/create-master-key-transact-sql) in the database if one does not exist.
+
+Run the following Transact-SQL to create a master key if one does not exist:
+
+```sql
+IF 0 = (SELECT COUNT(*)
+ FROM sys.symmetric_keys
+ WHERE symmetric_key_id = 101 and name=N'##MS_DatabaseMasterKey##')
+BEGIN
+ PRINT N'Creating master key';
+ CREATE MASTER KEY;
+END
+ELSE
+BEGIN
+ PRINT N'Master key already exists, no action taken';
+END
+GO
+```
+
+Next, create a database scoped credential with the following Transact-SQL. Before running the code:
+- Modify the URL to reflect your storage account name and your container name. This URL will be present at the beginning of the *Blob SAS URL* you copied when you created the shared access token. You only need the text prior to the first `?` in the string.
+- Modify the `SECRET` to contain the *Blob SAS token* value you copied when you created the shared access token.
+
+```sql
+CREATE DATABASE SCOPED CREDENTIAL
+ [https://yourstorageaccountname.blob.core.windows.net/yourcontainername]
+ WITH
+ IDENTITY = 'SHARED ACCESS SIGNATURE',
+ SECRET = 'sp=r&st=2022-04-08T14:34:21Z&se=2032-04-08T22:34:21Z&sv=2020-08-04&sr=c&sig=pUNbbsmDiMzXr1vuNGZh84zyOMBFaBjgWv53IhOzYWQ%3D'
+ ;
+GO
+```
+
+### Create the XEvents session
+
+Create and start the XEvents session with the following Transact-SQL. Before running the statement:
+- Replace the `filename` value to reflect your storage account name and your container name. This URL will be present at the beginning of the *Blob SAS URL* you copied when you created the shared access token. You only need the text prior to the first `?` in the string.
+- Optionally change the filename stored. The filename you specify here will be part of the actual filename(s) used for the blob(s) storing event data: additional values will be appended so that all event files have a unique name.
+- Optionally add additional events to the session.
+
+```sql
+CREATE EVENT SESSION [deadlocks_eventfile] ON DATABASE
+ADD EVENT sqlserver.database_xml_deadlock_report
+ADD TARGET package0.event_file
+ (SET filename =
+ 'https://yourstorageaccountname.blob.core.windows.net/yourcontainername/deadlocks.xel'
+ )
+WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
+GO
+
+ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
+ STATE = START;
+GO
+```
+++
+## Cause a deadlock in AdventureWorksLT
+
+> [!NOTE]
+> This example works in the AdventureWorksLT database with the default schema and data when RCSI has been enabled. See [Create the AdventureWorksLT database](#create-the-adventureworkslt-database) for instructions to create the database.
+
+To cause a deadlock, you will need to connect two sessions to the `AdventureWorksLT` database. We'll refer to these sessions as **Session A** and **Session B**.
+
+In **Session A**, run the following Transact-SQL. This code begins an [explicit transaction](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#starting-transactions) and runs a single statement that updates the `SalesLT.Product` table. To do this, the transaction acquires an [update (U) lock](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-modifying-data) on one row on table `SalesLT.Product` which is converted to an exclusive (X) lock. We leave the transaction open.
+
+```sql
+BEGIN TRAN
+
+ UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
+ WHERE Color = 'Red';
+
+```
+
+Now, in **Session B**, run the following Transact-SQL. This code doesn't explicitly begin a transaction. Instead, it operates in [autocommit transaction mode](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#starting-transactions). This statement updates the `SalesLT.ProductDescription` table. The update will take out an update (U) lock on 72 rows on the `SalesLT.ProductDescription` table. The query joins to other tables, including the `SalesLT.Product` table.
+
+```sql
+UPDATE SalesLT.ProductDescription SET Description = Description
+ FROM SalesLT.ProductDescription as pd
+ JOIN SalesLT.ProductModelProductDescription as pmpd on
+ pd.ProductDescriptionID = pmpd.ProductDescriptionID
+ JOIN SalesLT.ProductModel as pm on
+ pmpd.ProductModelID = pm.ProductModelID
+ JOIN SalesLT.Product as p on
+ pm.ProductModelID=p.ProductModelID
+ WHERE p.Color = 'Silver';
+```
+
+To complete this update, **Session B** needs a shared (S) lock on rows in the `SalesLT.Product` table, including the row that is locked by **Session A**. **Session B** will be blocked on `SalesLT.Product`.
+
+Return to **Session A**. Run the following Transact-SQL statement. This runs a second UPDATE statement as part of the open transaction.
+
+```sql
+ UPDATE SalesLT.ProductDescription SET Description = Description
+ FROM SalesLT.ProductDescription as pd
+ JOIN SalesLT.ProductModelProductDescription as pmpd on
+ pd.ProductDescriptionID = pmpd.ProductDescriptionID
+ JOIN SalesLT.ProductModel as pm on
+ pmpd.ProductModelID = pm.ProductModelID
+ JOIN SalesLT.Product as p on
+ pm.ProductModelID=p.ProductModelID
+ WHERE p.Color = 'Red';
+```
+
+The second update statement in **Session A** will be blocked by **Session B** on the `SalesLT.ProductDescription` table.
+
+**Session A** and **Session B** are now mutually blocking one another. Neither transaction can proceed, as they each need a resource that is locked by the other.
+
+After a few seconds, the deadlock monitor will identify that the transactions in **Session A** and **Session B** are mutually blocking one another, and that neither can make progress. You should see a deadlock occur, with **Session A** chosen as the deadlock victim. An error message will appear in **Session A** with text similar to the following:
+
+> Msg 1205, Level 13, State 51, Line 7
+> Transaction (Process ID 91) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
+
+**Session B** will complete successfully.
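+
+Applications should be prepared to retry transactions that are chosen as deadlock victims. The following Transact-SQL is a minimal sketch of retry logic for error 1205, reusing the first update from **Session A** as the statement to retry; production code would typically also add a short delay between attempts and log each retry:
+
+```sql
+DECLARE @retries_remaining int = 3;
+
+WHILE @retries_remaining > 0
+BEGIN
+    BEGIN TRY
+        BEGIN TRAN;
+
+        UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
+        WHERE Color = 'Red';
+
+        COMMIT TRAN;
+        SET @retries_remaining = 0;  -- success: exit the loop
+    END TRY
+    BEGIN CATCH
+        IF XACT_STATE() <> 0 ROLLBACK TRAN;
+
+        IF ERROR_NUMBER() = 1205 AND @retries_remaining > 1
+            SET @retries_remaining -= 1;  -- deadlock victim: try again
+        ELSE
+        BEGIN
+            SET @retries_remaining = 0;
+            THROW;  -- not a deadlock, or retries exhausted: surface the error
+        END;
+    END CATCH;
+END;
+GO
+```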
+
+If you [set up deadlock alerts in the Azure portal](#set-up-deadlock-alerts-in-the-azure-portal), you should receive a notification shortly after the deadlock occurs.
+
+## View deadlock graphs from an XEvents session
+
+If you have [set up an XEvents session to collect deadlocks](#collect-deadlock-graphs-in-azure-sql-database-with-extended-events) and a deadlock has occurred after the session was started, you can view an interactive graphic display of the deadlock graph as well as the XML for the deadlock graph.
+
+Different methods are available to obtain deadlock information for the ring buffer target and event file targets. Select the target you used for your XEvents session:
+
+# [Ring buffer target](#tab/ring-buffer)
+
+If you set up an XEvents session writing to the ring buffer, you can query deadlock information with the following Transact-SQL. Before running the query, replace the value of `@tracename` with the name of your xEvents session.
+
+```sql
+DECLARE @tracename sysname = N'deadlocks';
+
+WITH ring_buffer AS (
+ SELECT CAST(target_data AS XML) as rb
+ FROM sys.dm_xe_database_sessions AS s
+ JOIN sys.dm_xe_database_session_targets AS t
+ ON CAST(t.event_session_address AS BINARY(8)) = CAST(s.address AS BINARY(8))
+ WHERE s.name = @tracename and
+ t.target_name = N'ring_buffer'
+), dx AS (
+ SELECT
+ dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
+ FROM ring_buffer
+ CROSS APPLY rb.nodes('/RingBufferTarget/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
+)
+SELECT
+ d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS [deadlock_cycle_id],
+ d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
+ d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS [database_name],
+ d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
+ LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
+FROM dx
+CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)') AS ib(d)
+ORDER BY [deadlock_timestamp] DESC;
+GO
+```
+
+# [Event file target](#tab/event-file)
+
+If you set up an XEvents session writing to an event file, you can download files from the Azure portal and view them locally, or you can query event files with Transact-SQL.
+
+Downloading files from the Azure portal is recommended because this method does not require using database resources to query session data.
+
+### Optionally restart the XEvents session
+
+If an Extended Events session is currently running and writing to an event file target, the blob being written to will have a **Lease state** of *Leased* in the Azure portal, and its reported size will be the maximum size allocated for the file rather than the space actually used. To download a smaller file, you may wish to stop and restart the Extended Events session before downloading files. This changes the blob's **Lease state** to *Available*, and its size becomes the space used by events in the file.
+
+To stop and restart an XEvents session, connect to your database and run the following Transact-SQL. Before running the code, replace the name of the xEvents session with the appropriate value.
+
+```sql
+ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
+ STATE = STOP;
+GO
+ALTER EVENT SESSION [deadlocks_eventfile] ON DATABASE
+ STATE = START;
+GO
+```
+
+### Download trace files from the Azure portal
+
+To view deadlock events that have been collected across multiple files, download the event session files to your local computer and view the files in SSMS.
+
+> [!NOTE]
+> You can also [use Azure Storage Explorer](#use-azure-storage-explorer) to quickly and conveniently download event session files from a blob container in Azure Storage.
+
+To download the files from the Azure portal:
+
+1. Navigate to the storage account hosting your container in the Azure portal.
+1. Under **Data storage**, select **Containers**.
+1. Select the container holding your XEvent trace files.
+1. For each file you wish to download, select **...**, then **Download**.
+
+### View XEvents trace files in SSMS
+
+If you have downloaded multiple files, you can open events from all of the files together in the XEvents viewer in SSMS. To do so:
+1. Open SSMS.
+1. Select **File**, then **Open**, then **Merge Extended Events files...**.
+1. Select **Add**.
+1. Navigate to the directory where you downloaded the files. Use the **Shift** key to select multiple files.
+1. Select **Open**.
+1. Select **OK** in the **Merge Extended Events Files** dialog.
+
+If you have downloaded a single file, right-click the file and select **Open with**, then **SSMS**. This will open the XEvents viewer in SSMS.
+
+Navigate between events collected by selecting the relevant timestamp. To view the XML for a deadlock, double-click the `xml_report` row in the lower pane.
+
+### Query trace files with Transact-SQL
+
+> [!Important]
+> Querying large (1 GB and larger) XEvents trace files using this method is not recommended because it may consume large amounts of memory in your database or elastic pool.
+
+To query XEvents trace files from an Azure Storage container with Transact-SQL, you must provide the exact file name of the trace file. You must also run the query in the context of the database that holds the credential for the storage account, in other words, the same database that created the XEvents files.
+
+Run the following Transact-SQL to query the currently active XEvents trace file. Before running the query, replace `@tracename` with the name of your XEvents session.
+
+```sql
+DECLARE @tracename sysname = N'deadlocks_eventfile',
+ @filename nvarchar(2000);
+
+WITH eft as (SELECT CAST(target_data AS XML) as rb
+ FROM sys.dm_xe_database_sessions AS s
+ JOIN sys.dm_xe_database_session_targets AS t
+ ON CAST(t.event_session_address AS BINARY(8)) = CAST(s.address AS BINARY(8))
+ WHERE s.name = @tracename and
+ t.target_name = N'event_file'
+)
+SELECT @filename = ft.evtdata.value('(@name)[1]','nvarchar(2000)')
+FROM eft
+CROSS APPLY rb.nodes('EventFileTarget/File') as ft(evtdata);
+
+WITH xevents AS (
+ SELECT cast(event_data as XML) as ed
+ FROM sys.fn_xe_file_target_read_file(@filename, null, null, null )
+), dx AS (
+ SELECT
+ dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
+ FROM xevents
+ CROSS APPLY ed.nodes('/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
+)
+SELECT
+ d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS [deadlock_cycle_id],
+ d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
+ d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS [database_name],
+ d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
+ LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
+FROM dx
+CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)') AS ib(d)
+ORDER BY [deadlock_timestamp] DESC;
+GO
+```
+
+To query non-active files, navigate to the Storage Account and container in the Azure portal to identify the filenames.
+
+Run the following Transact-SQL query against your database to query a specific XEvents file. Before running the query, substitute the storage account name, container name, and filename in the URL for `@filename`:
+
+```sql
+declare @filename nvarchar(2000) = N'https://yourstorageaccountname.blob.core.windows.net/yourcontainername/yourfilename.xel';
+
+with xevents AS (
+ SELECT cast(event_data as XML) as ed
+ FROM sys.fn_xe_file_target_read_file(@filename, null, null, null )
+), dx AS (
+ SELECT
+ dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
+ FROM xevents
+ CROSS APPLY ed.nodes('/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
+)
+SELECT
+ d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS [deadlock_cycle_id],
+ d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
+ d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS [database_name],
+ d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
+ LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
+FROM dx
+CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)') AS ib(d)
+ORDER BY [deadlock_timestamp] DESC;
+GO
+```
+++
+### View and save a deadlock graph in XML
+
+Viewing a deadlock graph in XML format allows you to copy the input buffer (`inputbuf`) of Transact-SQL statements involved in the deadlock. You may also prefer to analyze deadlocks in a text-based format.
+
+If you used a Transact-SQL query to return deadlock graph information, you can view the deadlock graph XML by selecting the value in the `deadlock_xml` column from any row. This opens the deadlock graph's XML in a new window in SSMS.
+
+The XML for this example deadlock graph is:
+
+```xml
+<deadlock>
+ <victim-list>
+ <victimProcess id="process24756e75088" />
+ </victim-list>
+ <process-list>
+ <process id="process24756e75088" taskpriority="0" logused="6528" waitresource="KEY: 8:72057594045202432 (98ec012aa510)" waittime="192" ownerId="1011123" transactionname="user_transaction" lasttranstarted="2022-03-08T15:44:43.490" XDES="0x2475c980428" lockMode="U" schedulerid="3" kpid="30192" status="suspended" spid="89" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:49.250" lastbatchcompleted="2022-03-08T15:44:49.210" lastattention="1900-01-01T00:00:00.210" clientapp="Microsoft SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic" isolationlevel="read committed (2)" xactid="1011123" currentdb="8" currentdbname="AdventureWorksLT" lockTimeout="4294967295" clientoption1="671096864" clientoption2="128056">
+ <executionStack>
+ <frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1" stmtstart="2" stmtend="792" sqlhandle="0x02000000c58b8f1e24e8f104a930776e21254b1771f92a520000000000000000000000000000000000000000">
+unknown </frame>
+ </executionStack>
+ <inputbuf>
+ UPDATE SalesLT.ProductDescription SET Description = Description
+ FROM SalesLT.ProductDescription as pd
+ JOIN SalesLT.ProductModelProductDescription as pmpd on
+ pd.ProductDescriptionID = pmpd.ProductDescriptionID
+ JOIN SalesLT.ProductModel as pm on
+ pmpd.ProductModelID = pm.ProductModelID
+ JOIN SalesLT.Product as p on
+ pm.ProductModelID=p.ProductModelID
+ WHERE p.Color = 'Red' </inputbuf>
+ </process>
+ <process id="process2476d07d088" taskpriority="0" logused="11360" waitresource="KEY: 8:72057594045267968 (39e18040972e)" waittime="2641" ownerId="1013536" transactionname="UPDATE" lasttranstarted="2022-03-08T15:44:46.807" XDES="0x2475ca80428" lockMode="S" schedulerid="2" kpid="94040" status="suspended" spid="95" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:46.807" lastbatchcompleted="2022-03-08T15:44:46.760" lastattention="1900-01-01T00:00:00.760" clientapp="Microsoft SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic" isolationlevel="read committed (2)" xactid="1013536" currentdb="8" currentdbname="AdventureWorksLT" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
+ <executionStack>
+ <frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1" stmtstart="2" stmtend="798" sqlhandle="0x020000002c85bb06327c0852c0be840fc1e30efce2b7c8090000000000000000000000000000000000000000">
+unknown </frame>
+ </executionStack>
+ <inputbuf>
+ UPDATE SalesLT.ProductDescription SET Description = Description
+ FROM SalesLT.ProductDescription as pd
+ JOIN SalesLT.ProductModelProductDescription as pmpd on
+ pd.ProductDescriptionID = pmpd.ProductDescriptionID
+ JOIN SalesLT.ProductModel as pm on
+ pmpd.ProductModelID = pm.ProductModelID
+ JOIN SalesLT.Product as p on
+ pm.ProductModelID=p.ProductModelID
+ WHERE p.Color = 'Silver'; </inputbuf>
+ </process>
+ </process-list>
+ <resource-list>
+ <keylock hobtid="72057594045202432" dbid="8" objectname="9e011567-2446-4213-9617-bad2624ccc30.SalesLT.ProductDescription" indexname="PK_ProductDescription_ProductDescriptionID" id="lock2474df12080" mode="U" associatedObjectId="72057594045202432">
+ <owner-list>
+ <owner id="process2476d07d088" mode="U" />
+ </owner-list>
+ <waiter-list>
+ <waiter id="process24756e75088" mode="U" requestType="wait" />
+ </waiter-list>
+ </keylock>
+ <keylock hobtid="72057594045267968" dbid="8" objectname="9e011567-2446-4213-9617-bad2624ccc30.SalesLT.Product" indexname="PK_Product_ProductID" id="lock2474b588580" mode="X" associatedObjectId="72057594045267968">
+ <owner-list>
+ <owner id="process24756e75088" mode="X" />
+ </owner-list>
+ <waiter-list>
+ <waiter id="process2476d07d088" mode="S" requestType="wait" />
+ </waiter-list>
+ </keylock>
+ </resource-list>
+</deadlock>
+```
+
+To save the deadlock graph as an XML file:
+
+1. Select **File** and **Save As...**.
+1. Leave the **Save as type** value as the default, **XML Files (*.xml)**.
+1. Set the **File name** to the name of your choice.
+1. Select **Save**.
+
+### Save a deadlock graph as an XDL file that can be displayed interactively in SSMS
+
+Viewing an interactive representation of a deadlock graph is useful for getting a quick overview of the processes and resources involved in a deadlock, and for quickly identifying the deadlock victim.
+
+To save a deadlock graph as a file that can be graphically displayed by SSMS:
+
+1. Select the value in the `deadlock_xml` column from any row to open the deadlock graph's XML in a new window in SSMS.
+1. Select **File** and **Save As...**.
+1. Set **Save as type** to **All Files**.
+1. Set the **File name** to the name of your choice, with the extension set to **.xdl**.
+1. Select **Save**.
+
+    :::image type="content" source="media/analyze-prevent-deadlocks/ssms-save-deadlock-file-xdl.png" alt-text="A screenshot in SSMS of saving a deadlock graph XML file to a file with the .xdl extension." lightbox="media/analyze-prevent-deadlocks/ssms-save-deadlock-file-xdl.png":::
+
+1. Close the file by selecting the **X** on the tab at the top of the window, or by selecting **File**, then **Close**.
+1. Reopen the file in SSMS by selecting **File**, then **Open**, then **File**. Select the file you saved with the `.xdl` extension.
+
+ The deadlock graph will now display in SSMS with a visual representation of the processes and resources involved in the deadlock.
+
+ :::image type="content" source="media/analyze-prevent-deadlocks/ssms-deadlock-graph-xdl-file-graphic-display.png" alt-text="Screenshot of an xdl file opened in SSMS. The deadlock graph is displayed graphically, with processes indicated by ovals and lock resources as rectangles." lightbox="media/analyze-prevent-deadlocks/ssms-deadlock-graph-xdl-file-graphic-display.png":::
+
+## Analyze a deadlock for Azure SQL Database
+
+A deadlock graph typically has three nodes:
+
+- **Victim-list**. The deadlock victim process identifier.
+- **Process-list**. Information on all the processes involved in the deadlock. Deadlock graphs use the term 'process' to represent a session running a transaction.
+- **Resource-list**. Information about the resources involved in the deadlock.
+
+When analyzing a deadlock, it is useful to step through these nodes.
+
+### Deadlock victim list
+
+The deadlock victim list shows the process that was chosen as the deadlock victim. In the visual representation of a deadlock graph, processes are represented by ovals. The deadlock victim process has an "X" drawn over the oval.
++
+In the [XML view of a deadlock graph](#view-and-save-a-deadlock-graph-in-xml), the `victim-list` node gives an ID for the process that was the victim of the deadlock.
+
+In our example deadlock, the victim process ID is **process24756e75088**. We can use this ID when examining the process-list and resource-list nodes to learn more about the victim process and the resources it was locking or requesting to lock.
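+
+If you prefer to extract these identifiers programmatically, the following Transact-SQL is a minimal sketch that shreds a deadlock graph with XQuery. It assumes that you paste the full deadlock graph XML into the `@deadlock_xml` variable, for example from the `deadlock_xml` column returned by the queries earlier in this article:
+
+```sql
+DECLARE @deadlock_xml XML = N'<deadlock>...</deadlock>'; -- paste the full deadlock graph XML here
+
+-- The process chosen as the deadlock victim
+SELECT @deadlock_xml.value('(/deadlock/victim-list/victimProcess/@id)[1]', 'nvarchar(200)') AS victim_process_id;
+
+-- One row per process involved in the deadlock
+SELECT
+    p.n.value('@id', 'nvarchar(200)') AS process_id,
+    p.n.value('@spid', 'int') AS session_id,
+    p.n.value('@loginname', 'nvarchar(200)') AS login_name,
+    p.n.value('@isolationlevel', 'nvarchar(200)') AS isolation_level,
+    p.n.value('@logused', 'bigint') AS log_used_bytes
+FROM @deadlock_xml.nodes('/deadlock/process-list/process') AS p(n);
+GO
+```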
+
+### Deadlock process list
+
+The deadlock process list is a rich source of information about the transactions involved in the deadlock.
+
+The graphic representation of the deadlock graph shows only a subset of information contained in the deadlock graph XML. The ovals in the deadlock graph represent the process, and show information including the:
+
+- Server process ID, also known as the session ID or SPID.
+- [Deadlock priority](/sql/t-sql/statements/set-deadlock-priority-transact-sql) of the session. If two sessions have different deadlock priorities, the session with the lower priority is chosen as the deadlock victim. In this example, both sessions have the same deadlock priority.
+- The amount of transaction log used by the session in bytes. If both sessions have the same deadlock priority, the deadlock monitor chooses the session that is less expensive to roll back as the deadlock victim. The cost is determined by comparing the number of log bytes written to that point in each transaction.
+
+ In our example deadlock, session_id 89 had used a lower amount of transaction log, and was selected as the deadlock victim.
+
+Additionally, you can view the [input buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) for the last statement run in each session prior to the deadlock by hovering the mouse over each process. The input buffer will appear in a tooltip.
++
+Additional information is available for processes in the [XML view of the deadlock graph](#view-and-save-a-deadlock-graph-in-xml), including:
+
+- Identifying information for the session, such as the client name, host name, and login name.
+- The query plan hash for the last statement run by each session prior to the deadlock. The query plan hash is useful for retrieving more information about the query from [Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store).
+
+In our example deadlock:
+
+- We can see that both sessions were run using the SSMS client under the **chrisqpublic** login.
+- The query plan hash of the last statement run prior to the deadlock by our deadlock victim is **0x02b0f58d7730f798**. We can see the text of this statement in the input buffer.
+- The query plan hash of the last statement run by the other session in our deadlock is also **0x02b0f58d7730f798**. We can see the text of this statement in the input buffer. In this case, both queries have the same query plan hash because the queries are identical, except for a literal value used as an equality predicate.
+
+We'll use these values later in this article to [find additional information in Query Store](#find-query-execution-plans-in-query-store).
+
+#### Limitations of the input buffer in the deadlock process list
+
+There are some limitations to be aware of regarding input buffer information in the deadlock process list.
+
+Query text may be truncated in the input buffer. The input buffer is limited to the first 4,000 characters of the statement being executed.
+
+Additionally, some statements involved in the deadlock may not be included in the deadlock graph. In our example, **Session A** ran two update statements within a single transaction. Only the second update statement, the update that caused the deadlock, is included in the deadlock graph. The first update statement run by **Session A** played a part in the deadlock by blocking **Session B**. The input buffer, `query_hash`, and related information for the first statement run by **Session A** are not included in the deadlock graph.
+
+To identify the full Transact-SQL run in a multi-statement transaction involved in a deadlock, you will need to either find the relevant information in the stored procedure or application code that ran the query, or run a trace using [Extended Events](/sql/relational-databases/extended-events/extended-events) to capture full statements run by sessions involved in a deadlock while it occurs. If a statement involved in the deadlock has been truncated and only partial Transact-SQL appears in the input buffer, you can find the [Transact-SQL for the statement in Query Store with the Execution Plan](#find-query-execution-plans-in-query-store).
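+
+For example, the following Transact-SQL is a minimal sketch of a database-scoped Extended Events session that captures completed batches and RPC calls to a ring buffer. The session name, event list, and memory setting are illustrative; capturing completed statements can be expensive, so create and start a session like this only while you reproduce the deadlock, and drop it afterward:
+
+```sql
+CREATE EVENT SESSION [capture_full_statements] ON DATABASE
+ADD EVENT sqlserver.sql_batch_completed,
+ADD EVENT sqlserver.rpc_completed
+ADD TARGET package0.ring_buffer
+WITH (STARTUP_STATE = OFF, MAX_MEMORY = 4 MB);
+GO
+
+ALTER EVENT SESSION [capture_full_statements] ON DATABASE
+    STATE = START;
+GO
+```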
+
+### Deadlock resource list
+
+The deadlock resource list shows which lock resources are owned and waited on by the processes in the deadlock.
+
+Resources are represented by rectangles in the visual representation of the deadlock:
++
+> [!NOTE]
+> You may notice that database names are represented as uniqueidentifiers in deadlock graphs for databases in Azure SQL Database. This is the `physical_database_name` for the database listed in the [sys.databases](/sql/relational-databases/system-catalog-views/sys-databases-transact-sql) and [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management views.
+
+In this example deadlock:
+
+- The deadlock victim, which we have referred to as **Session A**:
+ - Owns an exclusive (X) lock on a key on the `PK_Product_ProductID` index on the `SalesLT.Product` table.
+ - Requests an update (U) lock on a key on the `PK_ProductDescription_ProductDescriptionID` index on the `SalesLT.ProductDescription` table.
+
+- The other process, which we have referred to as **Session B**:
+ - Owns an update (U) lock on a key on the `PK_ProductDescription_ProductDescriptionID` index on the `SalesLT.ProductDescription` table.
+  - Requests a shared (S) lock on a key on the `PK_Product_ProductID` index on the `SalesLT.Product` table.
+
+We can see the same information in the [XML of the deadlock graph](#view-and-save-a-deadlock-graph-in-xml) in the **resource-list** node.
+
+### Find query execution plans in Query Store
+
+It is often useful to examine the query execution plans for statements involved in the deadlock. These execution plans can often be found in Query Store using the query plan hash from the XML view of the deadlock graph's [process list](#deadlock-process-list).
+
+This Transact-SQL query looks for query plans matching the query plan hash we found for our example deadlock. Connect to the user database in Azure SQL Database to run the query.
+
+```sql
+DECLARE @query_plan_hash binary(8) = 0x02b0f58d7730f798
+
+SELECT
+ qrsi.end_time as interval_end_time,
+ qs.query_id,
+ qp.plan_id,
+ qt.query_sql_text,
+ TRY_CAST(qp.query_plan as XML) as query_plan,
+ qrs.count_executions
+FROM sys.query_store_query as qs
+JOIN sys.query_store_query_text as qt on qs.query_text_id=qt.query_text_id
+JOIN sys.query_store_plan as qp on qs.query_id=qp.query_id
+JOIN sys.query_store_runtime_stats qrs on qp.plan_id = qrs.plan_id
+JOIN sys.query_store_runtime_stats_interval qrsi on qrs.runtime_stats_interval_id=qrsi.runtime_stats_interval_id
+WHERE query_plan_hash = @query_plan_hash
+ORDER BY interval_end_time, query_id;
+GO
+```
+
+You may not be able to obtain a query execution plan from Query Store, depending on your Query Store [CLEANUP_POLICY or QUERY_CAPTURE_MODE settings](/sql/t-sql/statements/alter-database-transact-sql-set-options#query-store). In this case, you can often get needed information by [displaying the estimated execution plan](/sql/relational-databases/performance/display-the-estimated-execution-plan) for the query.
+
+### Look for patterns that increase blocking
+
+When examining query execution plans involved in deadlocks, look out for patterns that may contribute to blocking and deadlocks.
+
+- **Table or index scans**. When queries modifying data are run under RCSI, the selection of rows to update is done using a blocking scan where an update (U) lock is taken on the data row as data values are read. If the data row does not meet the update criteria, the update lock is released and the next row is locked and scanned.
+
+ Tuning indexes to help modification queries find rows more efficiently reduces the number of update locks issued. This reduces the chances of blocking and deadlocks.
+
+- **Indexed views referencing more than one table**. When you modify a table that is referenced in an indexed view, the database engine must also maintain the indexed view. This requires taking out more locks and can lead to increased blocking and deadlocks. Indexed views may also cause update operations to internally execute under the read committed isolation level.
+
+- **Modifications to columns referenced in foreign key constraints**. When you modify columns in a table that are referenced in a FOREIGN KEY constraint, the database engine must look for related rows in the referencing table. Row versions cannot be used for these reads. In cases where cascading updates or deletes are enabled, the isolation level may be escalated to serializable for the duration of the statement to protect against phantom inserts.
+
+- **Lock hints**. Look for [table hints](/sql/t-sql/queries/hints-transact-sql-table) that specify isolation levels requiring more locks. These hints include `HOLDLOCK` (which is equivalent to serializable), `SERIALIZABLE`, `READCOMMITTEDLOCK` (which disables RCSI), and `REPEATABLEREAD`. Additionally, hints such as `PAGLOCK`, `TABLOCK`, `UPDLOCK`, and `XLOCK` can increase the risks of blocking and deadlocks.
+
+ If these hints are in place, research why the hints were implemented. These hints may prevent race conditions and ensure data validity. It may be possible to leave these hints in place and prevent future deadlocks using an alternate method in the [Prevent a deadlock from reoccurring](#prevent-a-deadlock-from-reoccurring) section of this article if necessary.
+
+ > [!NOTE]
+ > Learn more about behavior when modifying data using row versioning in the [Transaction locking and row versioning guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-modifying-data).
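+
+  For reference when reviewing query text, the following minimal sketch shows what a table-level lock hint looks like. The query is illustrative only; the `READCOMMITTEDLOCK` hint forces locking read committed behavior for that table reference, which disables row versioning for it and can increase blocking:
+
+  ```sql
+  SELECT p.Name, p.ListPrice
+  FROM SalesLT.Product AS p WITH (READCOMMITTEDLOCK)
+  WHERE p.Color = 'Red';
+  ```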
+
+When examining the full code for a transaction, either in an execution plan or in application query code, look for additional problematic patterns:
+
+- **User interaction in transactions**. User interaction inside an explicit multi-statement transaction significantly increases the duration of transactions. This makes it more likely for these transactions to overlap and for blocking and deadlocks to occur.
+
+ Similarly, holding an open transaction and querying an unrelated database or system mid-transaction significantly increases the chances of blocking and deadlocks.
+
+- **Transactions accessing objects in different orders**. Deadlocks are less likely to occur when concurrent explicit multi-statement transactions follow the same patterns and access objects in the same order.
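+
+As an illustration of the last point, the example deadlock in this article becomes much less likely if both transactions modify `SalesLT.Product` before `SalesLT.ProductDescription`. The following outline is a minimal sketch of that pattern, not a drop-in replacement for your application logic:
+
+```sql
+-- Both transactions follow the same order: SalesLT.Product first, then SalesLT.ProductDescription.
+BEGIN TRAN;
+
+    UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
+    WHERE Color = 'Red';
+
+    UPDATE SalesLT.ProductDescription SET Description = Description
+    FROM SalesLT.ProductDescription as pd
+    JOIN SalesLT.ProductModelProductDescription as pmpd on
+        pd.ProductDescriptionID = pmpd.ProductDescriptionID
+    JOIN SalesLT.ProductModel as pm on
+        pmpd.ProductModelID = pm.ProductModelID
+    JOIN SalesLT.Product as p on
+        pm.ProductModelID = p.ProductModelID
+    WHERE p.Color = 'Red';
+
+COMMIT TRAN;
+GO
+```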
+
+## Prevent a deadlock from reoccurring
+
+There are multiple techniques available to prevent deadlocks from reoccurring, including index tuning, forcing plans with Query Store, and modifying Transact-SQL queries.
+
+- **Review the table's clustered index**. Most tables benefit from clustered indexes, but often, tables are implemented as [heaps](/sql/relational-databases/indexes/heaps-tables-without-clustered-indexes) by accident.
+
+ One way to check for a clustered index is by using the [sp_helpindex](/sql/relational-databases/system-stored-procedures/sp-helpindex-transact-sql) system stored procedure. For example, we can view a summary of the indexes on the `SalesLT.Product` table by executing the following statement:
+
+ ```sql
+ exec sp_helpindex 'SalesLT.Product';
+ GO
+ ```
+
+ Review the index_description column. A table can have only one clustered index. If a clustered index has been implemented for the table, the index_description will contain the word 'clustered'.
+
+ If no clustered index is present, the table is a heap. In this case, review if the table was intentionally created as a heap to solve a specific performance problem. Consider implementing a clustered index based on the [clustered index design guidelines](/sql/relational-databases/sql-server-index-design-guide#Clustered).
+
+ In some cases, creating or tuning a clustered index may reduce or eliminate blocking in deadlocks. In other cases, you may need to employ an additional technique such as the others in this list.
+
+- **Create or modify nonclustered indexes.** Tuning nonclustered indexes can help your modification queries find the data to update more quickly, which reduces the number of update locks required.
+
+ In our example deadlock, the query execution plan [found in Query Store](#find-query-execution-plans-in-query-store) contains a clustered index scan against the `PK_Product_ProductID` index. The deadlock graph indicates that a shared (S) lock wait on this index is a component in the deadlock.
+
+ :::image type="content" source="media/analyze-prevent-deadlocks/deadlock-execution-plan-clustered-index-scan.png" alt-text="Screenshot of a query execution plan. A clustered index scan is being performed against the PK_Product_ProductID index on the Product table.":::
+
+ This index scan is being performed because our update query needs to modify an indexed view named `vProductAndDescription`. As mentioned in the [Look for patterns that increase blocking](#look-for-patterns-that-increase-blocking) section of this article, indexed views referencing multiple tables may increase blocking and the likelihood of deadlocks.
+
+ If we create the following nonclustered index in the `AdventureWorksLT` database that "covers" the columns from `SalesLT.Product` referenced by the indexed view, this helps the query find rows much more efficiently:
+
+ ```sql
+ CREATE INDEX ix_Product_ProductID_Name_ProductModelID on SalesLT.Product (ProductID, Name, ProductModelID);
+ GO
+ ```
+
+ After creating this index, the deadlock no longer reoccurs.
+
+ When deadlocks involve modifications to columns referenced in foreign key constraints, ensure that indexes on the referencing table of the FOREIGN KEY support efficiently finding related rows.
+
+ While indexes can dramatically improve query performance in some cases, indexes also have overhead and management costs. Review [general index design guidelines](/sql/relational-databases/sql-server-index-design-guide#General_Design) to help assess the benefit of indexes before creating indexes, especially wide indexes and indexes on large tables.
+
+- **Assess the value of indexed views**. Another option to prevent our example deadlock from reoccurring is to drop the `SalesLT.vProductAndDescription` indexed view. If that indexed view is not being used, this will reduce the overhead of maintaining the indexed view over time.
+
+- **Use Snapshot isolation**. In some cases, [setting the transaction isolation level](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql) to snapshot for one or more of the transactions involved in a deadlock may prevent blocking and deadlocks from reoccurring.
+
+ This technique is most likely to be successful when used on SELECT statements when [read committed snapshot is disabled in a database](#how-deadlocks-occur-in-azure-sql-database). When read committed snapshot is disabled, SELECT queries using the read committed isolation level require shared (S) locks. Using snapshot isolation on these transactions removes the need for shared locks, which can prevent blocking and deadlocks.
+
+ In databases where read committed snapshot isolation has been enabled, SELECT queries do not require shared (S) locks, so deadlocks are more likely to occur between transactions that are modifying data. In cases where deadlocks occur between multiple transactions modifying data, snapshot isolation may result in an [update conflict](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-in-summary) instead of a deadlock. This similarly requires one of the transactions to retry its operation.
+
+- **Force a plan with Query Store**. You may find that one of the queries in the deadlock has multiple execution plans, and the deadlock only occurs when a specific plan is used. You can prevent the deadlock from reoccurring by [forcing a plan](/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql) in Query Store.
+
+- **Modify the Transact-SQL**. You may need to modify Transact-SQL to prevent the deadlock from reoccurring. Modifying Transact-SQL should be done carefully and changes should be rigorously tested to ensure that data is correct when modifications run concurrently. When rewriting Transact-SQL, consider:
+ - Ordering statements in transactions so that they access objects in the same order.
+ - Breaking apart transactions into smaller transactions when possible.
+ - Using query hints, if necessary, to optimize performance. You can apply hints without changing application code [using Query Store](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true).
+
+Find more ways to [minimize deadlocks in the Transaction locking and row versioning guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlock_minimizing).
+
+> [!NOTE]
+> In some cases, you may wish to [adjust the deadlock priority](/sql/t-sql/statements/set-deadlock-priority-transact-sql) of one or more sessions involved in a deadlock if it is important for one of the sessions to complete successfully without retrying, or when one of the queries involved in the deadlock is not critical and should be always chosen as the victim. While this does not prevent the deadlock from reoccurring, it may reduce the impact of future deadlocks.
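+
+For example, the following minimal sketch lowers the deadlock priority for a session that runs a non-critical batch, so that it's preferred as the deadlock victim over concurrent sessions running at the default priority:
+
+```sql
+-- Valid values are LOW, NORMAL, HIGH, or an integer from -10 to 10.
+SET DEADLOCK_PRIORITY LOW;
+```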
+
+## Drop an XEvents session
+
+You may wish to leave an XEvents session collecting deadlock information running on critical databases for long periods. Be aware that if you use an event file target, this may result in large files if multiple deadlocks occur. You may delete blob files from Azure Storage for an active trace, except for the file that is currently being written to.
+
+When you wish to remove an XEvents session, the Transact-SQL to drop the session is the same, regardless of the target type selected.
+
+To remove an XEvents session, run the following Transact-SQL. Before running the code, replace the name of the session with the appropriate value.
+
+```sql
+ALTER EVENT SESSION [deadlocks] ON DATABASE
+ STATE = STOP;
+GO
+
+DROP EVENT SESSION [deadlocks] ON DATABASE;
+GO
+```
+
+## Use Azure Storage Explorer
+
+[Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer) is a standalone application that simplifies working with event file targets stored in blobs in Azure Storage. You can use Storage Explorer to:
+
+- [Create a blob container](/azure/vs-azure-tools-storage-explorer-blobs#create-a-blob-container) to hold XEvent session data.
+- [Get the shared access signature (SAS)](/azure/vs-azure-tools-storage-explorer-blobs#get-the-sas-for-a-blob-container) for a blob container.
+ - As mentioned in [Collect deadlock graphs in Azure SQL Database with Extended Events](#collect-deadlock-graphs-in-azure-sql-database-with-extended-events), the read, write, and list permissions are required.
+ - Remove any leading `?` character from the `Query string` to use the value as the secret when [creating a database scoped credential](?tabs=event-file#create-a-database-scoped-credential).
+- [View and download](/azure/vs-azure-tools-storage-explorer-blobs#view-a-blob-containers-contents) extended event files from a blob container.
+
+[Download Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+
+## Next steps
+
+Learn more about performance in Azure SQL Database:
+
+- [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md)
+- [Transaction Locking and Row Versioning Guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide)
+- [SET TRANSACTION ISOLATION LEVEL](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql)
+- [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
+- [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/)
+- [Retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-configure.md
Previously updated : 12/15/2021 Last updated : 04/09/2022 # Configure and manage Azure AD authentication with Azure SQL
To grant your SQL Managed Instance Azure AD read permission using the Azure port
3. Navigate to the SQL Managed Instance you want to use for Azure AD integration.
- :::image type="content" source="./media/authentication-aad-configure/aad.png" alt-text="Screenshot of the Azure portal showing the Active Directory admin page open for the selected SQL managed instance.":::
+ ![Screenshot of the Azure portal showing the Active Directory admin page open for the selected SQL managed instance.](./media/authentication-aad-configure/active-directory-pane.png)
4. Select the banner on top of the Active Directory admin page and grant permission to the current user.
To grant your SQL Managed Instance Azure AD read permission using the Azure port
The process of changing the administrator may take several minutes. Then the new administrator appears in the Active Directory admin box.
+ For Azure AD users and groups, the **Object ID** is displayed next to the admin name. For applications (service principals), the **Application ID** is displayed.
+ After provisioning an Azure AD admin for your SQL Managed Instance, you can begin to create Azure AD server principals (logins) with the [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) syntax. For more information, see [SQL Managed Instance overview](../managed-instance/sql-managed-instance-paas-overview.md#azure-active-directory-integration). > [!TIP]
The following two procedures show you how to provision an Azure Active Directory
:::image type="content" source="./media/authentication-aad-configure/save-admin.png" alt-text="save admin":::
+ For Azure AD users and groups, the **Object ID** is displayed next to the admin name. For applications (service principals), the **Application ID** is displayed.
+ The process of changing the administrator may take several minutes. Then the new administrator appears in the **Active Directory admin** box. > [!NOTE]
For more information about CLI commands, see [az sql server](/cli/azure/sql/serv
## Configure your client computers
+> [!NOTE]
+> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
+>
+> SSMS and SSDT still use the Azure Active Directory Authentication Library (ADAL). If you want to continue using *ADAL.DLL* in your applications, you can use the links in this section to install the latest SSMS, ODBC, and OLE DB driver that contains the latest *ADAL.DLL* library.
+ On all client machines, from which your applications or users connect to SQL Database or Azure Synapse using Azure AD identities, you must install the following software: - .NET Framework 4.6 or later from [https://msdn.microsoft.com/library/5a4x27ek.aspx](/dotnet/framework/install/guide-for-developers).-- Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library.
+- [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration) or Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library.
- [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) - [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15&preserve-view=true) - [OLE DB Driver 18 for SQL Server](/sql/connect/oledb/download-oledb-driver-for-sql-server?view=sql-server-ver15&preserve-view=true)
The following procedures show you how to connect to a SQL Database with an Azure
To use integrated Windows authentication, your domain's Active Directory must be federated with Azure Active Directory, or should be a managed domain that is configured for seamless single sign-on for pass-through or password hash authentication. For more information, see [Azure Active Directory Seamless Single Sign-On](../../active-directory/hybrid/how-to-connect-sso.md).
-> [!NOTE]
-> [MSAL.NET (Microsoft.Identity.Client)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki#roadmap) for integrated Windows authentication is not supported for seamless single sign-on for pass-through and password hash authentication.
- Your client application (or a service) connecting to the database must be running on a domain-joined machine under a user's domain credentials. To connect to a database using integrated authentication and an Azure AD identity, the Authentication keyword in the database connection string must be set to `Active Directory Integrated`. The following C# code sample uses ADO .NET.
azure-sql Authentication Aad Service Principal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-service-principal-tutorial.md
Previously updated : 01/20/2022 Last updated : 03/29/2022
For a similar approach on how to set the **Directory Readers** permission for SQ
## Create a service principal (an Azure AD application) in Azure AD
-1. Follow the guide here to [register your app](active-directory-interactive-connect-azure-sql-db.md#register-your-app-and-set-permissions).
+Register your application if you have not already done so. To register an app, you need to either be an Azure AD admin or a user assigned the Azure AD *Application Developer* role. For more information about assigning roles, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
-2. You'll also need to create a client secret for signing in. Follow the guide here to [upload a certificate or create a secret for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options).
+Completing an app registration generates and displays an **Application ID**.
-3. Record the following from your application registration. It should be available from your **Overview** pane:
+To register your application:
+
+1. In the Azure portal, select **Azure Active Directory** > **App registrations** > **New registration**.
+
+ ![App registration](./media/active-directory-interactive-connect-azure-sql-db/image1.png)
+
+ After the app registration is created, the **Application ID** value is generated and displayed.
+
+ ![App ID displayed](./media/active-directory-interactive-connect-azure-sql-db/image2.png)
+
+1. You'll also need to create a client secret for signing in. Follow the guide here to [upload a certificate or create a secret for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options).
+
+1. Record the following from your application registration. It should be available from your **Overview** pane:
- **Application ID** - **Tenant ID** - This should be the same as before
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
Previously updated : 12/16/2021 Last updated : 04/06/2022 # Create server with Azure AD-only authentication enabled in Azure SQL
This how-to guide outlines the steps to create a [logical server](logical-server
## Permissions
-To provision a logical server or managed instance, you'll need to have the appropriate permissions to create these resources. Azure users with higher permissions, such as subscription [Owners](../../role-based-access-control/built-in-roles.md#owner), [Contributors](../../role-based-access-control/built-in-roles.md#contributor), [Service Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles), and [Co-Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles) have the privilege to create a SQL server or managed instance. To create these resources with the least privileged Azure RBAC role, use the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for Managed Instance.
+To provision a logical server or managed instance, you'll need to have the appropriate permissions to create these resources. Azure users with higher permissions, such as subscription [Owners](../../role-based-access-control/built-in-roles.md#owner), [Contributors](../../role-based-access-control/built-in-roles.md#contributor), [Service Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles), and [Co-Administrators](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles) have the privilege to create a SQL server or managed instance. To create these resources with the least privileged Azure RBAC role, use the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for SQL Managed Instance.
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) Azure RBAC role doesn't have enough permissions to create a server or instance with Azure AD-only authentication enabled. The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role will be required to manage the Azure AD-only authentication feature after server or instance creation.
Replace the following values in the example:
- `<ResourceGroupName>`: Name of the resource group for your logical server - `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin` - `<Location>`: Location of the server, such as `westus2`, or `centralus`-- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**
+- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**. If you're using an application (service principal) as the Azure AD admin, replace this value with the **Application ID**. You will need to update the `principalType` as well.
```rest Import-Module Azure
$authHeader = @{
# Server admin (login and password) is generated by the system # Authentication body
-# The sid is the Azure AD Object ID for the user
+# The sid is the Azure AD Object ID for the user or group, and Application ID for applications. Update the principalType as well
$body = '{ "location": "<Location>",
You can also use the following template. Use a [Custom deployment in the Azure p
"aad_admin_objectid": { "type": "String", "metadata": {
- "description": "The Object ID of the Azure AD admin."
+ "description": "The Object ID of the Azure AD admin if the admin is a user or group. For Applications, use the Application ID."
} }, "aad_admin_tenantid": {
You can also use the following template. Use a [Custom deployment in the Azure p
1. Fill out the mandatory information required on the **Basics** tab for **Project details** and **Managed Instance details**. This is a minimum set of information required to provision a SQL Managed Instance.
- :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-managed-instance-create-basic.png" alt-text="Azure portal screenshot of the create Managed Instance basic tab ":::
+ :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-managed-instance-create-basic.png" alt-text="Azure portal screenshot of the create SQL Managed Instance basic tab ":::
For more information on the configuration options, see [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
You can also use the following template. Use a [Custom deployment in the Azure p
1. Select **Set admin**, which brings up a menu to select an Azure AD principal as your managed instance Azure AD administrator. When you're finished, use the **Select** button to set your admin.
- :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-managed-instance-create-basic-choose-authentication.png" alt-text="Azure portal screenshot of the create Managed Instance basic tab and choosing Azure AD only authentication":::
+ :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-managed-instance-create-basic-choose-authentication.png" alt-text="Azure portal screenshot of the create SQL Managed Instance basic tab and choosing Azure AD only authentication":::
1. You can leave the rest of the settings default. For more information on the **Networking**, **Security**, or other tabs and settings, follow the guide in the article [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
Replace the following values in the example:
- `<ResourceGroupName>`: Name of the resource group for your logical server - `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin` - `<Location>`: Location of the server, such as `westus2`, or `centralus`-- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**
+- `<objectId>`: Can be found by going to the [Azure portal](https://portal.azure.com), and going to your **Azure Active Directory** resource. In the **User** pane, search for the Azure AD user and find their **Object ID**. If you're using an application (service principal) as the Azure AD admin, replace this value with the **Application ID**. You'll need to update the `principalType` as well.
- The `subnetId` parameter needs to be updated with the `<ResourceGroupName>`, the `Subscription ID`, `<VNetName>`, and `<SubnetName>`
Use a [Custom deployment in the Azure portal](https://portal.azure.com/#create/M
"aad_admin_objectid": { "type": "String", "metadata": {
- "description": "The Object ID of the Azure AD admin."
+      "description": "The Object ID of the Azure AD admin if the admin is a user or group. For Applications, use the Application ID."
} }, "aad_admin_tenantid": {
Use a [Custom deployment in the Azure portal](https://portal.azure.com/#create/M
{ "name": "allow_redirect_inbound", "properties": {
- "description": "Allow inbound redirect traffic to Managed Instance inside the virtual network",
+ "description": "Allow inbound redirect traffic to SQL Managed Instance inside the virtual network",
"protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "11000-11999",
Use a [Custom deployment in the Azure portal](https://portal.azure.com/#create/M
### Grant Directory Readers permissions
-Once the deployment is complete for your managed instance, you may notice that the Managed Instance needs **Read** permissions to access Azure Active Directory. Read permissions can be granted by clicking on the displayed message in the Azure portal by a person with enough privileges. For more information, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
+Once the deployment is complete for your managed instance, you may notice that the SQL Managed Instance needs **Read** permissions to access Azure Active Directory. Read permissions can be granted by clicking on the displayed message in the Azure portal by a person with enough privileges. For more information, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
:::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-portal-read-permissions.png" alt-text="screenshot of the Active Directory admin menu in Azure portal showing Read permissions needed":::
azure-sql Configure Max Degree Of Parallelism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/configure-max-degree-of-parallelism.md
Title: "Configure the max degree of parallelism (MAXDOP)" description: Learn about the max degree of parallelism (MAXDOP). Previously updated : "04/12/2021" Last updated : "04/06/2022" dev_langs: - "TSQL"
> [!Tip] > We recommend that customers avoid setting MAXDOP to 0 even if it does not appear to cause problems currently.
- Excessive parallelism becomes most problematic when there are more concurrent requests than can be supported by the CPU and worker thread resources provided by the service objective. Avoid MAXDOP 0 to reduce the risk of potential future problems due to excessive parallelism if a database is scaled up, or if future hardware generations in Azure SQL Database provide more cores for the same database service objective.
+ Excessive parallelism becomes most problematic when there are more concurrent requests than can be supported by the CPU and worker thread resources provided by the service objective. Avoid MAXDOP 0 to reduce the risk of potential future problems due to excessive parallelism if a database is scaled up, or if future hardware configurations in Azure SQL Database provide more cores for the same database service objective.
### Modifying MAXDOP
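As a minimal sketch of what this can look like (the value 8 is an illustrative placeholder, not a recommendation), MAXDOP can be set and verified at the database level with a database-scoped configuration:

```sql
-- Set MAXDOP for all queries in the current database (placeholder value; choose one appropriate for your workload).
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;

-- Verify the current database-scoped MAXDOP setting.
SELECT [name], [value]
FROM sys.database_scoped_configurations
WHERE [name] = 'MAXDOP';
```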
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/gateway-migration.md
Previously updated : 07/01/2019 Last updated : 04/06/2022 # Azure SQL Database traffic migration to newer Gateways [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Microsoft periodically refreshes hardware to optimize the customer experience. During these refreshes, Azure adds gateways built on newer hardware generations, migrates traffic to them, and eventually decommissions gateways built on older hardware in some regions.
+Microsoft periodically refreshes hardware to optimize the customer experience. During these refreshes, Azure adds gateways built on newer hardware, migrates traffic to them, and eventually decommissions gateways built on older hardware in some regions.
To avoid service disruptions during refreshes, allow the communication with SQL Gateway IP subnet ranges for the region. Review [SQL Gateway IP subnet ranges](connectivity-architecture.md#gateway-ip-addresses) and include the ranges for your region.
azure-sql High Cpu Diagnose Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/high-cpu-diagnose-troubleshoot.md
Previously updated : 12/15/2021 Last updated : 04/06/2022 # Diagnose and troubleshoot high CPU on Azure SQL Database
Consider experimenting with small changes in the MAXDOP configuration at the dat
You may find that your workload's queries and indexes are properly tuned, or that performance tuning requires changes that you cannot make in the short term due to internal processes or other reasons. Adding more CPU resources may be beneficial for these databases. You can [scale database resources with minimal downtime](scale-resources.md).
-You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the [hardware generation](service-tiers-sql-database-vcore.md#hardware-generations) for databases using the [vCore purchasing model](service-tiers-sql-database-vcore.md).
+You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the [hardware configuration](service-tiers-sql-database-vcore.md#hardware-configuration) for databases using the [vCore purchasing model](service-tiers-sql-database-vcore.md).
Under the [DTU-based purchasing model](service-tiers-dtu.md), you can raise your service tier and increase the number of database transaction units (DTUs). A DTU represents a blended measure of CPU, memory, reads, and writes. One benefit of the vCore purchasing model is that it allows more granular control over the hardware in use and the number of vCores. You can [migrate Azure SQL Database from the DTU-based model to the vCore-based model](migrate-dtu-to-vcore.md) to transition between purchasing models.
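As one illustration, the service objective of a database can be changed with Transact-SQL; `MyDatabase` and the `GP_Gen5_8` service objective below are placeholder values, not recommendations:

```sql
-- Scale an existing database to a larger vCore service objective (placeholder values).
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_8');
```

The scale operation completes asynchronously, and connections may be briefly dropped when the change takes effect.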
Learn more about monitoring and performance tuning Azure SQL Database in the fol
* [Enable automatic tuning to monitor queries and improve workload performance](automatic-tuning-enable.md) * [Query processing architecture guide](/sql/relational-databases/query-processing-architecture-guide) * [Best practices with Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store)
-* [Detectable types of query performance bottlenecks in Azure SQL Database](../identify-query-performance-issues.md)
+* [Detectable types of query performance bottlenecks in Azure SQL Database](../identify-query-performance-issues.md)
+* [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md)
azure-sql Logical Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/logical-servers.md
To create and manage servers, databases, and firewalls with Transact-SQL, use th
|[sys.dm_db_resource_stats (Azure SQL Database)](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database)| Returns CPU, IO, and memory consumption for a database in Azure SQL Database. One row exists for every 15 seconds, even if there is no activity in the database.| |[sys.resource_stats (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database)|Returns CPU usage and storage data for a database in Azure SQL Database. The data is collected and aggregated within five-minute intervals.| |[sys.database_connection_stats (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-database-connection-stats-azure-sql-database)|Contains statistics for database connectivity events for Azure SQL Database, providing an overview of database connection successes and failures. |
-|[sys.event_log (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-event-log-azure-sql-database)|Returns successful Azure SQL Database database connections, connection failures, and deadlocks for Azure SQL Database. You can use this information to track or troubleshoot your database activity.|
+|[sys.event_log (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-event-log-azure-sql-database)|Returns successful database connections and connection failures for Azure SQL Database. You can use this information to track or troubleshoot your database activity.|
|[sp_set_firewall_rule (Azure SQL Database)](/sql/relational-databases/system-stored-procedures/sp-set-firewall-rule-azure-sql-database)|Creates or updates the server-level firewall settings for your server. This stored procedure is only available in the master database to the server-level principal login. A server-level firewall rule can only be created using Transact-SQL after the first server-level firewall rule has been created by a user with Azure-level permissions| |[sys.firewall_rules (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-firewall-rules-azure-sql-database)|Returns information about the server-level firewall settings associated with a server.| |[sp_delete_firewall_rule (Azure SQL Database)](/sql/relational-databases/system-stored-procedures/sp-delete-firewall-rule-azure-sql-database)|Removes server-level firewall settings from a server. This stored procedure is only available in the master database to the server-level principal login.|
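As a brief sketch of how these firewall objects are used (the rule name and IP addresses below are placeholders), `sp_set_firewall_rule` is executed in the `master` database:

```sql
-- Create or update a server-level firewall rule (placeholder name and IP range).
EXECUTE sp_set_firewall_rule
    @name = N'AllowClientRange',
    @start_ip_address = '203.0.113.1',
    @end_ip_address = '203.0.113.10';

-- Review the existing server-level firewall rules.
SELECT * FROM sys.firewall_rules;
```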
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/migrate-dtu-to-vcore.md
Previously updated : 01/18/2022 Last updated : 04/06/2022 # Migrate Azure SQL Database from the DTU-based model to the vCore-based model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
For most DTU to vCore migration scenarios, databases and elastic pools in the Ba
To choose the service objective, or compute size, for the migrated database in the vCore model, you can use a simple but approximate rule of thumb: every 100 DTUs in the Basic or Standard tiers require *at least* 1 vCore, and every 125 DTUs in the Premium tier require *at least* 1 vCore. > [!TIP]
-> This rule is approximate because it does not consider the hardware generation used for the DTU database or elastic pool.
+> This rule is approximate because it does not consider the specific type of hardware used for the DTU database or elastic pool.
-In the DTU model, the system may select any available [hardware generation](service-tiers-dtu.md#hardware-generations) for your database or elastic pool. Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing higher or lower DTU or eDTU values.
+In the DTU model, the system may select any available [hardware configuration](service-tiers-dtu.md#hardware-configuration) for your database or elastic pool. Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing higher or lower DTU or eDTU values.
-In the vCore model, customers must make an explicit choice of both the hardware generation and the number of vCores (logical CPUs). While DTU model does not offer these choices, the hardware generation and the number of logical CPUs used for every database and elastic pool are exposed via dynamic management views. This makes it possible to determine the matching vCore service objective more precisely.
+In the vCore model, customers must make an explicit choice of both the hardware configuration and the number of vCores (logical CPUs). While the DTU model does not offer these choices, the hardware type and the number of logical CPUs used for every database and elastic pool are exposed via dynamic management views. This makes it possible to determine the matching vCore service objective more precisely.
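For instance, a minimal sketch (not the full mapping query referenced below) that surfaces the current DTU service objective and the hardware-related information exposed for a database might look like this:

```sql
-- Current DTU service tier/objective, plus the resource governance row for the database.
-- The slo_name value encodes the hardware and compute size in effect.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS dtu_service_tier,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS dtu_service_objective,
       rg.slo_name,
       rg.dtu_limit,
       rg.cpu_limit AS logical_cpus
FROM sys.dm_user_db_resource_governance AS rg;
```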
The following approach uses this information to determine a vCore service objective with a similar allocation of resources, to obtain a similar level of performance after migration to the vCore model. ### DTU to vCore mapping
-A T-SQL query below, when executed in the context of a DTU database to be migrated, returns a matching (possibly fractional) number of vCores in each hardware generation in the vCore model. By rounding this number to the closest number of vCores available for [databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md) in each hardware generation in the vCore model, customers can choose the vCore service objective that is the closest match for their DTU database or elastic pool.
+A T-SQL query below, when executed in the context of a DTU database to be migrated, returns a matching (possibly fractional) number of vCores in each hardware configuration in the vCore model. By rounding this number to the closest number of vCores available for [databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md) in each hardware configuration in the vCore model, customers can choose the vCore service objective that is the closest match for their DTU database or elastic pool.
Sample migration scenarios using this approach are described in the [Examples](#dtu-to-vcore-migration-examples) section.
FROM dtu_vcore_map;
### Additional factors
-Besides the number of vCores (logical CPUs) and the hardware generation, several other factors may influence the choice of vCore service objective:
+Besides the number of vCores (logical CPUs) and the type of hardware, several other factors may influence the choice of vCore service objective:
- The mapping Transact-SQL query matches DTU and vCore service objectives in terms of their CPU capacity, therefore the results will be more accurate for CPU-bound workloads.-- For the same hardware generation and the same number of vCores, IOPS and transaction log throughput resource limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be possible to lower the number of vCores in the vCore model to achieve the same level of performance. Actual resource limits for DTU and vCore databases are exposed in the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) view. Comparing these values between the DTU database or pool to be migrated, and a vCore database or pool with an approximately matching service objective will help you select the vCore service objective more precisely.-- The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be migrated, and for each hardware generation in the vCore model. Ensuring similar or higher total memory after migration to vCore is important for workloads that require a large memory data cache to achieve sufficient performance, or workloads that require large memory grants for query processing. For such workloads, depending on actual performance, it may be necessary to increase the number of vCores to get sufficient total memory.
+- For the same hardware type and the same number of vCores, IOPS and transaction log throughput resource limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be possible to lower the number of vCores in the vCore model to achieve the same level of performance. Actual resource limits for DTU and vCore databases are exposed in the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) view. Comparing these values between the DTU database or pool to be migrated, and a vCore database or pool with an approximately matching service objective will help you select the vCore service objective more precisely.
+- The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be migrated, and for each hardware configuration in the vCore model. Ensuring similar or higher total memory after migration to vCore is important for workloads that require a large memory data cache to achieve sufficient performance, or workloads that require large memory grants for query processing. For such workloads, depending on actual performance, it may be necessary to increase the number of vCores to get sufficient total memory.
- The [historical resource utilization](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) of the DTU database should be considered when choosing the vCore service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than the number returned by the mapping query. Conversely, DTU databases where consistently high CPU utilization causes inadequate workload performance may require more vCores than returned by the query. - If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier. Note that the max number of concurrent [workers](resource-limits-logical-server.md#sessions-workers-and-requests) in serverless is 75% of the limit in provisioned compute for the same number of max vCores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vCores configured, which is less than the per-core memory for provisioned compute. For example, on Gen5 max memory is 120 GB when 40 max vCores are configured in serverless, vs. 204 GB for a 40 vCore provisioned compute.-- In the vCore model, the supported maximum database size may differ depending on hardware generation. For large databases, check supported maximum sizes in the vCore model for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
+- In the vCore model, the supported maximum database size may differ depending on hardware. For large databases, check supported maximum sizes in the vCore model for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
- For elastic pools, the [DTU](resource-limits-dtu-elastic-pools.md) and [vCore](resource-limits-vcore-elastic-pools.md) models have differences in the maximum supported number of databases per pool. This should be considered when migrating elastic pools with many databases.-- Some hardware generations may not be available in every region. Check availability under [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations).
+- Some hardware configurations may not be available in every region. Check availability under [Hardware configuration for SQL Database](./service-tiers-sql-database-vcore.md#hardware-configuration).
> [!IMPORTANT] > The DTU to vCore sizing guidelines above are provided to help in the initial estimation of the target database service objective. >
-> The optimal configuration of the target database is workload-dependent. Thus, to achieve the optimal price/performance ratio after migration, you may need to leverage the flexibility of the vCore model to adjust the number of vCores, hardware generation, and service and compute tiers. You may also need to adjust database configuration parameters, such as [maximum degree of parallelism](configure-max-degree-of-parallelism.md), and/or change the database [compatibility level](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) to enable recent improvements in the database engine.
+> The optimal configuration of the target database is workload-dependent. Thus, to achieve the optimal price/performance ratio after migration, you may need to leverage the flexibility of the vCore model to adjust the number of vCores, hardware configuration, and service and compute tiers. You may also need to adjust database configuration parameters, such as [maximum degree of parallelism](configure-max-degree-of-parallelism.md), and/or change the database [compatibility level](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) to enable recent improvements in the database engine.
> ### DTU to vCore migration examples
The mapping query returns the following result (some columns not shown for brevi
|-|-|-|--|--|--|--| |0.25|Gen4|0.42|0.250|7|0.425|5.05|
-We see that the DTU database has the equivalent of 0.25 logical CPUs (vCores), with 0.42 GB of memory per vCore, and is using Gen4 hardware. The smallest vCore service objectives in the Gen4 and Gen5 hardware generations, **GP_Gen4_1** and **GP_Gen5_2**, provide more compute resources than the Standard S0 database, so a direct match is not possible. Since Gen4 hardware is being [decommissioned](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/), the **GP_Gen5_2** option is preferred. Additionally, if the workload is well-suited for the [Serverless](serverless-tier-overview.md) compute tier, then **GP_S_Gen5_1** would be a closer match.
+We see that the DTU database has the equivalent of 0.25 logical CPUs (vCores), with 0.42 GB of memory per vCore, and is using Gen4 hardware. The smallest vCore service objectives in the Gen4 and Gen5 hardware configurations, **GP_Gen4_1** and **GP_Gen5_2**, provide more compute resources than the Standard S0 database, so a direct match is not possible. Since Gen4 hardware is being [decommissioned](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/), the **GP_Gen5_2** option is preferred. Additionally, if the workload is well-suited for the [Serverless](serverless-tier-overview.md) compute tier, then **GP_S_Gen5_1** would be a closer match.
**Migrating a Premium P15 database**
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitoring-with-dmvs.md
For detailed information on dynamic management views, see [Dynamic Management Vi
## Monitor with SQL insights
-[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring Azure SQL managed instances, Azure SQL databases, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
+[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring managed instances, databases in Azure SQL Database, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
## Permissions
ORDER BY 2 DESC;
Slow or long-running queries can contribute to excessive resource consumption and be the consequence of blocked queries. The cause of the blocking can be poor application design, bad query plans, the lack of useful indexes, and so on. You can use the sys.dm_tran_locks view to get information about the current locking activity in a database. For example code, see [sys.dm_tran_locks (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql). For more information on troubleshooting blocking, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
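A minimal sketch of such a query is shown below; see the linked article for more complete examples:

```sql
-- Current lock requests in the database, including any that are waiting (blocked).
SELECT request_session_id, resource_type, request_mode, request_status,
       resource_associated_entity_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id;
```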
+### Monitoring deadlocks
+
+In some cases, two or more queries may mutually block one another, resulting in a deadlock.
+
+You can create an Extended Events trace in a database in Azure SQL Database to capture deadlock events, then find related queries and their execution plans in Query Store. Learn more in [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md).
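One way such a trace can look is sketched below; the session name is arbitrary, and the linked article covers event targets and how to query the captured events in detail:

```sql
-- Create and start a database-scoped Extended Events session that captures deadlock reports.
CREATE EVENT SESSION [deadlock_capture] ON DATABASE
    ADD EVENT sqlserver.database_xml_deadlock_report
    ADD TARGET package0.ring_buffer
    WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [deadlock_capture] ON DATABASE
    STATE = START;
```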
+
+For Azure SQL Managed Instance, refer to the [Deadlocks](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlock_tools) section of the [Transaction locking and row versioning guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide).
+ ### Monitoring query plans An inefficient query plan also may increase CPU consumption. The following example uses the [sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql) view to determine which query uses the most cumulative CPU.
ORDER BY highest_cpu_queries.total_worker_time DESC;
- [Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md) - [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)-- [Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance](performance-guidance.md)
+- [Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance](performance-guidance.md)
+- [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md)
+- [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md)
azure-sql Performance Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/performance-guidance.md
Although Azure SQL Database and Azure SQL Managed Instance service tiers are des
Applications that have inherent data access concurrency issues, for example deadlocking, might not benefit from a higher compute size. Consider reducing round trips against the database by caching data on the client side with the Azure Caching service or another caching technology. See [Application tier caching](#application-tier-caching).
+ To prevent deadlocks from recurring in Azure SQL Database, see [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md). For Azure SQL Managed Instance, refer to the [Deadlocks](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlock_tools) section of the [Transaction locking and row versioning guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide).
+ ## Tune your database In this section, we look at some techniques that you can use to tune your database to gain the best performance for your application and run it at the lowest possible compute size. Some of these techniques match traditional SQL Server tuning best practices, but others are specific to Azure SQL Database and Azure SQL Managed Instance. In some cases, you can examine the consumed resources for a database to find areas to further tune and extend traditional SQL Server techniques to work in Azure SQL Database and Azure SQL Managed Instance.
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/purchasing-models.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # Compare vCore and DTU-based purchasing models of Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
A virtual core (vCore) represents a logical CPU and offers you the option to cho
In the vCore-based purchasing model for SQL Database, you can choose between the General Purpose and Business Critical service tiers. Review [service tiers](service-tiers-sql-database-vcore.md#service-tiers) to learn more. For single databases, you can also choose the [Hyperscale service tier](service-tier-hyperscale.md).
-In the vCore-based purchasing model, you pay for:
--- Compute resources (the service tier + the number of vCores and the amount of memory + the generation of hardware).-- The type and amount of data and log storage.-- Backup storage.
+In the vCore-based purchasing model, your costs depend on the choice and usage of:
+- Service tier
+- Hardware configuration
+- Compute resources (the number of vCores and the amount of memory)
+- Reserved database storage
+- Actual backup storage
## DTU purchasing model
azure-sql Quota Increase Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/quota-increase-request.md
Previously updated : 06/04/2020 Last updated : 04/06/2022 # Request quota increases for Azure SQL Database and SQL Managed Instance
If your subscription needs access in a particular region, select the **Region ac
### Request enabling specific hardware in a region
-If a hardware generation you want to use is not available in your region, you may request it using the following steps. For more information on hardware generations and regional availability, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
+If the hardware you want to use is not available in your region, you may request it using the following steps. For more information on hardware regional availability, see [Hardware configurations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-configuration) or [Hardware configurations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-configurations).
1. Select the **Other quota request** quota type.
-1. In the **Description** field, state your request, including the name of the hardware generation and the name of the region you need it in.
+1. In the **Description** field, state your request, including the name of the hardware and the name of the region you need it in.
![Request hardware in a new region](./media/quota-increase-request/hardware-in-new-region.png)
azure-sql Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/reserved-capacity-overview.md
Previously updated : 10/13/2020 Last updated : 04/06/2022 # Save costs for resources with reserved capacity - Azure SQL Database & SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
For more information about how enterprise customers and Pay-As-You-Go customers
## Determine correct size before purchase
-The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed database or managed instance within a specific region and using the same performance tier and hardware generation.
+The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed database or managed instance within a specific region and using the same performance tier and hardware configuration.
For example, let's suppose that you are running one general purpose Gen5 16-vCore elastic pool and two business critical Gen5 4-vCore single databases. Further, let's suppose that you plan to deploy an additional general purpose Gen5 16-vCore elastic pool and one business critical Gen5 32-vCore elastic pool within the next month. Also, let's suppose that you know that you will need these resources for at least 1 year. In this case, you should purchase a 32 (2x16) vCore 1-year reservation for single database/elastic pool general purpose - Gen5 and a 40 (2x4 + 32) vCore 1-year reservation for single database/elastic pool business critical - Gen5.
azure-sql Serverless Tier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/serverless-tier-overview.md
Previously updated : 9/28/2021 Last updated : 04/06/2022 # Azure SQL Database serverless [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
If using [customer managed transparent data encryption](transparent-data-encrypt
Creating a new database or moving an existing database into a serverless compute tier follows the same pattern as creating a new database in provisioned compute tier and involves the following two steps.
-1. Specify the service objective. The service objective prescribes the service tier, hardware generation, and max vCores. For service objective options, see [serverless resource limits](resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5)
+1. Specify the service objective. The service objective prescribes the service tier, hardware configuration, and max vCores. For service objective options, see [serverless resource limits](resource-limits-vcore-single-databases.md#general-purposeserverless-computegen5)
2. Optionally, specify the min vCores and auto-pause delay to change their default values. The following table shows the available values for these parameters.
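For step 1, a minimal Transact-SQL sketch that creates a new serverless database is shown below; `MyServerlessDB` and the `GP_S_Gen5_1` service objective are placeholder values. The min vCores and auto-pause delay in step 2 are configured through the Azure portal, PowerShell, the Azure CLI, or the REST API rather than Transact-SQL.

```sql
-- Run in the master database of the logical server.
-- Create a new serverless database in the General Purpose tier on Gen5 hardware
-- with a maximum of 1 vCore (placeholder database name and service objective).
CREATE DATABASE [MyServerlessDB]
    (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1');
```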
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-dtu.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # DTU-based purchasing model overview [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The input values for this formula can be obtained from [sys.dm_db_resource_stats
> [!NOTE] > The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However, because the SQL Database engine typically uses all available memory for its data cache to improve performance, the `avg_memory_usage_percent` value will usually be close to 100 percent, regardless of current database load. Therefore, even though memory does indirectly influence the DTU limit, it is not used in the DTU utilization formula.
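As a sketch, assuming the documented approach of taking the highest of the CPU, data IO, and log write percentages, recent DTU utilization can be estimated as follows:

```sql
-- Estimate average DTU utilization per 15-second interval as the highest of the
-- CPU, data IO, and log write percentages (memory is excluded, per the note above).
SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS resources(v)) AS avg_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```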
-## Hardware generations
+## Hardware configuration
-In the DTU-based purchasing model, customers cannot choose the hardware generation used for their databases. While a given database usually stays on a specific hardware generation for a long time (commonly for multiple months), there are certain events that can cause a database to be moved to another hardware generation.
+In the DTU-based purchasing model, customers cannot choose the hardware configuration used for their databases. While a given database usually stays on a specific type of hardware for a long time (commonly for multiple months), there are certain events that can cause a database to be moved to different hardware.
-For example, a database can be moved to a different hardware generation if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
+For example, a database can be moved to different hardware if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
-If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](dtu-benchmark.md) workload will remain substantially identical as the database moves to a different hardware generation, as long as its service objective (the number of DTUs) stays the same.
+If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](dtu-benchmark.md) workload will remain substantially identical as the database moves to a different hardware type, as long as its service objective (the number of DTUs) stays the same.
-However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads will benefit from different hardware configuration and features. Therefore, for workloads other than the [DTU benchmark](dtu-benchmark.md), it's possible to see performance differences if the database moves from one hardware generation to another.
+However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads may benefit from different hardware configurations and features. Therefore, for workloads other than the [DTU benchmark](dtu-benchmark.md), it's possible to see performance differences if the database moves from one type of hardware to another.
For example, an application that is sensitive to network latency can see better performance on Gen5 hardware vs. Gen4 due to the use of Accelerated Networking in Gen5, but an application using intensive read IO can see better performance on Gen4 hardware versus Gen5 due to a higher memory per core ratio on Gen4.
-Customers with workloads that are sensitive to hardware changes or customers who wish to control the choice of hardware generation for their database can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware generation during database creation and scaling. In the vCore model, resource limits of each service objective on each hardware generation are documented, for both [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware generations in the vCore model, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
+Customers can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware configuration during database creation and scaling. In the vCore model, detailed resource limits of each service objective in each hardware configuration are documented for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware in the vCore model, see [Hardware configuration for SQL Database](./service-tiers-sql-database-vcore.md#hardware-configuration) or [Hardware configuration for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-configurations).
## Compare service tiers
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 03/02/2022 Last updated : 04/06/2022 # vCore purchasing model - Azure SQL Database
This article reviews the [vCore purchasing model](service-tiers-vcore.md) for [A
The vCore purchasing model used by Azure SQL Database provides several benefits over the DTU purchasing model: - Higher compute, memory, I/O, and storage limits.-- Control over the hardware generation to better match compute and memory requirements of the workload.
+- Choice of hardware configuration to better match compute and memory requirements of the workload.
- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md). - Greater transparency in the hardware details that power the compute, that facilitates planning for migrations from on-premises deployments. - [Reserved instance pricing](reserved-capacity-overview.md) is only available for vCore purchasing model.
Compute tier options in the vCore model include the provisioned and [serverless]
- While the **provisioned compute tier** bills for the amount of compute provisioned at a fixed price per hour, the **serverless compute tier** bills for the amount of compute used, per second.
-## Hardware generations
+## Hardware configuration
-Hardware generation options in the vCore model include Gen 4/5, M-series, Fsv2-series, and DC-series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
+Hardware configurations in the vCore model include Gen4, Gen5, M-series, Fsv2-series, and DC-series. Hardware configuration defines compute and memory limits and other characteristics that impact workload performance.
### Gen4/Gen5
For regions where Gen4/Gen5 is available, see [Gen4/Gen5 availability](#gen4gen5
### Fsv2-series -- Fsv2-series is a compute optimized hardware option delivering low CPU latency and high clock speed for the most CPU demanding workloads.-- Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than Gen5, and the 72 vCore size can provide more CPU performance for less cost than 80 vCores on Gen5. -- Fsv2 provides less memory and tempdb per vCore than other hardware so workloads sensitive to those limits may want to consider Gen5 or M-series instead.  
+- Fsv2-series is a compute optimized hardware configuration delivering low CPU latency and high clock speed for the most CPU demanding workloads.
+- Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than other types of hardware. For example, the 72 vCore Fsv2 compute size can provide more CPU performance than 80 vCores on Gen5, at lower cost.
+- Fsv2 provides less memory and tempdb per vCore than other hardware, so workloads sensitive to those limits may perform better on Gen5 or M-series.
Fsv2-series is only supported in the General Purpose tier. For regions where Fsv2-series is available, see [Fsv2-series availability](#fsv2-series-1). ### M-series -- M-series is a memory optimized hardware option for workloads demanding more memory and higher compute limits than provided by Gen5.
+- M-series is a memory optimized hardware configuration for workloads demanding more memory and higher compute limits than provided by other types of hardware.
- M-series provides 29 GB per vCore and up to 128 vCores, which increases the memory limit relative to Gen5 by 8x to nearly 4 TB.
-M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions where M-series is available, see [M-series availability](#m-series-1).
+M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions where M-series is available, see [M-series availability](#m-series-1).
#### Azure offer types supported by M-series
-To access M-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by M-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+To create databases or elastic pools on M-series hardware, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by M-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
<!-- To enable M-series hardware for a subscription and region, a support request must be opened. The subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). If the support request is approved, then the selection and provisioning experience of M-series follows the same pattern as for other hardware generations. For regions where M-series is available, see [M-series availability](#m-series).
To enable M-series hardware for a subscription and region, a support request mus
- DC-series is designed for workloads that process sensitive data and demand confidential query processing capabilities, provided by Always Encrypted with secure enclaves. - DC-series hardware provides balanced compute and memory resources.
-DC-series is only supported for the Provisioned compute (Serverless is not supported) and it does not support zone redundancy. For regions where DC-series is available, see [DC-series availability](#dc-series-1).
+DC-series is only supported for Provisioned compute (Serverless is not supported) and does not support zone redundancy. For regions where DC-series is available, see [DC-series availability](#dc-series-1).
#### Azure offer types supported by DC-series
-To access DC-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+To create databases or elastic pools on DC-series hardware, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+### Selecting hardware configuration
-### Selecting a hardware generation
+You can select the hardware configuration for a database or elastic pool in SQL Database at the time of creation. You can also change the hardware configuration of an existing database or elastic pool.
-In the Azure portal, you can select the hardware generation for a database or pool in SQL Database at the time of creation, or you can change the hardware generation of an existing database or pool.
-
-**To select a hardware generation when creating a SQL Database or pool**
+**To select a hardware configuration when creating a SQL Database or pool**
For detailed information, see [Create a SQL Database](single-database-create-quickstart.md).
On the **Basics** tab, select the **Configure database** link in the **Compute +
:::image type="content" source="./media/service-tiers-vcore/configure-sql-database.png" alt-text="configure SQL database" loc-scope="azure-portal":::
-Select the desired hardware generation:
+Select the desired hardware configuration:
:::image type="content" source="./media/service-tiers-vcore/select-hardware.png" alt-text="select hardware for SQL database" loc-scope="azure-portal":::
-**To change the hardware generation of an existing SQL Database or pool**
+**To change the hardware configuration of an existing SQL Database or pool**
For a database, on the Overview page, select the **Pricing tier** link:
For a database, on the Overview page, select the **Pricing tier** link:
For a pool, on the Overview page, select **Configure**.
-Follow the steps to change configuration, and select the hardware generation as described in the previous steps.
+Follow the steps to change the configuration, and select the hardware configuration as described in the previous steps.
### Hardware availability #### <a id="gen4gen5-1"></a> Gen4/Gen5
-Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new databases must be deployed on later hardware generations.
+Gen4 hardware is [being retired](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new databases must be deployed on other hardware configurations.
-Gen5 is available in all public regions worldwide.
+Gen5 hardware is available in all public regions worldwide.
#### Fsv2-series Fsv2-series is available in the following regions:
-Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, East Asia, East Us, France Central, India Central, Korea Central, Korea South, North Europe, South Africa North, Southeast Asia, UK South, UK West, West Europe, West Us 2.
+Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, East Asia, East US, France Central, India Central, Korea Central, Korea South, North Europe, South Africa North, Southeast Asia, UK South, UK West, West Europe, West US 2.
#### M-series
Approved support requests are typically fulfilled within 5 business days.
#### DC-series
-DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
+DC-series is available in the following regions:
+Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). On the **Basics** page, provide the following:
If you need DC-series in a currently unsupported region, [submit a support ticke
:::image type="content" source="./media/service-tiers-vcore/request-dc-series.png" alt-text="Request DC-series in a new region" loc-scope="azure-portal":::
-## Compute and memory
+## Compute resources (CPU and memory)
-The following table compares compute and memory between the different generations and compute tiers:
+The following table compares compute resources in different hardware configurations and compute tiers:
-|Hardware generation |Compute |Memory |
+|Hardware configuration |Compute |Memory |
|:|:|:|
|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
-|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extension (Intel SGX))<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
-
-\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7 and hardware generation for databases using Intel Xeon&reg; Platinum 8307C (Ice Lake) appear as Gen8. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+|DC-series | - Intel&reg; XEON E-2288G processors<br>- Featuring Intel Software Guard Extension (Intel SGX)<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
-For more information on vCore resource limits, review [single databases](resource-limits-vcore-single-databases.md), or [pooled databases](resource-limits-vcore-elastic-pools.md).
+\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, the hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, the hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7, and the hardware generation for databases using Intel Xeon&reg; Platinum 8307C (Ice Lake) appears as Gen8. For a given compute size and hardware configuration, resource limits are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+For more information, see resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
## Next steps
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-vcore.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # vCore purchasing model overview - Azure SQL Database and Azure SQL Managed Instance
This article provides a brief overview of the vCore purchasing model used by bot
[!INCLUDE [vcore-overview](../includes/vcore-overview.md)] > [!IMPORTANT]
-> In Azure SQL Database, compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per each database.
+> In Azure SQL Database, compute resources (CPU and memory), I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per each database.
-The vCore purchasing model provides transparency in the hardware details that power compute, control over the hardware generation, higher scaling granularity, and pricing discounts with the [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
+The vCore purchasing model provides transparency in database CPU, memory, and storage resource allocation, hardware configuration, higher scaling granularity, and pricing discounts with the [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
In the case of Azure SQL Database, the vCore purchasing model provides higher compute, memory, I/O, and storage limits than the DTU model.
azure-sql Single Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-manage.md
To create and manage the servers, databases, and firewalls with Transact-SQL, us
|[sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database)| Returns CPU, IO, and memory consumption for a database in Azure SQL Database. One row exists for every 15 seconds, even if there's no activity in the database.| |[sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database)|Returns CPU usage and storage data for a database in Azure SQL Database. The data is collected and aggregated within five-minute intervals.| |[sys.database_connection_stats](/sql/relational-databases/system-catalog-views/sys-database-connection-stats-azure-sql-database)|Contains statistics for SQL Database connectivity events, providing an overview of database connection successes and failures. |
-|[sys.event_log](/sql/relational-databases/system-catalog-views/sys-event-log-azure-sql-database)|Returns successful Azure SQL Database connections, connection failures, and deadlocks. You can use this information to track or troubleshoot your database activity with SQL Database.|
+|[sys.event_log](/sql/relational-databases/system-catalog-views/sys-event-log-azure-sql-database)|Returns successful Azure SQL Database connections and connection failures. You can use this information to track or troubleshoot your database activity with SQL Database.|
|[sp_set_firewall_rule](/sql/relational-databases/system-stored-procedures/sp-set-firewall-rule-azure-sql-database)|Creates or updates the server-level firewall settings for your server. This stored procedure is only available in the master database to the server-level principal login. A server-level firewall rule can only be created using Transact-SQL after the first server-level firewall rule has been created by a user with Azure-level permissions| |[sys.firewall_rules](/sql/relational-databases/system-catalog-views/sys-firewall-rules-azure-sql-database)|Returns information about the server-level firewall settings associated with your database in Azure SQL Database.| |[sp_delete_firewall_rule](/sql/relational-databases/system-stored-procedures/sp-delete-firewall-rule-azure-sql-database)|Removes server-level firewall settings from your server. This stored procedure is only available in the master database to the server-level principal login.|
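As a brief sketch, the connectivity events surfaced by `sys.event_log` can be reviewed from the `master` database; the columns shown are a subset:

```sql
-- Aggregated connectivity events for databases on this server (query from master).
SELECT start_time, end_time, database_name, event_category,
       event_type, event_subtype_desc, event_count
FROM sys.event_log
ORDER BY start_time DESC;
```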
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/troubleshoot-common-errors-issues.md
Learn more about related topics in the following articles:
- [Azure SQL Database and Azure Synapse Analytics network access controls](./network-access-controls-overview.md) - [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md) - [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
+- [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md)
azure-sql Understand Resolve Blocking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/understand-resolve-blocking.md
Previously updated : 3/02/2021 Last updated : 4/8/2022 # Understand and resolve Azure SQL Database blocking problems [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The article describes blocking in Azure SQL databases and demonstrates how to tr
In this article, the term connection refers to a single logged-on session of the database. Each connection appears as a session ID (SPID) or session_id in many DMVs. Each of these SPIDs is often referred to as a process, although it is not a separate process context in the usual sense. Rather, each SPID consists of the server resources and data structures necessary to service the requests of a single connection from a given client. A single client application may have one or more connections. From the perspective of Azure SQL Database, there is no difference between multiple connections from a single client application on a single client computer and multiple connections from multiple client applications or multiple client computers; they are atomic. One connection can block another connection, regardless of the source client.
+For information on troubleshooting deadlocks, see [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md).
+ > [!NOTE] > **This content is focused on Azure SQL Database.** Azure SQL Database is based on the latest stable version of the Microsoft SQL Server database engine, so much of the content is similar though troubleshooting options and tools may differ. For more on blocking in SQL Server, see [Understand and resolve SQL Server blocking problems](/troubleshoot/sql/performance/understand-resolve-blocking). ## Understand blocking
-Blocking is an unavoidable and by-design characteristic of any relational database management system (RDBMS) with lock-based concurrency. As mentioned previously, in SQL Server, blocking occurs when one session holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same resource. Typically, the time frame for which the first SPID locks the resource is small. When the owning session releases the lock, the second connection is then free to acquire its own lock on the resource and continue processing. This is normal behavior and may happen many times throughout the course of a day with no noticeable effect on system performance.
+Blocking is an unavoidable and by-design characteristic of any relational database management system (RDBMS) with lock-based concurrency. Blocking in a database in Azure SQL Database occurs when one session holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same resource. Typically, the time frame for which the first SPID locks the resource is small. When the owning session releases the lock, the second connection is then free to acquire its own lock on the resource and continue processing. This is normal behavior and may happen many times throughout the course of a day with no noticeable effect on system performance.
+
+Each new database in Azure SQL Database has the [read committed snapshot](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1) (RCSI) database setting enabled by default. Blocking between sessions reading data and sessions writing data is minimized under RCSI, which uses row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure SQL Database because:
+
+- Queries that modify data may block one another.
+- Queries may run under isolation levels that increase blocking. Isolation levels may be specified in application connection strings, [query hints](/sql/t-sql/queries/hints-transact-sql-query), or [SET statements](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql) in Transact-SQL.
+- [RCSI may be disabled](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#read_committed_snapshot--on--off--1), causing the database to use shared (S) locks to protect SELECT statements run under the read committed isolation level. This may increase blocking and deadlocks.
+
+[Snapshot isolation level](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true#b-enable-snapshot-isolation-on-a-database) is also enabled by default for new databases in Azure SQL Database. Snapshot isolation is an additional row versioning-based isolation level that provides transaction-level consistency for data and uses row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their transaction isolation level to `SNAPSHOT`. This may only be done when snapshot isolation is enabled for the database.
+
+You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in Azure SQL Database and run the following query:
+
+```sql
+SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
+FROM sys.databases
+WHERE name = DB_NAME();
+GO
+```
+
+If RCSI is enabled, the `is_read_committed_snapshot_on` column will return the value **1**. If snapshot isolation is enabled, the `snapshot_isolation_state_desc` column will return the value **ON**.
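If either setting is off and you decide to turn it on, `ALTER DATABASE` can be used. The following is a sketch only, run against the current database: setting `READ_COMMITTED_SNAPSHOT` typically requires that no other connections are active in the database while the statement runs, so plan for a low-activity window.

```sql
-- A sketch only: enable read committed snapshot (RCSI) and allow snapshot isolation
-- for the current database. Setting READ_COMMITTED_SNAPSHOT typically requires that
-- no other connections are active in the database while the statement runs.
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
GO

ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
```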
-The duration and transaction context of a query determine how long its locks are held and, thereby, their effect on other queries. If the query is not executed within a transaction (and no lock hints are used), the locks for SELECT statements will only be held on a resource at the time it is actually being read, not during the query. For INSERT, UPDATE, and DELETE statements, the locks are held during the query, both for data consistency and to allow the query to be rolled back if necessary.
+The duration and transaction context of a query determine how long its locks are held and, thereby, their effect on other queries. SELECT statements run under RCSI [do not acquire shared (S) locks on the data being read](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-when-reading-data), and therefore do not block transactions that are modifying data. For INSERT, UPDATE, and DELETE statements, the locks are held during the query, both for data consistency and to allow the query to be rolled back if necessary.
-For queries executed within a transaction, the duration for which the locks are held are determined by the type of query, the transaction isolation level, and whether lock hints are used in the query. For a description of locking, lock hints, and transaction isolation levels, see the following articles:
+For queries executed within an [explicit transaction](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#starting-transactions), the type of locks and duration for which the locks are held are determined by the type of query, the transaction isolation level, and whether lock hints are used in the query. For a description of locking, lock hints, and transaction isolation levels, see the following articles:
* [Locking in the Database Engine](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide) * [Customizing Locking and Row Versioning](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#customizing-locking-and-row-versioning)
Now let's dive in to discuss how to pinpoint the main blocking session with an a
## Gather blocking information
-To counteract the difficulty of troubleshooting blocking problems, a database administrator can use SQL scripts that constantly monitor the state of locking and blocking in the Azure SQL database. To gather this data, there are essentially two methods.
+To counteract the difficulty of troubleshooting blocking problems, a database administrator can use SQL scripts that constantly monitor the state of locking and blocking in the database in Azure SQL Database. To gather this data, there are essentially two methods.
The first is to query dynamic management objects (DMOs) and store the results for comparison over time. Some objects referenced in this article are dynamic management views (DMVs) and some are dynamic management functions (DMFs). The second method is to use XEvents to capture what is executing. - ## Gather information from DMVs Referencing DMVs to troubleshoot blocking has the goal of identifying the SPID (session ID) at the head of the blocking chain and the SQL statement. Look for victim SPIDs that are being blocked. If any SPID is being blocked by another SPID, then investigate the SPID owning the resource (the blocking SPID). Is that owner SPID being blocked as well? You can walk the chain to find the head blocker, then investigate why it is maintaining its lock.
-Remember to run each of these scripts in the target Azure SQL database.
+Remember to run each of these scripts in the target database in Azure SQL Database.
* The sp_who and sp_who2 commands are older commands to show all current sessions. The DMV `sys.dm_exec_sessions` returns more data in a result set that is easier to query and filter. You will find `sys.dm_exec_sessions` at the core of other queries.
CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est];
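For a quick view of only the sessions that are currently blocked, a condensed sketch along the following lines (not the article's full script) joins `sys.dm_exec_requests` to `sys.dm_exec_sessions` and pulls the statement text of each blocked request:

```sql
-- A sketch only: list requests that are currently blocked, the session blocking them,
-- and the statement each blocked request is running.
SELECT [r].[session_id],
       [r].[blocking_session_id],
       [r].[wait_type],
       [r].[wait_time],
       [s].[login_name],
       [s].[host_name],
       [s].[program_name],
       [t].[text] AS [blocked_statement]
FROM sys.dm_exec_requests AS [r]
INNER JOIN sys.dm_exec_sessions AS [s]
    ON [r].[session_id] = [s].[session_id]
CROSS APPLY sys.dm_exec_sql_text ([r].[sql_handle]) AS [t]
WHERE [r].[blocking_session_id] <> 0;
```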
> [!Note] > For much more on wait types including aggregated wait stats over time, see the DMV [sys.dm_db_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-wait-stats-azure-sql-database). This DMV returns aggregate wait stats for the current database only.
-* Use the [sys.dm_tran_locks](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql) DMV for more granular information on what locks have been placed by queries. This DMV can return large amounts of data on a production SQL Server, and is useful for diagnosing what locks are currently held.
+* Use the [sys.dm_tran_locks](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql) DMV for more granular information on what locks have been placed by queries. This DMV can return large amounts of data on a production database, and is useful for diagnosing what locks are currently held.
Due to the INNER JOIN on `sys.dm_os_waiting_tasks`, the following query restricts the output from `sys.dm_tran_locks` only to currently blocked requests, their wait status, and their locks:
AND object_name(p.object_id) = '<table_name>';
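A condensed sketch of that idea, which omits the object-name resolution shown in the full query, might look like the following:

```sql
-- A sketch only: show the locks held or requested by tasks that are currently waiting,
-- together with their wait information. The article's full query also resolves object names.
SELECT [tl].[request_session_id],
       [wt].[blocking_session_id],
       [tl].[resource_type],
       [tl].[resource_database_id],
       [tl].[resource_associated_entity_id],
       [tl].[request_mode],
       [tl].[request_status],
       [wt].[wait_duration_ms],
       [wt].[wait_type]
FROM sys.dm_tran_locks AS [tl]
INNER JOIN sys.dm_os_waiting_tasks AS [wt]
    ON [tl].[lock_owner_address] = [wt].[resource_address];
```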
* With DMVs, storing the query results over time will provide data points that will allow you to review blocking over a specified time interval to identify persisted blocking or trends.
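One lightweight way to store those data points is to insert the DMV output into a permanent table on a schedule. The following sketch uses a hypothetical `dbo.BlockingSnapshots` table; adjust the columns and collection frequency to your needs.

```sql
-- A sketch only: persist a point-in-time snapshot of blocked requests so that the
-- results can be compared over time. Table name and columns are illustrative.
IF OBJECT_ID(N'dbo.BlockingSnapshots', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.BlockingSnapshots
    (
        collection_time     datetime2(3) NOT NULL,
        session_id          smallint     NOT NULL,
        blocking_session_id smallint     NOT NULL,
        wait_type           nvarchar(60) NULL,
        wait_time_ms        int          NULL
    );
END;

INSERT INTO dbo.BlockingSnapshots (collection_time, session_id, blocking_session_id, wait_type, wait_time_ms)
SELECT SYSDATETIME(), [r].[session_id], [r].[blocking_session_id], [r].[wait_type], [r].[wait_time]
FROM sys.dm_exec_requests AS [r]
WHERE [r].[blocking_session_id] <> 0;
```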
-## Gather information from Extended events
+## Gather information from Extended Events
In addition to the previous information, it is often necessary to capture a trace of the activities on the server to thoroughly investigate a blocking problem on Azure SQL Database. For example, if a session executes multiple statements within a transaction, only the last statement that was submitted will be represented. However, one of the earlier statements may be the reason locks are still being held. A trace will enable you to see all the commands executed by a session within the current transaction.
Refer to the document that explains how to use the [Extended Events New Session
- Sql_batch_completed - Sql_batch_starting -- Lock
- - Lock_deadlock
+- Category deadlock_monitor
+ - database_xml_deadlock_report
-- Session
+- Category session
- Existing_connection - Login - Logout
+> [!NOTE]
+> For detailed information on deadlocks, see [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md).
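For illustration, a database-scoped Extended Events session that captures a subset of the events listed above might look like the following sketch. Verify the event names against `sys.dm_xe_objects` in your database and choose a target appropriate to your investigation.

```sql
-- A sketch only: a database-scoped Extended Events session that captures a subset of
-- the events listed above and writes them to the ring_buffer target.
CREATE EVENT SESSION [blocking_investigation] ON DATABASE
    ADD EVENT sqlserver.sql_batch_starting,
    ADD EVENT sqlserver.sql_batch_completed,
    ADD EVENT sqlserver.database_xml_deadlock_report,
    ADD EVENT sqlserver.existing_connection,
    ADD EVENT sqlserver.login,
    ADD EVENT sqlserver.logout
    ADD TARGET package0.ring_buffer;
GO

-- Start collecting; stop and drop the session when the investigation is complete.
ALTER EVENT SESSION [blocking_investigation] ON DATABASE STATE = START;
GO
```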
+ ## Identify and resolve common blocking scenarios By examining the previous information, you can determine the cause of most blocking problems. The rest of this article is a discussion of how to use this information to identify and resolve some common blocking scenarios. This discussion assumes you have used the blocking scripts (referenced earlier) to capture information on the blocking SPIDs and have captured application activity using an XEvent session.
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
Reports from the [Query Store](/sql/relational-databases/performance/best-practice-with-the-query-store) in SSMS are also a highly recommended and valuable tool for identifying the most costly queries and suboptimal execution plans. Also review the [Intelligent Performance](intelligent-insights-overview.md) section of the Azure portal for the Azure SQL database, including [Query Performance Insight](query-performance-insight-use.md).
+ If the query performs only SELECT operations, consider [running the statement under snapshot isolation](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql) if it is enabled in your database, especially if RCSI has been disabled. As with RCSI, queries reading data under snapshot isolation do not acquire shared (S) locks. Additionally, snapshot isolation provides transaction-level consistency for all statements in an explicit multi-statement transaction. Snapshot isolation may [already be enabled in your database](#understand-blocking). Snapshot isolation may also be used with queries performing modifications, but you must handle [update conflicts](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#behavior-in-summary).
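For illustration, the following sketch opts a read-only statement in to snapshot isolation. The table and column names are hypothetical, and queries in the transaction fail if snapshot isolation is not enabled for the database.

```sql
-- A sketch only: read data under snapshot isolation. Table and column names are
-- hypothetical; queries fail if snapshot isolation is not enabled for the database.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

SELECT [OrderID], [OrderStatus]
FROM [dbo].[Orders]
WHERE [CustomerID] = 42;

COMMIT TRANSACTION;
```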
+ If you have a long-running query that is blocking other users and cannot be optimized, consider moving it from an OLTP environment to a dedicated reporting system, such as a [synchronous read-only replica of the database](read-scale-out.md). 1. Blocking caused by a sleeping SPID that has an uncommitted transaction
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
1. Blocking caused by a SPID whose corresponding client application did not fetch all result rows to completion After sending a query to the server, all applications must immediately fetch all result rows to completion. If an application does not fetch all result rows, locks can be left on the tables, blocking other users. If you are using an application that transparently submits SQL statements to the server, the application must fetch all result rows. If it does not (and if it cannot be configured to do so), you may be unable to resolve the blocking problem. To avoid the problem, you can restrict poorly behaved applications to a reporting or a decision-support database, separate from the main OLTP database.+
+ The impact of this scenario is reduced when read committed snapshot is enabled on the database, which is the default configuration in Azure SQL Database. Learn more in the [Understand blocking](#understand-blocking) section of this article.
> [!NOTE] > See [guidance for retry logic](./troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors) for applications connecting to Azure SQL Database.
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
## See also
+* [Analyze and prevent deadlocks in Azure SQL Database](analyze-prevent-deadlocks.md)
* [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](./monitor-tune-overview.md) * [Monitoring performance by using the Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) * [Transaction Locking and Row Versioning Guide](/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide)
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/glossary-terms.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # Azure SQL glossary of terms [!INCLUDE[appliesto-asf](./includes/appliesto-asf.md)]
Last updated 02/02/2022
||vCore-based service tiers (recommended) |[General purpose, business critical, and hyperscale service tiers](database/service-tiers-sql-database-vcore.md#service-tiers) are available in the vCore-based purchasing model (recommended).| |Compute tier|| The compute tier determines whether resources are continuously available (provisioned) or autoscaled (serverless). Compute tier availability varies by purchasing model and service tier. Only the vCore purchasing model's general purpose service tier makes serverless compute available.| ||Provisioned compute|The [provisioned compute tier](database/service-tiers-sql-database-vcore.md#compute-tiers) provides a specific amount of compute resources that are continuously provisioned independent of workload activity. Under the provisioned compute tier, you are billed at a fixed price per hour.
-||Serverless compute| The [serverless compute tier](database/serverless-tier-overview.md) autoscales compute resources based on workload activity and bills for the amount of compute used per second. Azure SQL Database serverless is currently available in the vCore purchasing model's general purpose service tier with Generation 5 hardware or newer.|
-|Hardware generation| Available hardware configurations | The vCore-based purchasing model allows you to select the appropriate hardware generation for your workload. [Hardware configuration options](database/service-tiers-sql-database-vcore.md#hardware-generations) include Gen5, M-series, Fsv2-series, and DC-series.|
+||Serverless compute| The [serverless compute tier](database/serverless-tier-overview.md) autoscales compute resources based on workload activity and bills for the amount of compute used per second. Azure SQL Database serverless is currently available in the vCore purchasing model's general purpose service tier with Gen5 hardware or newer.|
+|Hardware configuration| Available hardware configurations | The vCore-based purchasing model allows you to select the appropriate hardware configuration for your workload. [Hardware configuration options](database/service-tiers-sql-database-vcore.md#hardware-configuration) include Gen5, M-series, Fsv2-series, and DC-series.|
|Compute size (service objective) ||Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc.
-||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware generation for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).|
+||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).|
||DTU-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
Last updated 02/02/2022
|Purchasing model|vCore-based purchasing model| SQL Managed Instance is available under the [vCore-based purchasing model](managed-instance/service-tiers-managed-instance-vcore.md). [Azure Hybrid Benefit](azure-hybrid-benefit.md) is available for managed instances. | |Service tier| vCore-based service tiers| SQL Managed Instance offers two service tiers. Both service tiers guarantee 99.99% availability and enable you to independently select storage size and compute capacity. Select either the [general purpose or business critical service tier](managed-instance/sql-managed-instance-paas-overview.md#service-tiers) for a managed instance based upon your performance and latency requirements.| |Compute|Provisioned compute| SQL Managed Instance provides a specific amount of [compute resources](managed-instance/service-tiers-managed-instance-vcore.md#compute) that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour. |
-|Hardware generation|Available hardware configurations| SQL Managed Instance [hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. |
-|Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware generation for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
+|Hardware configuration|Available hardware configurations| SQL Managed Instance [hardware configurations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-configurations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware. |
+|Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
## SQL Server on Azure VMs
azure-sql Auto Failover Group Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-sql-mi.md
There is some overlap of content in the following articles, be sure to make chan
- **Secondary**
- The managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primaryF.
+ The managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primary.
- **DNS zone**
There is some overlap of content in the following articles, be sure to make chan
Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role has all the necessary permissions to manage failover groups.
For specific permission scopes, review how to [configure auto-failover groups in Azure SQL Managed Instance](auto-failover-group-configure-sql-mi.md#permissions).
Be aware of the following limitations:
- Failover groups cannot be created between two instances in the same Azure region. - Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name. - Database rename is not supported for databases in failover group. You will need to temporarily delete failover group to be able to rename a database, or remove the database from the failover group.-- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases require objects to be manually created on the secondary instances and also manually kept in sync after any changes made on primary instance. The only exception is Service master Key (SMK) for SQL Managed Instance, that is replicated automatically to secondary instance during creation of failover group. Any subsequent changes of SMK on the primary instance however will not be replicated to secondary instance.
+- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases, such as server logins and Agent jobs, require those objects to be manually created on the secondary instances and also manually kept in sync after any changes made on the primary instance. The only exception is the Service Master Key (SMK) for SQL Managed Instance, which is replicated automatically to the secondary instance during creation of the failover group. However, any subsequent changes to the SMK on the primary instance will not be replicated to the secondary instance. To learn more, see how to [Enable scenarios dependent on objects from the system databases](#enable-scenarios-dependent-on-objects-from-the-system-databases).
- Failover groups cannot be created between instances if any of them are in an instance pool. ## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/28/2022 Last updated : 04/06/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are cu
| Feature | Details | | | |
-| [16 TB support in Business Critical](resource-limits.md#service-tier-characteristics) | Support for allocation up to 16 TB of space on SQL Managed Instance in the Business Critical service tier using the new memory optimized premium-series hardware generation. |
+| [16 TB support in Business Critical](resource-limits.md#service-tier-characteristics) | Support for allocation up to 16 TB of space on SQL Managed Instance in the Business Critical service tier using the new memory optimized premium-series hardware. |
| [Data virtualization](data-virtualization-overview.md) | Join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage. | |[Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.| | [Instance pools](instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL Server instances to the cloud. | | [Managed Instance link](managed-instance-link-feature-overview.md)| Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance. | | [Maintenance window advance notifications](../database/advance-notifications.md)| Advance notifications (preview) for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). Advance notifications are in preview for Azure SQL Managed Instance. |
-| [Memory optimized premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. The memory optimized hardware generation offers higher memory to vCore ratios. |
+| [Memory optimized premium-series hardware](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. Memory optimized hardware offers a higher memory to vCore ratio. |
| [Migrate with Log Replay Service](log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
-| [Premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. |
+| [Premium-series hardware](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. |
| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. | | [Service Broker cross-instance message exchange](/sql/database-engine/configure-windows/sql-server-service-broker) | Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance. | | [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | |
-| **16 TB support for Business Critical preview** | The Business Critical service tier of SQL Managed Instance now provides increased maximum instance storage capacity of up to 16 TB with the new premium-series and memory optimized premium-series hardware generations, which are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
+| **16 TB support for Business Critical preview** | The Business Critical service tier of SQL Managed Instance now provides increased maximum instance storage capacity of up to 16 TB with the new premium-series and memory optimized premium-series hardware, which are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
|**16 TB support for General Purpose GA** | Deploying a 16 TB instance to the General Purpose service tier is now generally available. See [resource limits](resource-limits.md) to learn more. | | **Azure AD-only authentication GA** | Restricting authentication to your Azure SQL Managed Instance only to Azure Active Directory users is now generally available. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). | | **Distributed transactions GA** | The ability to execute distributed transactions across managed instances is now generally available. See [Distributed transactions](../database/elastic-transactions-overview.md) to learn more. |
Learn about significant changes to the Azure SQL Managed Instance documentation.
|**Link feature preview** | Use the link feature for SQL Managed Instance to replicate data from your SQL Server hosted anywhere to Azure SQL Managed Instance, leveraging the benefits of Azure without moving your data to Azure, to offload your workloads, for disaster recovery, or to migrate to the cloud. See the [Link feature for SQL Managed Instance](managed-instance-link-feature-overview.md) to learn more. The link feature is currently in limited public preview. | |**Long-term backup retention GA** | Storing full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage is now generally available. To learn more, see [Long-term backup retention](long-term-backup-retention-configure.md). | | **Move instance to different subnet GA** | It's now possible to move your SQL Managed Instance to a different subnet. See [Move instance to different subnet](vnet-subnet-move-instance.md) to learn more. |
-|**New hardware generation preview** | There are now two new hardware generations for SQL Managed Instance: premium-series, and a memory optimized premium-series. Both offerings take advantage of a new generation of hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource demanding database applications. As part of this announcement, the Gen5 hardware generation has been renamed to standard-series. The two new premium hardware generations are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
+|**New hardware preview** | There are now two new hardware configurations for SQL Managed Instance: premium-series and memory optimized premium-series. Both offerings take advantage of new hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource-demanding database applications. As part of this announcement, the Gen5 hardware has been renamed to standard-series. The two new premium hardware offerings are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
|**Split what's new** | The previously-combined **What's new** article has been split by product - [What's new in SQL Database](../database/doc-changes-updates-release-notes-whats-new.md) and [What's new in SQL Managed Instance](doc-changes-updates-release-notes-whats-new.md), making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the [Known Issues in SQL Managed Instance](doc-changes-updates-known-issues.md) content has moved to its own page. | |**16 TB support for General Purpose preview** | Support has been added for allocation of up to 16 TB of space for SQL Managed Instance in the General Purpose service tier. See [resource limits](resource-limits.md) to learn more. This instance offer is currently in preview. | | **Parallel backup** | It's now possible to take backups in parallel for SQL Managed Instance in the general purpose tier, enabling faster backups. See the [Parallel backup for better performance](https://techcommunity.microsoft.com/t5/azure-sql/parallel-backup-for-better-performance-in-sql-managed-instance/ba-p/2421762) blog entry to learn more. |
azure-sql Instance Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/instance-create-quickstart.md
Previously updated : 1/29/2021 Last updated : 04/06/2022 # Quickstart: Create an Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
If you don't have an Azure subscription, [create a free account](https://azure.m
| Setting| Suggested value | DescriptionΓÇ»| | | | -- | | **Service Tier** | Select one of the options. | Based on your scenario, select one of the following options: </br> <ul><li>**General Purpose**: for most production workloads, and the default option.</li><li>**Business Critical**: designed for low-latency workloads with high resiliency to failures and fast failovers.</li></ul><BR>For more information, review [service tiers](service-tiers-managed-instance-vcore.md) and [resource limits](../../azure-sql/managed-instance/resource-limits.md).|
-| **Hardware Generation** | Select one of the options. | The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload. **Gen5** is the default.|
+| **Hardware Configuration** | Select one of the options. | Hardware configuration generally defines the compute and memory limits and other characteristics that impact the performance of the workload. **Gen5** is the default.|
| **vCore compute model** | Select an option. | vCores represent exact amount of compute resources that are always provisioned for your workload. **Eight vCores** is the default.| | **Storage in GB** | Select an option. | Storage size in GB, select based on expected data size. If migrating existing data from on-premises or on various cloud platforms, see [Migration overview: SQL Server to SQL Managed Instance](../../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md).| | **Azure Hybrid Benefit** | Check option if applicable. | For leveraging an existing license for Azure. For more information, see [Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance](../../azure-sql/azure-hybrid-benefit.md). |
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/management-operations-overview.md
The following tables summarize operations and typical overall durations, based o
|Operation |Long-running segment |Estimated duration | |||| |First instance in an empty subnet|Virtual cluster creation|90% of operations finish in 4 hours.|
-|First instance of another hardware generation or maintenance window in a non-empty subnet (for example, first Premium series instance in a subnet with Standard series instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.|
+|First instance with a different hardware configuration or maintenance window in a non-empty subnet (for example, first Premium-series instance in a subnet with Standard-series instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.|
|Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance)|Virtual cluster resizing|90% of operations finish in 2.5 hours.|
-<sup>1</sup> Virtual cluster is built per hardware generation and maintenance window configuration.
+<sup>1</sup> A separate virtual cluster is created for each hardware configuration and for each maintenance window configuration.
**Category: Update**
The following tables summarize operations and typical overall durations, based o
|Instance compute (vCores) scaling up and down (General Purpose)|- Virtual cluster resizing|90% of operations finish in 2.5 hours.| |Instance compute (vCores) scaling up and down (Business Critical)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).|
-|Instance hardware generation or maintenance window change (General Purpose)|- Virtual cluster creation or resizing<sup>1</sup>|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) .|
-|Instance hardware generation or maintenance window change (Business Critical)|- Virtual cluster creation or resizing<sup>1</sup><br>- Always On availability group seeding|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).|
+|Instance hardware or maintenance window change (General Purpose)|- Virtual cluster creation or resizing<sup>1</sup>|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing).|
+|Instance hardware or maintenance window change (Business Critical)|- Virtual cluster creation or resizing<sup>1</sup><br>- Always On availability group seeding|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).|
-<sup>1</sup> Managed instance must be placed in a virtual cluster with the corresponding hardware generation and maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
+<sup>1</sup> Managed instance must be placed in a virtual cluster with the corresponding hardware and maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
**Category: Delete**
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # Overview of Azure SQL Managed Instance resource limits [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article provides an overview of the technical characteristics and resource
> [!NOTE] > For differences in supported features and T-SQL statements see [Feature differences](../database/features-comparison.md) and [T-SQL statement support](transact-sql-tsql-differences-sql-server.md). For general differences between service tiers for Azure SQL Database and SQL Managed Instance review [General Purpose](../database/service-tier-general-purpose.md) and [Business Critical](../database/service-tier-business-critical.md) service tiers.
-## Hardware generation characteristics
+## Hardware configuration characteristics
-SQL Managed Instance has characteristics and resource limits that depend on the underlying infrastructure and architecture. SQL Managed Instance can be deployed on multiple hardware generations.
+SQL Managed Instance has characteristics and resource limits that depend on the underlying infrastructure and architecture. SQL Managed Instance can be deployed on multiple hardware configurations.
> [!NOTE]
-> The Gen5 hardware generation has been renamed to the **standard-series (Gen5)**, and we are introducing two new hardware generations in limited preview: **premium-series** and **memory optimized premium-series**.
+> The Gen5 hardware has been renamed to the **standard-series (Gen5)**. We are introducing two new hardware configurations in limited preview: **premium-series** and **memory optimized premium-series**.
-For information on previous generation hardware generations, see [Previous generation hardware generation details](#previous-generation-hardware) later in this article.
+For information on previously available hardware, see [Previously available hardware](#previously-available-hardware) later in this article.
-Hardware generations have different characteristics, as described in the following table:
+Hardware configurations have different characteristics, as described in the following table:
| | **Standard-series (Gen5)** | **Premium-series (preview)** | **Memory optimized premium-series (preview)** | |:-- |:-- |:-- |:-- |
Hardware generations have different characteristics, as described in the followi
\* Dependent on [the number of vCores](#service-tier-characteristics). >[!NOTE]
-> If your business requires storage sizes greater than the available resource limits for Azure SQL Managed Instance, consider the Azure SQL Database [Hyperscale service tier](../database/service-tier-hyperscale.md).
+> If your workload requires storage sizes greater than the available resource limits for Azure SQL Managed Instance, consider the Azure SQL Database [Hyperscale service tier](../database/service-tier-hyperscale.md).
+### Regional support for premium-series hardware (preview)
-### Regional support for premium-series hardware generations (preview)
-
-Support for the premium-series hardware generations (public preview) is currently available only in these specific regions: <br>
+Support for the premium-series hardware (public preview) is currently available only in these specific regions: <br>
| Region | **Premium-series** | **Memory optimized premium-series** | |: |: |: |
Support for the premium-series hardware generations (public preview) is currentl
| West US 2 | Yes | Yes | | West US 3 | Yes | Yes |
-### In-memory OLTP available space
+### In-memory OLTP available space
-The amount of in-memory OLTP space in [Business Critical](../database/service-tier-business-critical.md) service tier depends on the number of vCores and hardware generation. The following table lists the limits of memory that can be used for in-memory OLTP objects.
+The amount of In-memory OLTP space in the [Business Critical](../database/service-tier-business-critical.md) service tier depends on the number of vCores and hardware configuration. The following table lists the limits of memory that can be used for In-memory OLTP objects.
| **vCores** | **Standard-series (Gen5)** | **Premium-series** | **Memory optimized premium-series** | |: |: |: |: |
The following table shows the **default regional limits** for supported subscrip
If you need more instances in your current regions, send a support request to extend the quota using the Azure portal. For more information, see [Request quota increases for Azure SQL Database](../database/quota-increase-request.md).
-## Previous generation hardware
+## Previously available hardware
-This section includes details on previous generation hardware generations. Consider [moving your instance of SQL Managed Instance to the standard-series (Gen5)](../database/service-tiers-vcore.md) hardware to experience a wider range of vCore and storage scalability, accelerated networking, best IO performance, and minimal latency.
+This section includes details on previously available hardware. Consider [moving your instance of SQL Managed Instance to the standard-series (Gen5)](../database/service-tiers-vcore.md) hardware to experience a wider range of vCore and storage scalability, accelerated networking, best IO performance, and minimal latency.
-- Gen4 is being phased out and is not available for new deployments.
+> [!IMPORTANT]
+> Gen4 hardware is being retired and is not available for new deployments.
-### Hardware generation characteristics
+### Hardware characteristics
| | **Gen4** | | | |
This section includes details on previous generation hardware generations. Consi
| **Max In-Memory OLTP memory** | Instance limit: 1-1.5 GB per vCore | | **Max instance reserved storage** | General Purpose: 8 TB <br/>Business Critical: 1 TB |
-### In-memory OLTP available space
+### In-memory OLTP available space
-The amount of In-memory OLTP space in [Business Critical](../database/service-tier-business-critical.md) service tier depends on the number of vCores and hardware generation. The following table lists limits of memory that can be used for In-memory OLTP objects.
+The amount of In-memory OLTP space in the [Business Critical](../database/service-tier-business-critical.md) service tier depends on the number of vCores and hardware configuration. The following table lists the limits of memory that can be used for In-memory OLTP objects.
| In-memory OLTP space | **Gen4** | | | |
azure-sql Service Tiers Managed Instance Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/service-tiers-managed-instance-vcore.md
Previously updated : 02/02/2022 Last updated : 04/06/2022 # vCore purchasing model - Azure SQL Managed Instance
This article reviews the [vCore purchasing model](../database/service-tiers-vcor
The virtual core (vCore) purchasing model used by Azure SQL Managed Instance provides the following benefits: -- Control over the hardware generation to better match the compute and memory requirements of the workload.
+- Control over hardware configuration to better match the compute and memory requirements of the workload.
- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md). - Greater transparency in the hardware details that power compute, helping facilitate planning for migrations from on-premises deployments. - Higher scaling granularity with multiple compute sizes available. - ## <a id="compute-tiers"></a>Service tiers Service tier options in the vCore purchasing model include General Purpose and Business Critical. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
For information on selecting a service tier for your particular workload, see th
SQL Managed Instance compute provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
-## Hardware generations
+## Hardware configurations
-Hardware generation options in the vCore model include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
+Hardware configuration options in the vCore model include standard-series (Gen5), premium-series, and memory optimized premium-series. Hardware configuration generally defines the compute and memory limits and other characteristics that impact workload performance.
-For more information on the hardware generation specifics and limitations, see [Hardware generation characteristics](resource-limits.md#hardware-generation-characteristics).
+For more information on the hardware configuration specifics and limitations, see [Hardware configuration characteristics](resource-limits.md#hardware-configuration-characteristics).
In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for instances using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, while hardware generation for instances using Intel&reg; 8272CL (Cascade Lake) appears as Gen7. The Intel&reg; 8370C (Ice Lake) CPUs used by premium-series and memory optimized premium-series hardware generations appear as Gen8. Resource limits for all standard-series (Gen5) instances are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
-### Selecting a hardware generation
+### Selecting a hardware configuration
-In the Azure portal, you can select the hardware generation at the time of creation, or you can change the hardware generation of an existing SQL Managed Instance.
+You can select the hardware configuration at the time of instance creation, or you can change the hardware of an existing instance.
-**To select a hardware generation when creating a SQL Managed Instance**
+**To select hardware configuration when creating a SQL Managed Instance**
For detailed information, see [Create a SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select desired hardware generation:
+On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select the desired hardware:
:::image type="content" source="../database/media/service-tiers-vcore/configure-managed-instance.png" alt-text="configure SQL Managed Instance" loc-scope="azure-portal":::
-**To change the hardware generation of an existing SQL Managed Instance**
+**To change hardware of an existing SQL Managed Instance**
#### [The Azure portal](#tab/azure-portal)
From the SQL Managed Instance page, select **Pricing tier** link placed under th
:::image type="content" source="../database/media/service-tiers-vcore/change-managed-instance-hardware.png" alt-text="change SQL Managed Instance hardware" loc-scope="azure-portal":::
-On the Pricing tier page, you will be able to change hardware generation as described in the previous steps.
+On the Pricing tier page, you will be able to change hardware as described in the previous steps.
#### [PowerShell](#tab/azure-powershell)
For more details, check [az sql mi update](/cli/azure/sql/mi#az-sql-mi-update) c
#### <a id="gen4gen5-1"></a> Gen4
-Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new instances must be deployed on later hardware generations.
+Gen4 hardware is [being retired](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new instances must be deployed on other hardware configurations.
#### Standard-series (Gen5) and premium-series Standard-series (Gen5) hardware is available in all public regions worldwide.
-Premium-series and memory optimized premium-series hardware is in preview, and has limited regional availability. For more details, see [Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#hardware-generation-characteristics).
+Premium-series and memory optimized premium-series hardware is in preview, and has limited regional availability. For more details, see [Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#hardware-configuration-characteristics).
## Next steps
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Previously updated : 01/14/2021 Last updated : 04/06/2022 # What is Azure SQL Managed Instance?
The key features of SQL Managed Instance are shown in the following table:
The [vCore-based purchasing model](../database/service-tiers-vcore.md) for SQL Managed Instance gives you flexibility, control, transparency, and a straightforward way to translate on-premises workload requirements to the cloud. This model allows you to change compute, memory, and storage based upon your workload needs. The vCore model is also eligible for up to 55 percent savings with the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server.
-In the vCore model, you can choose between generations of hardware.
+In the vCore model, you can choose from the following hardware configurations:
- **Standard Series (Gen5)** logical CPUs are based on Intel&reg; E5-2673 v4 (Broadwell) 2.3 GHz, Intel&reg; SP-8160 (Skylake), and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz processors, with **5.1 GB of RAM per CPU vCore**, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 80 cores. - **Premium Series** logical CPUs are based on Intel&reg; 8370C (Ice Lake) 2.8 GHz processors, with **7 GB of RAM per CPU vCore**, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 80 cores. - **Premium Series Memory-Optimized** logical CPUs are based on Intel&reg; 8370C (Ice Lake) 2.8 GHz processors, with **13.6 GB of RAM per CPU vCore**, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 64 cores.
-Find more information about the difference between hardware generations in [SQL Managed Instance resource limits](resource-limits.md#hardware-generation-characteristics).
+Find more information about the difference between hardware configurations in [SQL Managed Instance resource limits](resource-limits.md#hardware-configuration-characteristics).
## Service tiers
azure-sql Vnet Subnet Determine Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/vnet-subnet-determine-size.md
Previously updated : 01/21/2022 Last updated : 04/06/2022 # Determine required subnet size and range for Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
By design, a managed instance needs a minimum of 32 IP addresses in a subnet. As
- Number of managed instances, including the following instance parameters: - Service tier - Number of vCores
- - [Hardware generation](resource-limits.md#hardware-generation-characteristics)
+ - [Hardware configuration](resource-limits.md#hardware-configuration-characteristics)
- [Maintenance window](../database/maintenance-window.md)-- Plans to scale up/down or change the service tier, hardware generation, or maintenance window
+- Plans to scale up/down or change the service tier, hardware configuration, or maintenance window
> [!IMPORTANT] > A subnet size of 16 IP addresses (subnet mask /28) allows the deployment of a single managed instance inside it. It should be used only for evaluation or for dev/test scenarios where scaling operations won't be performed.
Size your subnet according to your future needs for instance deployment and scal
- Azure uses five IP addresses in the subnet for its own needs. - Each virtual cluster allocates an additional number of addresses. -- Each managed instance uses a number of addresses that depend on pricing tier and hardware generation.
+- Each managed instance uses a number of addresses that depend on pricing tier and hardware configuration.
- Each scaling request temporarily allocates an additional number of addresses. > [!IMPORTANT]
In the preceding table:
Also consider the [maintenance window feature](../database/maintenance-window.md) when you're determining the subnet size, especially when multiple instances will be deployed inside the same subnet. Specifying a maintenance window for a managed instance during its creation or afterward means that it must be placed in a virtual cluster with the corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
-The same scenario as for the maintenance window applies for changing the [hardware generation](resource-limits.md#hardware-generation-characteristics) as virtual cluster is built per hardware generation. In case of new instance creation or changing the hardware generation of the existing instance, if there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
+The same scenario as for the maintenance window applies when changing the [hardware configuration](resource-limits.md#hardware-configuration-characteristics), because each virtual cluster uses a single hardware configuration. When you create a new instance or change the hardware of an existing instance, if there is no matching virtual cluster in the subnet, a new one must be created first to accommodate the instance.
An update operation typically requires [resizing the virtual cluster](management-operations-overview.md). When a new create or update request comes, the SQL Managed Instance service communicates with the compute platform with a request for new nodes that need to be added. Based on the compute response, the deployment system either expands the existing virtual cluster or creates a new one. Even if in most cases the operation will be completed within same virtual cluster, a new one might be created on the compute side.
During a scaling operation, instances temporarily require additional IP capacity
## Calculate the number of IP addresses
-We recommend the following formula for calculating the total number of IP addresses. This formula takes into account the potential creation of a new virtual cluster during a later create request or instance update. It also takes into account the maintenance window and hardware generation requirements of virtual clusters.
+We recommend the following formula for calculating the total number of IP addresses. This formula takes into account the potential creation of a new virtual cluster during a later create request or instance update. It also takes into account the maintenance window and hardware requirements of virtual clusters.
**Formula: 5 + (a * 12) + (b * 16) + (c * 16)** - a = number of GP instances - b = number of BC instances-- c = number of different maintenance window configurations and hardware generations
+- c = number of different maintenance window configurations and hardware configurations
Explanation: - 5 = number of IP addresses reserved by Azure
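
For example, a subnet planned for three General Purpose instances, two Business Critical instances, and two distinct maintenance window and hardware combinations would need 5 + (3 * 12) + (2 * 16) + (2 * 16) = 105 IP addresses, which fits in a /25 subnet (128 addresses). A minimal PowerShell sketch of the same calculation, with the instance counts as hypothetical inputs:

```powershell
# Hypothetical sizing inputs; replace them with your own planned deployment.
$gpInstances    = 3   # a = number of General Purpose instances
$bcInstances    = 2   # b = number of Business Critical instances
$configurations = 2   # c = distinct maintenance window and hardware combinations

# Formula from this article: 5 + (a * 12) + (b * 16) + (c * 16)
$requiredAddresses = 5 + ($gpInstances * 12) + ($bcInstances * 16) + ($configurations * 16)
Write-Output "Required IP addresses: $requiredAddresses"   # 105 for this example
```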
azure-sql Vnet Subnet Move Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/vnet-subnet-move-instance.md
Depending on the subnet state and designation, the following adjustments may be
> <sup>1</sup> Custom rules added to the source subnet configuration are not copied to the destination subnet. Any customization of the source subnet configuration must be replicated manually to the destination subnet. One way to achieve this is by using the same route table and network security group for the source and destination subnet.
-### Destination subnet limitations
+### Destination subnet limitations
-Consider the following limitations when choosing a destination subnet for an existing instance:
+Consider the following limitations when choosing a destination subnet for an existing instance:
-- The destination subnet must be in the same virtual network as the source subnet. -- The DNS zone of the destination subnet must match the DNS zone of the source subnet as changing the DNS zone of a managed instance is not currently supported. -- Instances running on Gen4 hardware must be upgraded to a newer hardware generation since Gen4 is being deprecated. Upgrading hardware generation and moving to another subnet can be performed in one operation.
+- The destination subnet must be in the same virtual network as the source subnet.
+- The DNS zone of the destination subnet must match the DNS zone of the source subnet as changing the DNS zone of a managed instance is not currently supported.
+- Instances running on Gen4 hardware must be upgraded to newer hardware since Gen4 is being retired. Upgrading hardware and moving to another subnet can be performed in one operation.
## Operation steps
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
Previously updated : 03/22/2022 Last updated : 04/06/2022 # Migration guide: SQL Server to Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
Based on the information in the discover and assess phase, create an appropriate
SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [purchasing model](../../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This purchasing model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here: -- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, having in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](../../managed-instance/resource-limits.md#hardware-generation-characteristics).-- Based on the baseline memory usage, choose [the service tier that has matching memory](../../managed-instance/resource-limits.md#hardware-generation-characteristics). The amount of memory cannot be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
+- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, keeping in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
+- Based on the baseline memory usage, choose [the service tier that has matching memory](../../managed-instance/resource-limits.md#hardware-configuration-characteristics). The amount of memory cannot be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
- Based on the baseline IO latency of the file subsystem, choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers. - Based on baseline throughput, pre-allocate the size of data or log files to get expected IO performance.
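
As an example of the memory guidance above, if your performance baseline shows roughly 40 GB of memory in use, an 8-vCore Gen5 instance would provide about 41 GB (8 x 5.1 GB), making it a close match for that workload; the 5.1 GB/vCore figure is used here only as an illustrative value.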
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
Previously updated : 03/22/2022 Last updated : 04/06/2022 # Migration overview: SQL Server to Azure SQL Managed Instance [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlmi.md)]
One of the key benefits of migrating your SQL Server databases to SQL Managed In
The following general guidelines can help you choose the right service tier and characteristics of SQL Managed Instance to help match your [performance baseline](sql-server-to-managed-instance-performance-baseline.md): -- Use the CPU usage baseline to provision a managed instance that matches the number of cores that your instance of SQL Server uses. It might be necessary to scale resources to match the [hardware generation characteristics](../../managed-instance/resource-limits.md#hardware-generation-characteristics).
+- Use the CPU usage baseline to provision a managed instance that matches the number of cores that your instance of SQL Server uses. It might be necessary to scale resources to match the [hardware characteristics](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
- Use the memory usage baseline to choose a [vCore option](../../managed-instance/resource-limits.md#service-tier-characteristics) that appropriately matches your memory allocation. - Use the baseline I/O latency of the file subsystem to choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers. - Use the baseline throughput to preallocate the size of the data and log files to achieve expected I/O performance.
azure-sql Sql Server To Sql Managed Instance Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-sql-managed-instance-assessment-rules.md
Previously updated : 12/15/2020 Last updated : 04/06/2022 # Assessment rules for SQL Server to Azure SQL Managed Instance migration [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlmi.md)]
The size of the database is greater than maximum instance reserved storage. **Th
**Recommendation** Evaluate if the data can be archived compressed or sharded into multiple databases. Alternatively, migrate to SQL Server on Azure Virtual Machine.
-More information: [Hardware generation characteristics of Azure SQL Managed Instance ](../../managed-instance/resource-limits.md#hardware-generation-characteristics)
+More information: [Hardware characteristics of Azure SQL Managed Instance](../../managed-instance/resource-limits.md#hardware-configuration-characteristics)
The size of all databases is greater than maximum instance reserved storage.
**Recommendation** Consider migrating the databases to different Azure SQL Managed Instances or to SQL Server on Azure Virtual Machine if all the databases must exist on the same instance.
+More information: [Hardware characteristics of Azure SQL Managed Instance](../../managed-instance/resource-limits.md#hardware-configuration-characteristics)
+More information: [Hardware characteristics of Azure SQL Managed Instance ](../../managed-instance/resource-limits.md#hardware-configuration-characteristics)
## Multiple log files<a id="MultipleLogFiles<"></a>
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
No further action is required.
## December 22, 2021 Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j.
-The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter, NSX-T, SRM and HCX.
+The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX.
We strongly encourage customers to apply the fixes to on-premises HCX connector appliances. We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
If you need any assistance or have questions, please [contact us](https://portal
## November 23, 2021
-Per VMware security advisory [VMSA-2021-0027](https://www.vmware.com/security/advisories/VMSA-2021-0027.html), multiple vulnerabilities in VMware vCenter server have been reported to VMware.
+Per VMware security advisory [VMSA-2021-0027](https://www.vmware.com/security/advisories/VMSA-2021-0027.html), multiple vulnerabilities in VMware vCenter Server have been reported to VMware.
-To address the vulnerabilities (CVE-2021-21980 and CVE-2021-22049) reported in VMware security advisory, vCenter server has been updated to 6.7 Update 3p release in all Azure VMware Solution private clouds.
+To address the vulnerabilities (CVE-2021-21980 and CVE-2021-22049) reported in VMware security advisory, vCenter Server has been updated to 6.7 Update 3p release in all Azure VMware Solution private clouds.
For more information, see [VMware vCenter Server 6.7 Update 3p Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3p-release-notes.html)
No further action is required.
## September 21, 2021
-Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter server have been reported to VMware.
+Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware.
-To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter server version 6.7 Update 3o.
+To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o.
For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html)
For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release E
## July 23, 2021
-All new Azure VMware Solution private clouds are now deployed with NSX-T version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T version in existing private clouds will be upgraded through September, 2021 to NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
+All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September, 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-For more information on this NSX-T version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
+For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
No further action is required.
## May 21, 2021
-Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud.
+Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud.
-During this time, VMware vCenter will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud.
+During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud.
There is no impact to workloads running in your private cloud. ## April 26, 2021
-All new Azure VMware Solution private clouds are now deployed with VMware vCenter version 6.7U3l and NSX-T version 2.5.2. We're not using NSX-T 3.1.1 for new private clouds because of an identified issue in NSX-T 3.1.1 that impacts customer VM connectivity.
+All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 2.5.2. We're not using NSX-T Data Center 3.1.1 for new private clouds because of an identified issue in NSX-T Data Center 3.1.1 that impacts customer VM connectivity.
-The VMware recommended mitigation was applied to all existing private clouds currently running NSX-T 3.1.1 on Azure VMware Solution. The workaround has been confirmed that there's no impact to customer VM connectivity.
+The VMware recommended mitigation was applied to all existing private clouds currently running NSX-T Data Center 3.1.1 on Azure VMware Solution. The workaround has been confirmed that there's no impact to customer VM connectivity.
## March 24, 2021
-All new Azure VMware Solution private clouds are deployed with VMware vCenter version 6.7U3l and NSX-T version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above.
+All new Azure VMware Solution private clouds are deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above.
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade, you'll receive a notification and then again when it finishes. ## March 15, 2021 -- Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter server in your private cloud to vCenter Server 6.7 Update 3l version.
+- Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l version.
-- VMware vCenter will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
+- VMware vCenter Server will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
For more information on this vCenter version, see [VMware vCenter Server 6.7 Upd
- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**. >[!NOTE]
->This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter and clear automatically as the maintenance progresses.
+>This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
## Post update Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Title: Configure DHCP for Azure VMware Solution
description: Learn how to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server. Previously updated : 09/13/2021 Last updated : 04/08/2022 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server.
In this how-to article, you'll use NSX-T Manager to configure DHCP for Azure VMw
- [Use the Azure portal to create a DHCP server or relay](#use-the-azure-portal-to-create-a-dhcp-server-or-relay) -- [Use NSX-T to host your DHCP server](#use-nsx-t-to-host-your-dhcp-server)
+- [Use NSX-T Data Center to host your DHCP server](#use-nsx-t-data-center-to-host-your-dhcp-server)
- [Use a third-party external DHCP server](#use-a-third-party-external-dhcp-server) >[!TIP]
->If you want to configure DHCP using a simplified view of NSX-T operations, see [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
+>If you want to configure DHCP using a simplified view of NSX-T Data Center operations, see [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
>[!IMPORTANT]
->For clouds created on or after July 1, 2021, the simplified view of NSX-T operations must be used to configure DHCP on the default Tier-1 Gateway in your environment.
+>For clouds created on or after July 1, 2021, the simplified view of NSX-T Data Center operations must be used to configure DHCP on the default Tier-1 Gateway in your environment.
>
->DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](configure-l2-stretched-vmware-hcx-networks.md) procedure.
+>DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX-T Data Center, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](configure-l2-stretched-vmware-hcx-networks.md) procedure.
## Use the Azure portal to create a DHCP server or relay
You can create a DHCP server or relay directly from Azure VMware Solution in the
-## Use NSX-T to host your DHCP server
-If you want to use NSX-T to host your DHCP server, you'll create a DHCP server and a relay service. Then you'll add a network segment and specify the DHCP IP address range.
+## Use NSX-T Data Center to host your DHCP server
+If you want to use NSX-T Data Center to host your DHCP server, you'll create a DHCP server and a relay service. Then you'll add a network segment and specify the DHCP IP address range.
### Create a DHCP server
If you want to use NSX-T to host your DHCP server, you'll create a DHCP server a
1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
- :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway for using a DHCP server." border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true":::
1. Select **No IP Allocation Set** to add a subnet.
- :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway for using a DHCP server." border="true":::
+ :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true":::
1. For **Type**, select **DHCP Local Server**.
If you want to use a third-party external DHCP server, you'll create a DHCP rela
>[!IMPORTANT]
->For clouds created on or after July 1, 2021, the simplified view of NSX-T operations must be used to configure DHCP on the default Tier-1 Gateway in your environment.
+>For clouds created on or after July 1, 2021, the simplified view of NSX-T Data Center operations must be used to configure DHCP on the default Tier-1 Gateway in your environment.
### Create DHCP relay service
Use a DHCP relay for any non-NSX-based DHCP service. For example, a VM running D
1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
- :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway." border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the NSX-T Data Center Tier-1 Gateway." border="true":::
1. Select **No IP Allocation Set** to define the IP address allocation.
- :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway." border="true":::
+ :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the NSX-T Data Center Tier-1 Gateway." border="true":::
1. For **Type**, select **DHCP Server**.
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-github-enterprise-server.md
This article set up a new instance of GitHub Enterprise Server, the self-hosted
Now that you've covered setting up GitHub Enterprise Server on your Azure VMware Solution private cloud, you may want to learn about: - [How to get started with GitHub Actions](https://docs.github.com/en/actions)-- [How to join the beta program](https://resources.github.com/beta-signup/)
+- [How to join the beta program](https://docs.github.com/en/get-started/signing-up-for-github/signing-up-for-a-new-github-account)
- [Administration of GitHub Enterprise Server](https://githubtraining.github.io/admin-training/#/00_getting_started)
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter
-description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter as an external identity source.
+ Title: Configure external identity source for vCenter Server
+description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source.
Previously updated : 08/31/2021 Last updated : 04/07/2022
Last updated 08/31/2021
In this how-to, you learn how to: > [!div class="checklist"]
-> * List all existing external identity sources integrated with vCenter SSO
+> * List all existing external identity sources integrated with vCenter Server SSO
> * Add Active Directory over LDAP, with or without SSL > * Add existing AD group to cloudadmin group > * Remove AD group from the cloudadmin role
In this how-to, you learn how to:
-You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identity sources already integrated with vCenter SSO.
+You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identity sources already integrated with vCenter Server SSO.
1. Sign in to the [Azure portal](https://portal.azure.com).
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
## Add Active Directory over LDAP with SSL
-You'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter.
+You'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server.
1. Download the certificate for AD authentication and upload it to an Azure Storage account as blob storage. If multiple certificates are required, upload each certificate individually.
You'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL
>[!NOTE] >We don't recommend this method. Instead, use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method.
-You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter.
+You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter Server.
1. Select **Run command** > **Packages** > **New-LDAPIdentitySource**.
You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an externa
## Add existing AD group to cloudadmin group
-You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to cloudadmin group. The users in this group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter SSO.
+You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to cloudadmin group. The users in this group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**.
Now that you've learned about how to configure LDAP and LDAPS, you can learn mor
- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating. -- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
+- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
azure-web-pubsub Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-metrics.md
+
+ Title: Metrics in Azure Web PubSub Service
+description: Metrics in Azure Web PubSub Service.
+++ Last updated : 04/08/2022++
+# Metrics in Azure Web PubSub Service
+
+Azure Web PubSub Service has some built-in metrics, and you can set up [alerts](../azure-monitor/alerts/alerts-overview.md) based on these metrics.
+
+## Understand metrics
+
+Metrics provide information about how the service is running. The available metrics are:
+
+|Metric|Unit|Recommended Aggregation Type|Description|Dimensions|
+|--|--|--|--|--|
+|Connection Close Count|Count|Sum|The count of connections closed for various reasons.|ConnectionCloseCategory|
+|Connection Count|Count|Max / Avg|The number of connections.|No Dimensions|
+|Connection Open Count|Count|Sum|The count of new connections opened.|No Dimensions|
+|Connection Quota Utilization|Percent|Max / Avg|The percentage of connections relative to the connection quota.|No Dimensions|
+|Inbound Traffic|Bytes|Sum|The inbound traffic of the service.|No Dimensions|
+|Outbound Traffic|Bytes|Sum|The outbound traffic of the service.|No Dimensions|
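+
+As a minimal sketch of creating an alert on one of these metrics with Azure PowerShell, consider the example below. The metric name `ConnectionQuotaUtilization`, the resource IDs, and the threshold are assumptions that you would replace with your own values.
+
+```powershell
+# Hypothetical resource IDs; replace them with your Web PubSub resource and action group.
+$resourceId    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.SignalRService/webPubSub/<resource-name>"
+$actionGroupId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"
+
+# Assumed metric name for "Connection Quota Utilization"; verify it in the metrics blade of your resource.
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "ConnectionQuotaUtilization" -TimeAggregation Maximum -Operator GreaterThan -Threshold 80
+
+Add-AzMetricAlertRuleV2 -Name "webpubsub-connection-quota" -ResourceGroupName "<resource-group>" -TargetResourceId $resourceId -Condition $criteria -ActionGroupId $actionGroupId -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 3
+```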
+
+### Understand Dimensions
+
+Dimensions of a metric are name/value pairs that carry extra data to describe the metric value.
+
+The following dimension is available in some metrics:
+
+* ConnectionCloseCategory: Describes the reason why a connection was closed. It includes the following dimension values:
+ - Normal: Normal closure.
+ - Throttled: The connection was throttled because of traffic or connection limits. Check the Connection Count and Outbound Traffic usage against your resource limits.
+ - SendEventFailed: The event handler invocation failed.
+ - EventHandlerNotFound: The event handler wasn't found.
+ - SlowClient: Too many messages were queued up at the service side, waiting to be sent to the client.
+ - ServiceTransientError: An internal server error occurred.
+ - BadRequest: The request was invalid, for example because of an invalid hub name or a malformed payload.
+ - ServiceReload: This is triggered when a connection is dropped due to an internal service component reload. This event doesn't indicate a malfunction and is part of normal service operation.
+ - Unauthorized: The connection is unauthorized.
+
+Learn more about [multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics)
+
+## Related resources
+
+- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicewebpubsub)
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
This article lists current metros containing point-of-presence (POP) locations,
|--|--|--| | North America | Guadalajara, Mexico<br />Mexico City, Mexico<br />Puebla, Mexico<br />Querétaro, Mexico<br />Atlanta, GA, USA<br />Boston, MA, USA<br />Chicago, IL, USA<br />Dallas, TX, USA<br />Denver, CO, USA<br />Detroit, MI, USA<br />Los Angeles, CA, USA<br />Miami, FL, USA<br />New York, NY, USA<br />Philadelphia, PA, USA<br />San Jose, CA, USA<br />Seattle, WA, USA<br />Washington, DC, USA <br /> Ashburn, VA, USA <br /> Phoenix, AZ, USA | Canada<br />Mexico<br />USA | | South America | Buenos Aires, Argentina<br />Rio de Janeiro, Brazil<br />São Paulo, Brazil<br />Valparaíso, Chile<br />Bogota, Colombia<br />Barranquilla, Colombia<br />Medellin, Colombia<br />Quito, Ecuador<br />Lima, Peru | Argentina<br />Brazil<br />Chile<br />Colombia<br />Ecuador<br />Peru<br />Uruguay |
-| Europe | Vienna, Austria<br />Copenhagen, Denmark<br />Helsinki, Finland<br />Marseille, France<br />Paris, France<br />Frankfurt, Germany<br />Milan, Italy<br />Riga, Latvia<br />Amsterdam, Netherlands<br />Warsaw, Poland<br />Madrid, Spain<br />Stockholm, Sweden<br />London, UK <br /> Manchester, UK | Austria<br />Bulgaria<br />Denmark<br />Finland<br />France<br />Germany<br />Greece<br />Ireland<br />Italy<br />Netherlands<br />Poland<br />Russia<br />Spain<br />Sweden<br />Switzerland<br />United Kingdom |
+| Europe | Vienna, Austria<br />Copenhagen, Denmark<br />Helsinki, Finland<br />Marseille, France<br />Paris, France<br />Frankfurt, Germany<br />Milan, Italy<br />Riga, Latvia<br />Amsterdam, Netherlands<br />Warsaw, Poland<br />Madrid, Spain<br />Stockholm, Sweden<br />London, UK <br /> Manchester, UK | Austria<br />Bulgaria<br />Denmark<br />Finland<br />France<br />Germany<br />Greece<br />Ireland<br />Italy<br />Netherlands<br />Norway<br />Poland<br />Russia<br />Spain<br />Sweden<br />Switzerland<br />United Kingdom |
| Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa | | Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates | | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
cloud-services-extended-support States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/states.md
+
+ Title: Available States for Azure Cloud Services (extended support)
+description: Available Power and Provisioning States for Azure Cloud Services (extended support)
++++ Last updated : 04/05/2022++
+# Available Provisioning and Power States for Azure Cloud Services (extended support)
+
+## Available Provisioning States for Azure Cloud Services (extended support)
+
+This table lists the different provisioning states for a Cloud Services (extended support) resource.
+
+| Status | Description |
+|--|--|
+|Creating|The CSES resource is in the process of being created|
+|Updating|The CSES resource is being updated|
+|Failed|The CSES resource is unable to achieve the state requested in the deployment|
+|Succeeded|The CSES resource was successfully deployed with the latest deployment request|
+|Deleting|The CSES resource is in the process of being deleted|
+
+## Available Role Instance/Power States for Azure Cloud Services (extended support)
+
+This table lists the different power states for Cloud Services (extended support) instances.
+
+|State|Details|
+|--|--|
+|Started|The Role Instance is healthy and is currently running|
+|Stopping|The Role Instance is in the process of being stopped|
+|Stopped|The Role Instance is in the Stopped state|
+|Unknown|The Role Instance is either being created or isn't ready to serve traffic|
+|Starting|The Role Instance is in the process of moving to healthy/running state|
+|Busy|The Role Instance is not responding|
+|Destroyed|The Role instance is destroyed|
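+
+To check these states programmatically, a minimal Azure PowerShell sketch is shown below. The cmdlet and parameter names are taken from the Az.CloudService module as assumptions to verify against your installed version, and the resource names are placeholders.
+
+```powershell
+# Placeholders; replace with your resource group and Cloud Service (extended support) name.
+$resourceGroup = "<resource-group>"
+$serviceName   = "<cloud-service-name>"
+
+# Provisioning state of the CSES resource (assumed to surface as a ProvisioningState property).
+$cses = Get-AzCloudService -ResourceGroupName $resourceGroup -CloudServiceName $serviceName
+Write-Output "Provisioning state: $($cses.ProvisioningState)"
+
+# Role instances of the CSES resource; their current states appear in the returned objects.
+Get-AzCloudServiceRoleInstance -ResourceGroupName $resourceGroup -CloudServiceName $serviceName
+```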
++
+## Next steps
+- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cognitive-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-lower-speech-synthesis-latency.md
Meanwhile, a compressed audio format helps to save the users' network bandwidth,
We support many compressed formats including `opus`, `webm`, `mp3`, `silk`, and so on, see the full list in [SpeechSynthesisOutputFormat](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat). For example, the bitrate of `Riff24Khz16BitMonoPcm` format is 384 kbps, while `Audio24Khz48KBitRateMonoMp3` only costs 48 kbps.
-Our Speech SDK will automatically use a compressed format for transmission when a `pcm` output format is set and `GStreamer` is properly installed.
+Our Speech SDK will automatically use a compressed format for transmission when a `pcm` output format is set.
+For Linux and Windows, `GStreamer` is required to enable this feature.
Refer to [these instructions](how-to-use-codec-compressed-audio-input-streams.md) to install and configure `GStreamer` for the Speech SDK.
+For Android, iOS, and macOS, no extra configuration is needed starting with version 1.20.
## Other tips
See [How to configure OpenSSL for Linux](how-to-configure-openssl-linux.md#certi
### Use latest Speech SDK
-We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
+We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
## Load test guideline
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Fortunately, it's possible to avoid these issues entirely. There are many source
|Text source|Description| |-|-|
-|[CMU Arctic corpus](http://festvox.org/cmu_arctic/)|About 1100 sentences selected from out-of-copyright works specifically for use in speech synthesis projects. An excellent starting point.|
+|[CMU Arctic corpus](https://pyroomacoustics.readthedocs.io/en/pypi-release/pyroomacoustics.datasets.cmu_arctic.html)|About 1100 sentences selected from out-of-copyright works specifically for use in speech synthesis projects. An excellent starting point.|
|Works no longer<br>under copyright|Typically works published prior to 1923. For English, [Project Gutenberg](https://www.gutenberg.org/) offers tens of thousands of such works. You may want to focus on newer works, as the language will be closer to modern English.| |Government&nbsp;works|Works created by the United States government are not copyrighted in the United States, though the government may claim copyright in other countries/regions.| |Public domain|Works for which copyright has been explicitly disclaimed or that have been dedicated to the public domain. It may not be possible to waive copyright entirely in some jurisdictions.|
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Cognitive Services for Big Data is an example of how we can integrate intelligen
## Blog posts - [Learn more about how Cognitive Services work on Apache Spark&trade;](https://azure.microsoft.com/blog/dear-spark-developers-welcome-to-azure-cognitive-services/)-- [Saving Snow Leopards with Deep Learning and Computer Vision on Spark](http://www.datawizard.io/2017/06/27/saving-snow-leopards-with-deep-learning-and-computer-vision-on-spark/)
+- [Saving Snow Leopards with Deep Learning and Computer Vision on Spark](/archive/blogs/machinelearning/saving-snow-leopards-with-deep-learning-and-computer-vision-on-spark)
- [Microsoft Research Podcast: MMLSpark, empowering AI for Good with Mark Hamilton](https://blubrry.com/microsoftresearch/49485070/092-mmlspark-empowering-ai-for-good-with-mark-hamilton/) - [Academic Whitepaper: Large Scale Intelligent Microservices](https://arxiv.org/abs/2009.08044)
Cognitive Services for Big Data is an example of how we can integrate intelligen
- [Getting Started with the Cognitive Services for Big Data](getting-started.md) - [Simple Python Examples](samples-python.md)-- [Simple Scala Examples](samples-scala.md)
+- [Simple Scala Examples](samples-scala.md)
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
leaveChatBtn.addEventListener('click', function() {
### Clean up resources
-Communication Services applications should dispose the Credential instance when it's no longer needed. Disposing the credential is also the recommended way of canceling scheduled refresh actions when the proactive refreshing is enabled.
+Since the Credential object can be passed to multiple Chat or Calling client instances, the SDK makes no assumptions about its lifetime and leaves the responsibility for its disposal to the developer. It's up to the Communication Services application to dispose of the Credential instance when it's no longer needed. Disposing the credential is also the recommended way of canceling scheduled refresh actions when proactive refreshing is enabled.
Call the `.dispose()` function.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
The three types of modes are
- **Round Robin**: Workers are ordered by `Id` and the next worker after the previous one that got an offer is picked. - **Longest Idle**: The worker that has not been working on a job for the longest.-- **Best Worker**: The workers that are best able to handle the job are picked first. The logic to rank Workers can be customized, with an expression or Azure function to compare two workers.
+- **Best Worker**: The workers that are best able to handle the job are picked first. The logic to rank Workers can be customized, with an expression or Azure function to compare two workers. [See example][worker-scoring]
## Labels
An exception policy controls the behavior of a Job based on a trigger and execut
[worker_registered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered [job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified [offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued
-[offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted
+[offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted
+[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md
await client.upsertClassificationPolicy({
## Next steps -- [Azure Function Rule](azure-function-rule-engine.md)
+- [Azure Function Rule How To](../../how-tos/router-sdk/azure-function.md)
communication-services Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/azure-function.md
+
+ Title: Azure function rule example
+
+description: Learn how to wire up Azure Functions to Job Router decision points.
+++++ Last updated : 04/08/2022++
+
+
+# Azure function rule engine
++
+As part of the customer extensibility model, Azure Communication Services Job Router supports an Azure Function based rule engine. It gives you the ability to bring your own Azure function. With Azure Functions, you can incorporate custom and complex logic into the process of routing.
+
+## Creating an Azure function
+
+If you're new to Azure Functions, refer to [Getting started with Azure Functions](../../../azure-functions/functions-get-started.md) to learn how to create your first function with your favorite tool and language.
+
+> [!NOTE]
+> Your Azure Function will need to be configured to use an [Http trigger](../../../azure-functions/functions-triggers-bindings.md)
+
+The HTTP request body that is sent to your function will include the labels of each of the entities involved. For example, if you're writing a function to determine job priority, the payload will include all the job labels under the `job` key.
+
+```json
+{
+ "job": {
+ "label1": "foo",
+ "label2": "bar",
+ "urgent": true,
+ }
+}
+```
+
+The following example will inspect the value of the `urgent` label and return a priority of 10 if it's true.
+
+```csharp
+// Requires the Azure Functions (in-process) HTTP trigger and Newtonsoft.Json packages.
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+using Newtonsoft.Json.Linq;
+
+public static class GetPriority
+{
+ [FunctionName("GetPriority")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
+ ILogger log)
+ {
+ var priority = 5;
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ var data = JsonConvert.DeserializeObject<JObject>(requestBody);
+ var isUrgent = data["job"]["urgent"].Value<bool>();
+ if (isUrgent)
+ priority = 10;
+
+ return new OkObjectResult(JsonConvert.SerializeObject(priority));
+ }
+}
+```
+
+## Configure a policy to use the Azure function
+
+Inspect your deployed function in the Azure portal and locate the function URI and authentication key. Then use the SDK to configure a policy that uses a rule engine to point to that function.
+
+```csharp
+await client.SetClassificationPolicyAsync(
+ "policy-1",
+ prioritizationRule: new AzureFunctionRule("<insert function uri>", new AzureFunctionRuleCredential("<insert function key>")));
+```
+
+When a new job is submitted or updated, this function will be called to determine the priority of the job.
+
+## Errors
+
+If the Azure Function fails or returns a non-200 code, the job will move to the `ClassificationFailed` state and you'll receive a `JobClassificationFailedEvent` from Event Grid containing details of the error.
communication-services Customize Worker Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/customize-worker-scoring.md
+
+ Title: Azure Function Rule concepts for Azure Communication Services
+
+description: Learn how to customize how workers are ranked for the best worker mode
+++++ Last updated : 02/23/2022++
+
+
+# How to customize how workers are ranked for the best worker distribution mode
++
+The `best-worker` distribution mode selects the workers that are best able to handle the job first. The logic to rank Workers can be customized, with an expression or Azure function to compare two workers. The following example shows how to customize this logic with your own Azure Function.
+
+## Scenario: Custom scoring rule in best worker distribution mode
+
+We want to distribute offers among the workers associated with a queue. The workers will be given a score based on their labels and skill set. The worker with the highest score should get the first offer (_BestWorker Distribution Mode_).
++
+### Situation
+
+- A job has been created and classified.
+ - Job has the following **labels** associated with it
+ - ["CommunicationType"] = "Chat"
+ - ["IssueType"] = "XboxSupport"
+ - ["Language"] = "en"
+ - ["HighPriority"] = true
+ - ["SubIssueType"] = "ConsoleMalfunction"
+ - ["ConsoleType"] = "XBOX_SERIES_X"
+ - ["Model"] = "XBOX_SERIES_X_1TB"
+ - Job has the following **WorkerSelectors** associated with it
+ - ["English"] >= 7
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+- Job currently is in a state of '**Queued**'; enqueued in *Xbox Hardware Support Queue* waiting to be matched to a worker.
+- Multiple workers become available simultaneously.
+ - **Worker 1** has been created with the following **labels**
+ - ["HighPrioritySupport"] = true
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX_SERIES_X"] = true
+ - ["English"] = 10
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+ - **Worker 2** has been created with the following **labels**
+ - ["HighPrioritySupport"] = true
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX_SERIES_X"] = true
+ - ["Support_XBOX_SERIES_S"] = true
+ - ["English"] = 8
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+ - **Worker 3** has been created with the following **labels**
+ - ["HighPrioritySupport"] = false
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX"] = true
+ - ["English"] = 7
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+
+### Expectation
+
+We would like the following behavior when scoring workers to select which worker gets the first offer.
++
+The decision flow (as shown above) is as follows:
+
+- If a job is **NOT HighPriority**:
+ - Workers with label: **["Support_XBOX"] = true**; get a score of *100*
+ - Otherwise, get a score of *1*
+
+- If a job is **HighPriority**:
+ - Workers with label: **["HighPrioritySupport"] = false**; get a score of *1*
+ - Otherwise, if **["HighPrioritySupport"] = true**:
+ - Does Worker specialize in console type -> Does worker have label: **["Support_<**jobLabels.ConsoleType**>"] = true**? If true, worker gets score of *200*
+ - Otherwise, get a score of *100*
+
+## Creating an Azure function
+
+Before moving any further in the process, let's first define an Azure function that scores workers.
+> [!NOTE]
+> The following Azure function is using JavaScript. For more information, please refer to [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../../../azure-functions/create-first-function-vs-code-node.md)
+
+Sample input for **Worker 1**
+
+```json
+{
+ "job": {
+ "CommunicationType": "Chat",
+ "IssueType": "XboxSupport",
+ "Language": "en",
+ "HighPriority": true,
+ "SubIssueType": "ConsoleMalfunction",
+ "ConsoleType": "XBOX_SERIES_X",
+ "Model": "XBOX_SERIES_X_1TB"
+ },
+ "selectors": [
+ {
+ "key": "English",
+ "operator": "GreaterThanEqual",
+ "value": 7,
+ "ttl": null
+ },
+ {
+ "key": "ChatSupport",
+ "operator": "Equal",
+ "value": true,
+ "ttl": null
+ },
+ {
+ "key": "XboxSupport",
+ "operator": "Equal",
+ "value": true,
+ "ttl": null
+ }
+ ],
+ "worker": {
+ "Id": "e3a3f2f9-3582-4bfe-9c5a-aa57831a0f88",
+ "HighPrioritySupport": true,
+ "HardwareSupport": true,
+ "Support_XBOX_SERIES_X": true,
+ "English": 10,
+ "ChatSupport": true,
+ "XboxSupport": true
+ }
+}
+```
+
+Sample implementation:
+
+```javascript
+module.exports = async function (context, req) {
+ context.log('Best Worker Distribution Mode using Azure Function');
+
+ let score = 0;
+ const jobLabels = req.body.job;
+ const workerLabels = req.body.worker;
+
+ const isHighPriority = !!jobLabels["HighPriority"];
+ context.log('Job is high priority? Status: ' + isHighPriority);
+
+ if(!isHighPriority) {
+ const isGenericXboxSupportWorker = !!workerLabels["Support_XBOX"];
+ context.log('Worker provides general xbox support? Status: ' + isGenericXboxSupportWorker);
+
+ score = isGenericXboxSupportWorker ? 100 : 1;
+
+ } else {
+ const workerSupportsHighPriorityJob = !!workerLabels["HighPrioritySupport"];
+ context.log('Worker provides high priority support? Status: ' + workerSupportsHighPriorityJob);
+
+ if(!workerSupportsHighPriorityJob) {
+ score = 1;
+ } else {
+ const key = `Support_${jobLabels["ConsoleType"]}`;
+
+ const workerSpecializeInConsoleType = !!workerLabels[key];
+ context.log(`Worker specializes in consoleType: ${jobLabels["ConsoleType"]} ? Status: ${workerSpecializeInConsoleType}`);
+
+ score = workerSpecializeInConsoleType ? 200 : 100;
+ }
+ }
+ context.log('Final score of worker: ' + score);
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: score
+ };
+}
+```
+
+Output for **Worker 1**
+
+```markdown
+200
+```
+
+With the aforementioned implementation, for the given job we'll get the following scores for workers:
+
+| Worker | Score |
+|--|-|
+| Worker 1 | 200 |
+| Worker 2 | 200 |
+| Worker 3 | 1 |
+
+## Distribute offers based on best worker mode
+
+Now that the Azure function app is ready, let us create an instance of **BestWorkerDistribution** mode using Router SDK.
+
+```csharp
+ // -- initialize router client
+ // Setup Distribution Policy
+ var bestWorkerDistributionMode = new BestWorkerMode(
+ scoringRule: new AzureFunctionRule(
+ functionAppUrl: "<insert function url>"));
+
+ var distributionPolicy = await client.SetDistributionPolicyAsync(
+ id: "BestWorkerDistributionMode",
+ mode: bestWorkerDistributionMode,
+ name: "XBox hardware support distribution",
+ offerTTL: TimeSpan.FromMinutes(5));
+
+ // Setup Queue
+ var queue = await client.SetQueueAsync(
+ id: "XBox_Hardware_Support_Q",
+ distributionPolicyId: distributionPolicy.Value.Id,
+ name: "XBox Hardware Support Queue");
+
+ // Setup Channel
+ var channel = await client.SetChannelAsync("Xbox_Chat_Channel");
+
+ // Create workers
+
+ var worker1Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = true,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX_SERIES_X"] = true,
+ ["English"] = 10,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker1 = await client.RegisterWorkerAsync(
+ id: "Worker_1",
+ totalCapacity: 100,
+ queueIds: new[] {queue.Value.Id},
+ labels: worker1Labels,
+ channelConfigurations: new[] {new ChannelConfiguration(channel.Value.Id, 10)});
+
+ var worker2Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = true,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX_SERIES_X"] = true,
+ ["Support_XBOX_SERIES_S"] = true,
+ ["English"] = 8,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker2 = await client.RegisterWorkerAsync(
+ id: "Worker_2",
+ totalCapacity: 100,
+ queueIds: new[] { queue.Value.Id },
+ labels: worker2Labels,
+ channelConfigurations: new[] { new ChannelConfiguration(channel.Value.Id, 10) });
+
+ var worker3Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = false,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX"] = true,
+ ["English"] = 7,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker3 = await client.RegisterWorkerAsync(
+ id: "Worker_3",
+ totalCapacity: 100,
+ queueIds: new[] { queue.Value.Id },
+ labels: worker3Labels,
+ channelConfigurations: new[] { new ChannelConfiguration(channel.Value.Id, 10) });
+
+ // Create Job
+ var jobLabels = new LabelCollection()
+ {
+ ["CommunicationType"] = "Chat",
+ ["IssueType"] = "XboxSupport",
+ ["Language"] = "en",
+ ["HighPriority"] = true,
+ ["SubIssueType"] = "ConsoleMalfunction",
+ ["ConsoleType"] = "XBOX_SERIES_X",
+ ["Model"] = "XBOX_SERIES_X_1TB"
+ };
+ var workerSelectors = new List<LabelSelector>()
+ {
+ new LabelSelector("English", LabelOperator.GreaterThanEqual, 7),
+ new LabelSelector("ChatSupport", LabelOperator.Equal, true),
+ new LabelSelector("XboxSupport", LabelOperator.Equal, true)
+ };
+ var job = await client.CreateJobAsync(
+ channelId: channel.Value.Id,
+ queueId: queue.Value.Id,
+ priority: 100,
+ channelReference: "ChatChannel",
+ labels: jobLabels,
+ workerSelectors: workerSelectors);
+
+ var getJob = await client.GetJobAsync(job.Value.Id);
+ Console.WriteLine(getJob.Value.Assignments.Select(assignment => assignment.Value.WorkerId).First());
+```
+
+Output
+
+```markdown
+Worker_1 // or Worker_2
+
+Since both workers, Worker_1 and Worker_2, get the same score of 200,
+the worker who has been idle the longest will get the first offer.
+```
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
The following links show how to update containers analytical TTL by using PowerS
## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a container
-Analytical store can be disabled in SQL API containers using PowerShell, by updating `-AnalyticalStorageTtl` (analytical Time-To-Live) to `0`. Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+Analytical store can be disabled in SQL API containers using `Update-AzCosmosDBSqlContainer` PowerShell command, by updating `-AnalyticalStorageTtl` (analytical Time-To-Live) to `0`. Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
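+
+A minimal sketch of that command follows; the resource group, account, database, and container names are placeholders to replace with your own values.
+
+```powershell
+# Placeholders; replace with your own resource names. Setting the value to 0 disables
+# analytical store for the container, and this action currently can't be undone.
+Update-AzCosmosDBSqlContainer `
+    -ResourceGroupName "<resource-group>" `
+    -AccountName "<cosmos-account>" `
+    -DatabaseName "<database>" `
+    -Name "<container>" `
+    -AnalyticalStorageTtl 0
+```
+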
Currently, analytical store can't be disabled in MongoDB API collections.
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
Last updated 02/28/2022 + # Manage permissions to restore an Azure Cosmos DB account
Scope is a set of resources that have access; to learn more about scopes, see the [
To perform a restore, a user or a principal needs the permission to restore (that is, *restore/action* permission), and permission to provision a new account (that is, *write* permission). To grant these permissions, the owner can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built-in roles to a principal.
-1. Sign into the [Azure portal](https://portal.azure.com/)
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your subscription.
-1. Navigate to your subscription and go to **Access control (IAM)** tab and select **Add** > **Add role assignment**
+1. Select **Access control (IAM)**.
-1. In the **Add role assignment** pane, for **Role** field, select **CosmosRestoreOperator** role. Choose **User, group, or a service principal** for the **Assign access to** field and search for a user's name or email ID as shown in the following image:
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
- :::image type="content" source="./media/continuous-backup-restore-permissions/assign-restore-operator-roles.png" alt-text="Assign CosmosRestoreOperator and Cosmos DB Operator roles." border="true":::
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. Select **Save** to grant the *restore/action* permission.
+ | Setting | Value |
+ | | |
+ | Role | CosmosRestoreOperator |
+ | Assign access to | User, group, or service principal |
+ | Members | &lt;User of your choice&gt; |
-1. Repeat Step 3 with **Cosmos DB Operator** role to grant the write permission. When assigning this role from the Azure portal, it grants the restore permission to the whole subscription.
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot that shows Add role assignment page in Azure portal.":::
+
+1. Repeat step 4 with the **Cosmos DB Operator** role to grant the write permission. When assigning this role from the Azure portal, it grants the restore permission to the whole subscription.
## Permission scopes
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
Previously updated : 03/02/2022 Last updated : 04/08/2022
# Get the latest restorable timestamp for continuous backup accounts [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time for SQL containers, Table API Tables (in Preview), Graph API graphs(in Preview), and MongoDB collections using Azure PowerShell and Azure CLI. You can see the request and response format for the PowerShell and CLI commands.
+This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time using Azure PowerShell and Azure CLI, and provides the request and response format for the PowerShell and CLI commands.
+
+This feature is supported for Cosmos DB SQL API containers and Cosmos DB MongoDB API collections. This feature is in preview for Table API tables and Gremlin API graphs.
## SQL container
Get-AzCosmosDBSqlContainerBackupInformation -ResourceGroupName "rg" `
-Location "eastus" ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console LatestRestorableTimestamp
az cosmosdb sql retrieve-latest-backup-time -g "rg" \
-l "eastus" ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console {
Get-LatestRestorableTimestampForSqlDatabase `
-Location eastus ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console Latest restorable timestamp for a database is minimum of restorable timestamps of all the underlying containers
Get-LatestRestorableTimestampForSqlAccount `
-location eastus ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console Latest restorable timestamp for an account is minimum of restorable timestamps of all the underlying containers
Get-AzCosmosDBMongoDBCollectionBackupInformation `
-Location "eastus" ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console LatestRestorableTimestamp
Import-Module .\LatestRestorableTimestampForMongoDBDatabase.ps1
Get-LatestRestorableTimestampForMongoDBDatabase -ResourceGroupName rg -accountName mongopitracc -databaseName db1 -location eastus ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console Latest restorable timestamp for a database is minimum of restorable timestamps of all the underlying collections
Get-LatestRestorableTimestampForMongoDBAccount `
-Location eastus ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console Latest restorable timestamp for an account is minimum of restorable timestamps of all the underlying collections Wednesday, November 3, 2021 8:33:49 PM ```
-## Gremlin Graph Backup information
+## Gremlin graph backup information
### PowerShell
Get-AzCosmosDBGremlinGraphBackupInformation `
-Location "eastus" ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console LatestRestorableTimestamp
az cosmosdb gremlin retrieve-latest-backup-time \
} ```
-## Table Backup information
+## Table backup information
### PowerShell
Get-AzCosmosDBTableBackupInformation `
-Location "eastus" ```
-**Sample response (In UTC Format):**
+**Sample response (in UTC format):**
```console LatestRestorableTimestamp
az cosmosdb table retrieve-latest-backup-time \
## Next steps
-* [Introduction to continuous backup mode with point-in-time restore.](continuous-backup-restore-introduction.md)
+* [Introduction to continuous backup mode with point-in-time restore](continuous-backup-restore-introduction.md)
-* [Continuous backup mode resource model.](continuous-backup-restore-resource-model.md)
+* [Continuous backup mode resource model](continuous-backup-restore-resource-model.md)
* [Configure and manage continuous backup mode](continuous-backup-restore-portal.md) using Azure portal.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
The way you create a `TokenCredential` instance is beyond the scope of this arti
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes) - [In Java](/java/api/overview/azure/identity-readme#credential-classes) - [In JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)
+- [In Python](/python/api/overview/azure/identity-readme?view=azure-python#credential-classes)
The examples below use a service principal with a `ClientSecretCredential` instance.
const client = new CosmosClient({
}); ```
+### In Python
+
+Azure Cosmos DB RBAC is supported in the [Python SDK version 4.3.0b4](sql-api-sdk-python.md) and higher.
+
+```python
+from azure.identity import ClientSecretCredential
+from azure.cosmos import CosmosClient
+
+aad_credentials = ClientSecretCredential(
+ tenant_id="<azure-ad-tenant-id>",
+ client_id="<client-application-id>",
+ client_secret="<client-application-secret>")
+client = CosmosClient("<account-endpoint>", aad_credentials)
+```
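+
+Once the client is created with Azure AD credentials, data-plane calls work the same way as with key-based authentication. The following is a minimal sketch; the database and container names are placeholders and aren't part of this article:
+
+```python
+# Assumes `client` was created with ClientSecretCredential as shown above.
+# "<database-name>" and "<container-name>" are placeholders for your own resources.
+database = client.get_database_client("<database-name>")
+container = database.get_container_client("<container-name>")
+
+# Query items using the identity's RBAC data-plane permissions.
+items = list(container.query_items(
+    query="SELECT * FROM c",
+    enable_cross_partition_query=True))
+print(f"Read {len(items)} items")
+```
+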
+ ## Authenticate requests on the REST API When constructing the [REST API authorization header](/rest/api/cosmos-db/access-control-on-cosmosdb-resources), set the **type** parameter to **aad** and the hash signature **(sig)** to the **oauth token** as shown in the following example:
cosmos-db Latest Restore Timestamp Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/latest-restore-timestamp-continuous-backup.md
Previously updated : 03/03/2022 Last updated : 04/08/2022
Azure Cosmos DB offers an API to get the latest restorable timestamp of a contai
This API also takes the account location as an input parameter and returns the latest restorable timestamp for the given container in this location. If an account exists in multiple locations, then the latest restorable timestamp for a container in different locations could be different because the backups in each location are taken independently.
-By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of latest restorable timestamp api, how it gets calculated and use cases for it. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for SQL, Table (preview), Graph API (preview), and MongoDB accounts.
+By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of the latest restorable timestamp API, how it's calculated, and its use cases. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for SQL API, MongoDB API, Table API (preview), and Gremlin API (preview) accounts.
## Use cases
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Last updated 07/02/2021 -+
In this scenario, the function app will read the temperature of the aquarium, th
### Assign the role using Azure portal
-1. Sign in to the Azure portal and go to your Azure Cosmos DB account. Open the **Access control (IAM)** pane and then the **Role assignments** tab:
+1. Sign in to the Azure portal and go to your Azure Cosmos DB account.
- :::image type="content" source="./media/managed-identity-based-authentication/cosmos-db-iam-tab.png" alt-text="Screenshot showing the Access control pane and the Role assignments tab.":::
+1. Select **Access control (IAM)**.
-1. Select **+ Add** > **Add role assignment**.
+1. Select **Add** > **Add role assignment**.
-1. The **Add role assignment** panel opens to the right:
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
- :::image type="content" source="./media/managed-identity-based-authentication/cosmos-db-iam-tab-add-role-pane.png" alt-text="Screenshot showing the Add role assignment pane.":::
+1. On the **Roles** tab, select **DocumentDB Account Contributor**.
- * **Role**: Select **DocumentDB Account Contributor**
- * **Assign access to**: Under the **Select system-assigned managed identity** subsection, select **Function App**.
- * **Select**: The pane will be populated with all the function apps in your subscription that have a **Managed System Identity**. In this case, select the **FishTankTemperatureService** function app:
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
- :::image type="content" source="./media/managed-identity-based-authentication/cosmos-db-iam-tab-add-role-pane-filled.png" alt-text="Screenshot showing the Add role assignment pane populated with examples.":::
+1. Select your Azure subscription.
-1. After you have selected your function app, select **Save**.
+1. Under **System-assigned managed identity**, select **Function App**, and then select **FishTankTemperatureService**.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
### Assign the role using Azure CLI
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Previously updated : 12/08/2021 Last updated : 04/08/2022 -+ # Migrate an Azure Cosmos DB account from periodic to continuous backup mode Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can leverage the benefits of continuous mode.
The following are the key reasons to migrate into continuous mode:
> You can migrate an account to continuous backup mode only if the following conditions are true. Also checkout the [point in time restore limitations](continuous-backup-restore-introduction.md#current-limitations) before migrating your account: > > * If the account is of type SQL API or API for MongoDB.
+> * If the account is of type Table API or Gremlin API. These two APIs are in preview.
> * If the account has a single write region. > * If the account isn't enabled with analytical store. >
You can restore your account after the migration completes. If the migration com
Yes. #### Which accounts can be targeted for backup migration?
-Currently, SQL API and API for MongoDB accounts with single write region, that have shared, provisioned, or autoscale provisioned throughput support migration.
+Currently, SQL API and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
Accounts enabled with analytical storage and multiple-write regions are not supported for migration.
cosmos-db Create Mongodb Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-java.md
This step is optional. If you're interested in learning how the database resourc
The following snippets are all taken from the *Program.java* file.
-This console app uses the [MongoDB Java driver](https://docs.mongodb.com/ecosystem/drivers/java/).
+This console app uses the [MongoDB Java driver](https://www.mongodb.com/docs/drivers/java-drivers/).
* The DocumentClient is initialized.
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
description: This doc provides an overview of the prerequisites for a data migra
Previously updated : 08/26/2021 Last updated : 04/05/2022
This MongoDB pre-migration guide is part of series on MongoDB migration. The cri
## Overview of pre-migration
-It is critical to carry out certain planning and decision-making about your migration up-front before you actually move any data. This initial decision-making process is the ΓÇ£pre-migrationΓÇ¥. Your goal in pre-migration is to (1) ensure that you set up Azure Cosmos DB to fulfill your application's post-migration requirements, and (2) plan out how you will execute the migration.
+It's critical to carry out certain up-front planning and decision-making about your migration before you actually move any data. This initial decision-making process is the "pre-migration".
+
+Your goal in pre-migration is to:
+
+1. Ensure that you set up Azure Cosmos DB to fulfill your application's requirements, and
+2. Plan out how you will execute the migration.
Follow these steps to perform a thorough pre-migration
-* [Discover your existing MongoDB resources and create an artifact to track them](#pre-migration-discovery)
-* [Assess the readiness of your existing MongoDB resources for data migration](#pre-migration-assessment)
-* [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
-* [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
+
+1. [Discover your existing MongoDB resources and create a data estate spreadsheet to track them](#pre-migration-discovery)
+2. [Assess the readiness of your existing MongoDB resources for data migration](#pre-migration-assessment)
+3. [Map your existing MongoDB resources to new Azure Cosmos DB resources](#pre-migration-mapping)
+4. [Plan the logistics of migration process end-to-end, before you kick off the full-scale data migration](#execution-logistics)
Then, execute your migration in accordance with your pre-migration plan.
All of the above steps are critical for ensuring a successful migration.
When you plan a migration, we recommend that whenever possible you plan at the per-resource level.
+The [Database Migration Assistant](https://aka.ms/mongodma) (DMA) assists you with the [Discovery](#programmatic-discovery-using-the-database-migration-assistant) and [Assessment](#programmatic-assessment-using-the-database-migration-assistant) stages of planning.
+ ## Pre-migration discovery
-The first pre-migration step is resource discovery. In this step you attempt to make a comprehensive list of existing resources in your MongoDB data estate.
+The first pre-migration step is resource discovery.
+In this step, you need to create a **data estate migration spreadsheet**.
+
+* This sheet contains a comprehensive list of the existing resources (databases or collections) in your MongoDB data estate.
+* The purpose of this spreadsheet is to enhance your productivity and help you to plan migration from end-to-end.
+* We recommend that you extend this document and use it as a tracking document throughout the migration process.
+
+### Programmatic discovery using the Database Migration Assistant
+
+You may use the [Database Migration Assistant](https://aka.ms/mongodma) (DMA) to assist you with the discovery stage and create the data estate migration sheet programmatically.
+
+It's easy to [set up and run DMA](https://aka.ms/mongodma#how-to-run-the-dma) through an Azure Data Studio client. It can be run from any machine connected to your source MongoDB environment.
+
+You can use either one of the following DMA output files as the data estate migration spreadsheet:
-### Create a data estate migration spreadsheet
+* `workload_database_details.csv` - Gives a database-level view of the source workload. Columns in the file are: Database Name, Collection count, Document Count, Average Document Size, Data Size, Index Count and Index Size.
+* `workload_collection_details.csv` - Gives a collection-level view of the source workload. Columns in the file are: Database Name, Collection Name, Doc Count, Average Document Size, Data size, Index Count, Index Size and Index definitions.
-Create a **data estate migration spreadsheet** as a tracking document for your migration, using your preferred productivity software.
- * The purpose of this spreadsheet is to enhance your productivity and help you to plan migration from end-to-end.
- * The structure of the spreadsheet is up to you. The following bullet points provide some recommendations.
- * This spreadsheet should be structured as a record of your data estate resources, in list form.
- * Each row corresponds to a resource (database or collection).
- * Each column corresponds to a property of the resource; for now, you should at least have *name* and *data size (GB)* as columns, although ideally you can also collect information about the MongoDB version for each resource, in which case add a *Mongo version* column as well.
- * Initially, you will fill out this spreadsheet with a list of the existing resources in your MongoDB data estate. As you progress through this guide, you will build this spreadsheet into a tracking document for your end-to-end migration planning, adding columns as needed.
+Here's a sample database-level migration spreadsheet created by DMA:
+![Data estate spreadsheet example](./media/pre-migration-steps/data-estate-spreadsheet.png)
-### Discover existing MongoDB data estate resources
+### Manual discovery
-Using an appropriate discovery tool, identify the resources (databases, collections) in your existing MongoDB data estate, as comprehensively as possible.
+Alternatively, you can refer to the sample spreadsheet above and create a similar document yourself.
+
+* The spreadsheet should be structured as a record of your data estate resources, in list form.
+* Each row corresponds to a resource (database or collection).
+* Each column corresponds to a property of the resource; start with at least *name* and *data size (GB)* as columns.
+* As you progress through this guide, you'll build this spreadsheet into a tracking document for your end-to-end migration planning, adding columns as needed.
Here are some tools you can use for discovering resources; a minimal scripted alternative is sketched after this list:
- * [MongoDB Shell](https://www.mongodb.com/try/download/shell)
- * [MongoDB Compass](https://www.mongodb.com/try/download/compass)
+
+* [MongoDB Shell](https://www.mongodb.com/try/download/shell)
+* [MongoDB Compass](https://www.mongodb.com/try/download/compass)
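+
+If you prefer to script this step, here's a minimal sketch that writes the columns described above to a CSV file. It assumes network access to the source MongoDB instance; the connection string and output file name are placeholders:
+
+```python
+# Discovery sketch: list databases/collections and basic size stats into a CSV.
+# "<mongodb-connection-string>" is a placeholder for your source environment.
+import csv
+from pymongo import MongoClient
+
+client = MongoClient("<mongodb-connection-string>")
+
+with open("data_estate.csv", "w", newline="") as f:
+    writer = csv.writer(f)
+    writer.writerow(["Database Name", "Collection Name", "Doc Count",
+                     "Average Document Size", "Data Size", "Index Count", "Index Size"])
+    for db_name in client.list_database_names():
+        if db_name in ("admin", "local", "config"):
+            continue  # skip internal databases
+        db = client[db_name]
+        for coll_name in db.list_collection_names():
+            stats = db.command("collstats", coll_name)
+            writer.writerow([db_name, coll_name, stats.get("count"),
+                             stats.get("avgObjSize"), stats.get("size"),
+                             stats.get("nindexes"), stats.get("totalIndexSize")])
+```
+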
## Pre-migration assessment
-Second, as a prelude to planning your migration, assess the readiness of each resource in your data estate for migration.
+Second, as a prelude to planning your migration, assess the readiness of resources in your data estate for migration.
+
+Assessment involves finding out whether you're using the [features and syntax that are supported](./feature-support-42.md). It also includes making sure you're adhering to the [limits and quotas](../concepts-limits.md#per-account-limits). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning.
+
+### Programmatic assessment using the Database Migration Assistant
+
+[Database Migration Assistant](https://aka.ms/mongodma) (DMA) also assists you with the assessment stage of pre-migration planning.
-The primary factor impacting readiness is MongoDB version. Azure Cosmos DB currently supports MongoDB binary protocol versions 3.2, 3.6 and 4.0. Hopefully you have a column in your migration planning spreadsheet for *MongoDB version*. Step through you spreadsheet and highlight any resources which use incompatible MongoDB versions for Azure Cosmos DB.
+Refer to the section [Programmatic discovery using the Database Migration Assistant](#programmatic-discovery-using-the-database-migration-assistant) to learn how to set up and run DMA.
+
+The DMA notebook runs a few assessment rules against the resource list it gathers from source MongoDB. The assessment result lists the required and recommended changes needed to proceed with the migration.
+
+The results are printed as an output in the DMA notebook and saved to a CSV file - `assessment_result.csv`.
+
+> [!NOTE]
+> Database Migration Assistant is a preliminary utility meant to assist you with the pre-migration steps. It does not perform an end-to-end assessment.
+> In addition to running the DMA, we also recommend that you review [the supported features and syntax](./feature-support-42.md) and the [Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, and perform a proof of concept before the actual migration.
## Pre-migration mapping
Before you plan your Azure Cosmos DB data estate, make sure you understand the f
### Plan the Azure Cosmos DB data estate
-Figure out what Azure Cosmos DB resources you will create. This means stepping through your data estate migration spreadsheet and mapping each existing MongoDB resource to a new Azure Cosmos DB resource.
-* Anticipate that each MongoDB database will become an Azure Cosmos DB database
-* Anticipate that each MongoDB collection will become an Azure Cosmos DB collection
+Figure out what Azure Cosmos DB resources you'll create. This means stepping through your data estate migration spreadsheet and mapping each existing MongoDB resource to a new Azure Cosmos DB resource.
+
+* Anticipate that each MongoDB database will become an Azure Cosmos DB database.
+* Anticipate that each MongoDB collection will become an Azure Cosmos DB collection.
* Choose a naming convention for your Azure Cosmos DB resources. Barring any change in the structure of databases and collections, keeping the same resource names is usually a fine choice.
-* In MongoDB, sharding collections is optional. In Azure Cosmos DB, every collection is sharded.
-* *Do not assume that your MongoDB collection shard key becomes your Azure Cosmos DB collection shard key. Do not assume that your existing MongoDB data model/document structure is what you will employ on Azure Cosmos DB.*
+* Determine whether you'll be using sharded or unsharded collections in Cosmos DB. The unsharded collection limit is 20 GB. Sharding, on the other hand, helps achieve horizontal scale that is critical to the performance of many workloads.
+* If using sharded collections, *do not assume that your MongoDB collection shard key becomes your Azure Cosmos DB collection shard key. Do not assume that your existing MongoDB data model/document structure is what you'll employ on Azure Cosmos DB.*
* Shard key is the single most important setting for optimizing the scalability and performance of Azure Cosmos DB, and data modeling is the second most important. Both of these settings are immutable and cannot be changed once they are set; therefore it is highly important to optimize them in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information. * Azure Cosmos DB does not recognize certain MongoDB collection types such as capped collections. For these resources, just create normal Azure Cosmos DB collections. * Azure Cosmos DB has two collection types of its own - shared and dedicated throughput. Shared vs dedicated throughput is another critical, immutable decision which it is vital to make in the planning phase. Follow the guidance in the [Immutable decisions](#immutable-decisions) section for more information.
Figure out what Azure Cosmos DB resources you will create. This means stepping t
### Immutable decisions The following Azure Cosmos DB configuration choices cannot be modified or undone once you have created an Azure Cosmos DB resource; therefore it is important to get these right during pre-migration planning, before you kick off any migrations:
-* Follow [this guide](../partitioning-overview.md) to choose the best shard key. Partitioning, also known as Sharding, is a key point of consideration before migrating data. Azure Cosmos DB uses fully-managed partitioning to increase the capacity in a database to meet the storage and throughput requirements. This feature doesn't need the hosting or configuration of routing servers.
+* Refer to [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md) to choose the best shard key. Partitioning, also known as Sharding, is a key point of consideration before migrating data. Azure Cosmos DB uses fully-managed partitioning to increase the capacity in a database to meet the storage and throughput requirements. This feature doesn't need the hosting or configuration of routing servers.
* In a similar way, the partitioning capability automatically adds capacity and re-balances the data accordingly. For details and recommendations on choosing the right partition key for your data, please see the [Choosing a Partition Key article](../partitioning-overview.md#choose-partitionkey).
-* Follow [this guide](../modeling-data.md) to choose a data model
-* Follow [this guide](../optimize-cost-throughput.md#optimize-by-provisioning-throughput-at-different-levels) to choose between dedicated and shared throughput for each resource that you will migrate
-* [Here](../how-to-model-partition-example.md) is a real-world example of sharding and data modeling to aid you in your decision-making process
+* Follow the guide for [Data modeling in Azure Cosmos DB](../modeling-data.md) to choose a data model.
+* Follow [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md#optimize-by-provisioning-throughput-at-different-levels) to choose between dedicated and shared throughput for each resource that you will migrate.
+* [How to model and partition data on Azure Cosmos DB using a real-world example](../how-to-model-partition-example.md) walks through a real-world example of sharding and data modeling to aid you in your decision-making process.
### Cost of ownership
The following Azure Cosmos DB configuration choices cannot be modified or undone
### Estimating throughput
-* In Azure Cosmos DB, the throughput is provisioned in advance and is measured in Request Units (RU's) per second. Unlike VMs or on-premises servers, RUs are easy to scale up and down at any time. You can change the number of provisioned RUs instantly. For more information, see [Request units in Azure Cosmos DB](../request-units.md).
+* In Azure Cosmos DB, the throughput is provisioned in advance and is measured in Request Units (RUs) per second. Unlike VMs or on-premises servers, RUs are easy to scale up and down at any time. You can change the number of provisioned RUs instantly. For more information, see [Request units in Azure Cosmos DB](../request-units.md).
* You can use the [Azure Cosmos DB Capacity Calculator](https://cosmos.azure.com/capacitycalculator/) to determine the amount of Request Units based on your database account configuration, amount of data, document size, and required reads and writes per second.
Finally, now that you have a view of your existing data estate and a design for
Watch this video for an [overview and demo of the migration solutions](https://www.youtube.com/watch?v=WN9h80P4QJM) mentioned above. * Once you have chosen migration tools for each resource, the next step is to prioritize the resources you will migrate. Good prioritization can help keep your migration on schedule. A good practice is to prioritize migrating those resources which need the most time to be moved; migrating these resources first will bring the greatest progress toward completion. Furthermore, since these time-consuming migrations typically involve more data, they are usually more resource-intensive for the migration tool and therefore are more likely to expose any problems with your migration pipeline early on. This minimizes the chance that your schedule will slip due to any difficulties with your migration pipeline.
-* Plan how you will monitor the progress of migration once it has started. If you are coordinating your data migration effort among a team, plan a regular cadence of team syncs to so that you have a comprehensive view of how the high-priority migrations are going.
+* Plan how you will monitor the progress of migration once it has started. If you are coordinating your data migration effort among a team, plan a regular cadence of team syncs too, so that you have a comprehensive view of how the high-priority migrations are going.
### Supported migration scenarios
Given that you are migrating from a particular MongoDB version, the supported to
In the pre-migration phase, spend some time to plan what steps you will take toward app migration and optimization post-migration. * In the post-migration phase, you will execute a cutover of your application to use Azure Cosmos DB instead of your existing MongoDB data estate. * Make your best effort to plan out indexing, global distribution, consistency, and other *mutable* Azure Cosmos DB properties at a per resource level - however, these Azure Cosmos DB configuration settings *can* be modified later, so expect to make adjustments to these settings down the road. DonΓÇÖt let these aspects be a cause of analysis paralysis. You will apply these mutable configurations post-migration.
-* The best guide to post-migration can be found [here](post-migration-optimization.md).
+* For a post-migration guide, see [Post-migration optimization steps when using Azure Cosmos DB's API for MongoDB](post-migration-optimization.md).
## Next steps * Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md) * Migrate to Azure Cosmos DB API for MongoDB * [Offline migration using MongoDB native tools](tutorial-mongotools-cosmos-db.md)
cosmos-db Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/storage-explorer.md
You can use Azure Storage explorer to connect to Azure Cosmos DB. It lets you connect to Azure Cosmos DB accounts hosted on Azure and sovereign clouds from Windows, macOS, or Linux.
-Use the same tool to manage your different Azure entities in one place. You can manage Azure Cosmos DB entities, manipulate data, update stored procedures and triggers along with other Azure entities like storage blobs and queues. Azure Storage Explorer supports Cosmos accounts configured for SQL, MongoDB, Graph, and Table APIs.
+Use the same tool to manage your different Azure entities in one place. You can manage Azure Cosmos DB entities, manipulate data, update stored procedures and triggers along with other Azure entities like storage blobs and queues. Azure Storage Explorer supports Cosmos accounts configured for SQL, MongoDB, Gremlin, and Table APIs.
> [!NOTE]
-> The Azure Cosmos DB integration with Storage Explorer has been deprecated. Any existing functionality will not be removed for a minimum of one year from this release. You should use the [Azure Portal](https://portal.azure.com/), [Azure Portal desktop app](https://portal.azure.com/App/Download) or the standalone [Azure Cosmos DB Explorer](data-explorer.md) instead. The alternative options contain many new features that arenΓÇÖt currently supported in Storage Explorer.
+> The Azure Cosmos DB integration with Storage Explorer has been deprecated. Any existing functionality will not be removed for a minimum of one year from this release. You should use the [Azure Portal](https://portal.azure.com/), [Azure Portal desktop app](https://portal.azure.com/App/Download) or the standalone [Azure Cosmos DB Explorer](data-explorer.md) instead. The alternative options contain many new features that aren't currently supported in Storage Explorer.
## Prerequisites
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
description: You use Cost Management + Billing features to conduct billing admin
keywords: Previously updated : 10/07/2021 Last updated : 04/08/2022
The Azure portal currently supports the following types of billing accounts:
Although related, billing differs from cost management. Billing is the process of invoicing customers for goods or services and managing the commercial relationship.
-Cost Management shows organizational cost and usage patterns with advanced analytics. Reports in Cost Management show the usage-based costs consumed by Azure services and third-party Marketplace offerings. Costs are based on negotiated prices and factor in reservation and Azure Hybrid Benefit discounts. Collectively, the reports show your internal and external costs for usage and Azure Marketplace charges. Other charges, such as reservation purchases, support, and taxes aren't yet shown in reports. The reports help you understand your spending and resource use and can help find spending anomalies. Predictive analytics are also available. Cost Management uses Azure management groups, budgets, and recommendations to show clearly how your expenses are organized and how you might reduce costs.
+Cost Management shows organizational cost and usage patterns with advanced analytics. Reports in Cost Management show the usage-based costs consumed by Azure services and third-party Marketplace offerings. Costs are based on negotiated prices and factor in reservation and Azure Hybrid Benefit discounts. Collectively, the reports show your internal and external costs for usage and Azure Marketplace charges. **Other charges, such as reservation purchases, support, and taxes, aren't yet shown in reports**. The reports help you understand your spending and resource use and can help find spending anomalies. Predictive analytics are also available. Cost Management uses Azure management groups, budgets, and recommendations to show clearly how your expenses are organized and how you might reduce costs.
You can use the Azure portal or various APIs for export automation to integrate cost data with external systems and processes. Automated billing data export and scheduled reports are also available.
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
Title: Allocate Azure costs
description: This article explains how create cost allocation rules to distribute costs of subscriptions, resource groups, or tags to others. Previously updated : 10/07/2021 Last updated : 04/08/2022
Currently, cost allocation is supported in Cost Management by Cost analysis, bud
The following items are currently unsupported by the cost allocation public preview: -- Data exposed by the [Usage Details](/rest/api/consumption/usagedetails/list) API - Billing subscriptions area - [Cost Management Power BI App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp) - [Power BI Desktop connector](/power-bi/connect-data/desktop-connect-azure-cost-management)
+Cost allocation data exposed by the [Usage Details](/rest/api/consumption/usagedetails/list) API is supported by the 2021-10-01 version or later. However, cost allocation data results might be empty if you're using an unsupported API version or if you don't have any cost allocation rules.
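+
+As an illustration (not part of this article), the following sketch calls the Usage Details REST API with the 2021-10-01 version so that cost allocation data is included; the subscription ID is a placeholder, and `DefaultAzureCredential` picks up whatever Azure login is available:
+
+```python
+# Sketch: query Usage Details with an API version that supports cost allocation data.
+import requests
+from azure.identity import DefaultAzureCredential
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
+url = ("https://management.azure.com/subscriptions/<subscription-id>"
+       "/providers/Microsoft.Consumption/usageDetails")
+response = requests.get(
+    url,
+    params={"api-version": "2021-10-01"},
+    headers={"Authorization": f"Bearer {token.token}"})
+response.raise_for_status()
+print(len(response.json().get("value", [])), "usage detail records returned")
+```
+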
## Next steps
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/spending-limit.md
tags: billing
Previously updated : 11/29/2021 Last updated : 04/08/2022
Custom spending limits aren't available.
![Marketplace purchase warning](./media/spending-limit/marketplace-warning01.png)
+## Troubleshoot spending limit banner
+
+If the spending limit banner doesn't appear, you can manually navigate to your subscription's URL.
+
+1. Ensure that you've navigated to the correct tenant/directory in the Azure portal.
+1. Navigate to `https://portal.azure.com/#blade/Microsoft_Azure_Billing/RemoveSpendingLimitBlade/subscriptionId/11111111-1111-1111-1111-111111111111` and replace the example subscription ID with your subscription ID.
+
+The spending limit banner should appear.
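+
+If you manage several subscriptions, a small script can build these URLs from the pattern above; this is only a convenience sketch with placeholder subscription IDs:
+
+```python
+# Build the remove-spending-limit blade URL for each subscription ID (placeholders).
+subscription_ids = ["11111111-1111-1111-1111-111111111111"]
+for sub_id in subscription_ids:
+    print("https://portal.azure.com/#blade/Microsoft_Azure_Billing"
+          f"/RemoveSpendingLimitBlade/subscriptionId/{sub_id}")
+```
+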
+ ## Need help? Contact us. If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Settings specific to Azure Synapse Analytics are available in the **Settings** t
**Pre and Post SQL scripts**: Enter multi-line SQL scripts that will execute before (pre-processing) and after (post-processing) data is written to your Sink database > [!TIP] > 1. It's recommended to break single batch scripts with multiple commands into multiple batches.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
You can parameterize the key column used here for updating your target Azure SQL
**Pre and Post SQL scripts**: Enter multi-line SQL scripts that will execute before (pre-processing) and after (post-processing) data is written to your Sink database > [!TIP] > 1. It's recommended to break single batch scripts with multiple commands into multiple batches.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 09/28/2021 Last updated : 04/01/2022 # Azure Data Factory Managed Virtual Network [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article will explain Managed Virtual Network and Managed Private endpoints in Azure Data Factory.
+This article explains managed virtual networks and managed private endpoints in Azure Data Factory.
## Managed virtual network
-When you create an Azure Integration Runtime (IR) within Azure Data Factory Managed Virtual Network (VNET), the integration runtime will be provisioned with the managed Virtual Network and will leverage private endpoints to securely connect to supported data stores.
+When you create an Azure Integration Runtime (IR) within Azure Data Factory managed virtual network (VNET), the integration runtime will be provisioned with the managed virtual network and will leverage private endpoints to securely connect to supported data stores.
-Creating an Azure IR within managed Virtual Network ensures that data integration process is isolated and secure.
+Creating an Azure IR within managed virtual network ensures that data integration process is isolated and secure.
-Benefits of using Managed Virtual Network:
+Benefits of using managed virtual network:
-- With a Managed Virtual Network, you can offload the burden of managing the Virtual Network to Azure Data Factory. You don't need to create a subnet for Azure Integration Runtime that could eventually use many private IPs from your Virtual Network and would require prior network infrastructure planning.
+- With a managed virtual network, you can offload the burden of managing the virtual network to Azure Data Factory. You don't need to create a subnet for Azure Integration Runtime that could eventually use many private IPs from your virtual network and would require prior network infrastructure planning.
- It does not require deep Azure networking knowledge to do data integrations securely. Instead getting started with secure ETL is much simplified for data engineers. -- Managed Virtual Network along with Managed private endpoints protects against data exfiltration.
+- Managed virtual network along with Managed private endpoints protects against data exfiltration.
> [!IMPORTANT]
->Currently, the managed Virtual Network is only supported in the same region as Azure Data Factory region.
+>Currently, the managed virtual network is only supported in the same region as the Azure Data Factory region.
> [!Note] >Existing global Azure integration runtime can't switch to Azure integration runtime in Azure Data Factory managed virtual network and vice versa. ## Managed private endpoints
-Managed private endpoints are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. Azure Data Factory manages these private endpoints on your behalf.
+Managed private endpoints are private endpoints created in the Azure Data Factory managed virtual network establishing a private link to Azure resources. Azure Data Factory manages these private endpoints on your behalf.
:::image type="content" source="./media/tutorial-copy-data-portal-private/new-managed-private-endpoint.png" alt-text="New Managed private endpoint"::: Azure Data Factory supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
-When you use a private link, traffic between your data stores and managed Virtual Network traverses entirely over the Microsoft backbone network. Private Link protects against data exfiltration risks. You establish a private link to a resource by creating a private endpoint.
+When you use a private link, traffic between your data stores and managed virtual network traverses entirely over the Microsoft backbone network. Private Link protects against data exfiltration risks. You establish a private link to a resource by creating a private endpoint.
-Private endpoint uses a private IP address in the managed Virtual Network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. Learn more about [private links and private endpoints](../private-link/index.yml).
+Private endpoint uses a private IP address in the managed virtual network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. Learn more about [private links and private endpoints](../private-link/index.yml).
> [!NOTE] > It's recommended that you create Managed private endpoints to connect to all your Azure data sources.
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${integrationRuntimeReso
## Limitations and known issues ### Supported data sources
-The following data sources have native Private Endpoint support and can be connected through private link from ADF Managed Virtual Network.
+The following data sources have native Private Endpoint support and can be connected through private link from ADF managed virtual network.
- Azure Blob Storage (not including Storage account V1) - Azure Cognitive Search - Azure Cosmos DB MongoDB API
The following data sources have native Private Endpoint support and can be conne
> You still can access all data sources that are supported by Data Factory through public network. > [!NOTE]
-> Because Azure SQL Managed Instance native Private Endpoint in public preview, you can access it from managed Virtual Network using Private Linked Service and Load Balancer. Please see [How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
+> Because Azure SQL Managed Instance native Private Endpoint is in public preview, you can access it from a managed virtual network using Private Linked Service and Load Balancer. Please see [How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
### On-premises data sources
-To access on-premises data sources from managed Virtual Network using Private Endpoint, please see this tutorial [How to access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
+To access on-premises data sources from managed virtual network using Private Endpoint, please see this tutorial [How to access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
-### Azure Data Factory managed Virtual Network is available in the following Azure regions
-Generally, managed Virtual network is available to all Azure Data Factory regions, except:
+### Azure Data Factory managed virtual network is available in the following Azure regions
+Generally, managed virtual network is available in all Azure Data Factory regions, except:
- South India
-### Outbound communications through public endpoint from ADF Managed Virtual Network
+### Outbound communications through public endpoint from ADF managed virtual network
- All ports are opened for outbound communications. ### Linked Service creation of Azure Key Vault -- When you create a Linked Service for Azure Key Vault, there is no Azure Integration Runtime reference. So you can't create Private Endpoint during Linked Service creation of Azure Key Vault. But when you create Linked Service for data stores which references Azure Key Vault Linked Service and this Linked Service references Azure Integration Runtime with Managed Virtual Network enabled, then you are able to create a Private Endpoint for the Azure Key Vault Linked Service during the creation.
+- When you create a Linked Service for Azure Key Vault, there is no Azure Integration Runtime reference, so you can't create a Private Endpoint during Linked Service creation of Azure Key Vault. But when you create a Linked Service for data stores that references the Azure Key Vault Linked Service, and that Linked Service references an Azure Integration Runtime with managed virtual network enabled, then you are able to create a Private Endpoint for the Azure Key Vault Linked Service during the creation.
- **Test connection** operation for Linked Service of Azure Key Vault only validates the URL format, but doesn't do any network operation. - The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for Azure Key Vault.
Generally, managed Virtual network is available to all Azure Data Factory region
## Next steps -- Tutorial: [Build a copy pipeline using managed Virtual Network and private endpoints](tutorial-copy-data-portal-private.md) -- Tutorial: [Build mapping dataflow pipeline using managed Virtual Network and private endpoints](tutorial-data-flow-private.md)
+- Tutorial: [Build a copy pipeline using managed virtual network and private endpoints](tutorial-copy-data-portal-private.md)
+- Tutorial: [Build mapping dataflow pipeline using managed virtual network and private endpoints](tutorial-data-flow-private.md)
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Here is the JSON format for defining a Script activity:
] }, ...
- ],
- "scriptReference":{
- "linkedServiceName":{
- "referenceName": "<name>",
- "type": "<LinkedServiceReference>"
- },
- "path": "<file path>",
- "parameters":[
- {
- "name": "<name>",
- "value": "<value>",
- "type": "<type>",
- "direction": "<Input> or <Output> or <InputOutput> or <ReturnValue>",
- "size": 256
- },
- ...
+ ],
+ ...
] }, "logSettings": {
data-share Move To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/move-to-new-region.md
+
+ Title: Move Azure Data Share Accounts to another Azure region using the Azure portal
+description: Use Azure Resource Manager template to move Azure Data Share account from one Azure region to another using the Azure portal.
++ Last updated : 03/17/2022+++
+#Customer intent: As an Azure Data Share User, I want to move my Data Share account to a new region.
++
+# Move an Azure Data Share Account to another region using the Azure portal
+
+There are various scenarios in which you'd want to move your existing Azure Data Share accounts from one region to another. For example, you may want to create a Data Share Account for testing in a new region. You may also want to move a Data Share Account to another region as part of disaster recovery planning.
+
+Azure Data Share accounts can't be moved from one region to another. You can, however, use an Azure Resource Manager template to export the existing Data Share account, modify the parameters to match the destination region, and then deploy the template to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
++
+## Prerequisites
+
+- Make sure that the Azure Data Share account is in the Azure region from which you want to move.
+- Azure Data Share accounts can't be moved between regions. You'll have to re-add datasets to sent shares and resend invitations to Data Share recipients. For any received shares, you will need to request that the data provider send you a new invitation.
++
+## Prepare and move
+The following steps show how to deploy a new Data Share account using a Resource Manager template via the portal. If you'd rather script the template edits than use the online editor, see the sketch after this procedure.
++
+### Export the template and deploy from the portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **All resources** and then select your Data Share account.
+1. Select **Automation** > **Export template**.
+1. Choose **Deploy** in the **Export template** blade.
+1. Select **Edit parameters** to open the **parameters.json** file in the online editor.
+1. To edit the parameter of the Data Share account name, change the property under **parameters** > **value** from the source Data Share Account's name to the name of the Data Share Account you want to create in the new region. Ensure the name is in quotes:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "accounts_my_datashare_account_name": {
+ "value": "<target-datashare-account-name>"
+ }
+ }
+ }
+ ```
+
+1. Select **Save** in the editor.
+
+1. Select **Edit template** to open the **template.json** file in the online editor.
+
+1. To edit the target region where the Data Share account will be moved, change the **location** property under **resources** in the online editor:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.DataShare/accounts",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('accounts_my_datashare_account_name')]",
+ "location": "<target-region>",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {}
+ }
+ ]
+ ```
+
+1. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces; for example, **Central US** = **centralus**.
+
+1. You can also change other parameters in the template if you choose. This is optional depending on your requirements:
+
+ * **Sent Shares** - You can edit which Sent Shares are deployed into the target Data Share Account by adding or removing Shares from the **resources** section in the **template.json** file:
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.DataShare/accounts/shares",
+ "apiVersion": "2021-08-01",
+ "name": "[concat(parameters('accounts_my_datashare_account_name'), '/test_sent_share')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.DataShare/accounts', parameters('accounts_my_datashare_account_name'))]"
+ ],
+ "properties": {
+ "shareKind": "CopyBased"
+ }
+ },
+ ]
+ ```
+
+ * **Sent Share Invitations** - You can edit which Invitations are deployed into the target Data Share account by adding or removing Invitations from the resources section in the **template.json** file.
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.DataShare/accounts/shares/invitations",
+ "apiVersion": "2021-08-01",
+ "name": "[concat(parameters('accounts_my_datashare_account_name'), '/test_sent_share/blob_snapshot_jsmith_microsoft_com')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.DataShare/accounts/shares', parameters('accounts_my_datashare_account_name'), 'test_sent_share')]",
+ "[resourceId('Microsoft.DataShare/accounts', parameters('accounts_my_datashare_account_name'))]"
+ ],
+ "properties": {
+ "targetEmail": "jsmith@microsoft.com"
+ }
+ }
+ ]
+ ```
+
+ * **Datasets** - You can edit which datasets are deployed into the target Data Share account by adding or removing datasets from the resources section in the **template.json** file. Below is an example of a BlobFolder dataset.
+
+ * If you are also moving the resources contained in the datasets to a new region, you will have to remove the datasets from the **template.json** file and manually re-add them once the Data Share account and resources referenced in the datasets are moved to the new region.
+
+ >[!IMPORTANT]
+ >* Datasets will fail to deploy if the new Data Share account you are deploying will not automatically inherit required permissions to access the datasets. The required permissions depend on the dataset type. See here for required permissions for [Azure Synapse Analytics and Azure SQL Database datasets](how-to-share-from-sql.md#prerequisites-for-sharing-from-azure-sql-database-or-azure-synapse-analytics-formerly-azure-sql-dw). See here for required permissions for [Azure Storage and Azure Data Lake Gen 1 and Gen2 datasets](how-to-share-from-storage.md#prerequisites-for-the-source-storage-account).
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.DataShare/accounts/shares/dataSets",
+ "apiVersion": "2021-08-01",
+ "name": "[concat(parameters('accounts_my_datashare_account_name'), '/blobpath/directory')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.DataShare/accounts/shares', parameters('accounts_my_datashare_account_name'), 'blobpath')]",
+ "[resourceId('Microsoft.DataShare/accounts', parameters('accounts_my_datashare_account_name'))]"
+ ],
+ "kind": "BlobFolder",
+ "properties": {
+ "containerName": "<container-name>",
+ "prefix": "<prefix>"
+ "subscriptionId": "<subscription-id>",
+ "resourceGroup": "<resource-group-name>",
+ "storageAccountName": "<storage-account-name>"
+ }
+ }
+ ]
+ ```
+
+
+1. Select **Save** in the online editor.
+
+1. Under the **Project details** section, select the **Subscription** dropdown to choose the subscription where the target Data Share account will be deployed.
+
+1. Select the **Resource group** dropdown to choose the resource group where the target Data Share account will be deployed. You can select **Create new** to create a new resource group for the target Data Share account.
+
+1. Verify that the **Location** field is set to the target location you want the Data Share account to be deployed to.
+
+1. Verify under **Instance details** that the name matches the name that you entered in the parameters editor above.
+
+1. Select **Review + Create** to advance to the next page.
+
+1. Review the terms and select **Create** to begin the deployment.
+
+1. Once the deployment finishes, go to the newly created Data Share account.
+
+1. If you were unable to transfer datasets using the template, you will need to re-add datasets to all of your Sent Shares.
+
+1. Resend invitations to all recipients of your sent shares and alert the consumers of your shares that they will need to reaccept and remap the data you are sharing with them.
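+
+If you prefer to deploy the edited template with the Azure CLI instead of the portal steps above, the following is a minimal sketch. It assumes you've exported the edited **template.json** to your local machine; the resource group, account name, and region are placeholders for your own values:
+
+```azurecli
+# Create a resource group in the target region (placeholder names and region).
+az group create --name my-target-rg --location centralus
+
+# Deploy the edited template, overriding the account name parameter if needed.
+az deployment group create \
+  --resource-group my-target-rg \
+  --template-file template.json \
+  --parameters accounts_my_datashare_account_name=my-datashare-account
+```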
+
+## Verify
+
+### Sent shares
+- Confirm that all sent shares in your source Data Share account are now present in the target Data Share account.
+- For each sent share, confirm that all data sets from the source share are now present in the target share. If they are not, you will need to manually re-add them.
+- For all share subscriptions in each sent share in your source account, confirm that you have sent invitations to all recipients of the shares so that they will be able to access the data again.
+
+### Received shares
+- Confirm that you have requested new invitations from data providers for all received shares from your source data share account.
+- Once you receive these invitations, you will need to remap the data sets and run snapshots to access the data again.
+
+## Clean up source resources
+
+To complete the move of the Data Share account, delete the source Data Share account. To do so, select the resource group from your dashboard in the Azure portal, navigate to the Data Share account you wish to delete and select **Delete** at the top of the page.
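+
+As an alternative to the portal, a minimal Azure CLI sketch for deleting the source account is shown below. It uses the generic `az resource delete` command with placeholder names; `Microsoft.DataShare/accounts` is the same resource type used in the template above:
+
+```azurecli
+# Delete the source Data Share account (placeholder names).
+az resource delete \
+  --resource-group my-source-rg \
+  --resource-type Microsoft.DataShare/accounts \
+  --name my-source-datashare-account
+```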
+
+## Next steps
+
+In this tutorial, you moved an Azure Data Share account from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
++
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
To troubleshoot connectivity issues to the Azure Storage account:
1. On the storage account **Overview** page, select **Firewalls and virtual networks** in the left navigation. 1. Ensure that **Firewalls and virtual networks** is set to **All networks**. Or, if the **Selected networks** option is selected, make sure the lab's virtual networks used to create VMs are added to the list.
-For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security.md).
+For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).
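+
+If you manage the storage account's network rules from the command line instead of the portal, a minimal Azure CLI sketch for adding a lab virtual network to the allowed list might look like the following (the account, network, and subnet names are placeholders):
+
+```azurecli
+# Allow the lab's virtual network and subnet through the storage account firewall (placeholder names).
+az storage account network-rule add \
+  --resource-group my-lab-rg \
+  --account-name mylabstorage \
+  --vnet-name my-lab-vnet \
+  --subnet default
+```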
## Troubleshoot artifact failures from the lab VM
If you need more help, try one of the following support channels:
- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums). - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. - Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.-
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
In the [next tutorial](tutorial-use-custom-lab.md), lab users, such as developer
- To create a lab, you need at least [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role in an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- To add users to a lab, you must have [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles.md#owner) role in the subscription the lab is in.
+- To add users to a lab, you must have [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the subscription the lab is in.
## Create a lab
From the lab **Overview** page, you can select **Claimable virtual machines** in
## Add a user to the DevTest Labs User role
-To add users to a lab, you must be a [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles.md#owner) of the subscription the lab is in. For more information, see [Add lab owners, contributors, and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
+To add users to a lab, you must be a [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) of the subscription the lab is in. For more information, see [Add lab owners, contributors, and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
1. On the lab's **Overview** page, under **Settings**, select **Configuration and policies**.
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md
To connect to a Windows machine through Remote Desktop Protocol (RDP), follow th
:::image type="content" source="./media/tutorial-use-custom-lab/remote-computer-verification.png" alt-text="Screenshot of remote computer verification.":::
-Once you connect to the VM, you can use it to do your work. You have [Owner](/azure/role-based-access-control/built-in-roles.md#owner) role on all lab VMs you claim or create, unless you unclaim them.
+Once you connect to the VM, you can use it to do your work. You have [Owner](/azure/role-based-access-control/built-in-roles#owner) role on all lab VMs you claim or create, unless you unclaim them.
## Unclaim a lab VM
hdinsight Apache Kafka Spark Structured Streaming Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
description: Learn how to use Apache Spark Structured Streaming to read data fro
Previously updated : 11/18/2019 Last updated : 04/08/2022 # Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB
hdinsight Apache Domain Joined Run Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-hive.md
Title: Apache Hive policies in Apache Ranger - Azure HDInsight
description: Learn how to configure Apache Ranger policies for Hive in an Azure HDInsight service with Enterprise Security Package. Previously updated : 11/27/2019 Last updated : 04/08/2022 # Configure Apache Hive policies in HDInsight with Enterprise Security Package
To test the second policy (read-hivesampletable-devicemake), you created in the
* For running Hive queries using SSH on HDInsight clusters with ESP, see [Use SSH with HDInsight](../hdinsight-hadoop-linux-use-ssh-unix.md#authentication-domain-joined-hdinsight). * For Connecting Hive using Hive JDBC, see [Connect to Apache Hive on Azure HDInsight using the Hive JDBC driver](../hadoop/apache-hadoop-connect-hive-jdbc-driver.md) * For connecting Excel to Hadoop using Hive ODBC, see [Connect Excel to Apache Hadoop with the Microsoft Hive ODBC drive](../hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md)
-* For connecting Excel to Hadoop using Power Query, see [Connect Excel to Apache Hadoop by using Power Query](../hadoop/apache-hadoop-connect-excel-power-query.md)
+* For connecting Excel to Hadoop using Power Query, see [Connect Excel to Apache Hadoop by using Power Query](../hadoop/apache-hadoop-connect-excel-power-query.md)
hdinsight Hdinsight Use Oozie Domain Joined Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-use-oozie-domain-joined-clusters.md
description: Secure Apache Oozie workflows using the Azure HDInsight Enterprise
Previously updated : 05/14/2020 Last updated : 04/08/2022 # Run Apache Oozie in Azure HDInsight clusters with Enterprise Security Package
hdinsight Hbase Troubleshoot Phoenix Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-connectivity.md
Title: Apache Phoenix connectivity issues in Azure HDInsight
description: Connectivity issues between Apache HBase and Apache Phoenix in Azure HDInsight Previously updated : 08/14/2019 Last updated : 04/08/2022 # Scenario: Apache Phoenix connectivity issues in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-applications.md
The following list shows the published applications:
|[StreamSets Data Collector for HDInsight Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/streamsets.streamsets-data-collector-hdinsight) |Hadoop,HBase,Spark,Kafka |StreamSets Data Collector is a lightweight, powerful engine that streams data in real time. Use Data Collector to route and process data in your data streams. It comes with a 30 day trial license. | |[Trifacta Wrangler Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/trifactainc1587522950142.trifactaazure) |Hadoop, Spark,HBase |Trifacta Wrangler Enterprise for HDInsight supports enterprise-wide data wrangling for any scale of data. The cost of running Trifacta on Azure is a combination of Trifacta subscription costs plus the Azure infrastructure costs for the virtual machines. | |[Unifi Data Platform](https://www.crunchbase.com/organization/unifi-software) |Hadoop,HBase,Storm,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
-|[Unraveldata APM](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel-app) |Spark |Unravel Data app for HDInsight Spark cluster. |
The instructions provided in this article use Azure portal. You can also export the Azure Resource Manager template from the portal or obtain a copy of the Resource Manager template from vendors, and use Azure PowerShell and Azure Classic CLI to deploy the template. See [Create Apache Hadoop clusters on HDInsight using Resource Manager templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md).
hdinsight Hdinsight Hadoop Add Hive Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-hive-libraries.md
description: Learn how to add Apache Hive libraries (jar files) to an HDInsight
Previously updated : 02/14/2020 Last updated : 04/08/2022 # Add custom Apache Hive libraries when creating your HDInsight cluster
hdinsight Hdinsight Phoenix In Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-phoenix-in-hdinsight.md
description: Overview of Apache Phoenix
Previously updated : 12/17/2019 Last updated : 04/08/2022 # Apache Phoenix in Azure HDInsight
hdinsight Interactive Query Troubleshoot Zookeeperhiveclientexception Hiveserver Configs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-zookeeperhiveclientexception-hiveserver-configs.md
Title: Apache Hive Zeppelin Interpreter error - Azure HDInsight
description: The Apache Zeppelin Hive JDBC Interpreter is pointing to the wrong URL in Azure HDInsight Previously updated : 07/30/2019 Last updated : 04/08/2022 # Scenario: Apache Hive Zeppelin Interpreter gives a Zookeeper error in Azure HDInsight
The Zeppelin Hive JDBC Interpreter is pointing to the wrong URL.
## Next steps
hdinsight Apache Kafka Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-scalability.md
description: Learn how to configure managed disks for Apache Kafka cluster on Az
Previously updated : 12/09/2019 Last updated : 04/08/2022 # Configure storage and scalability for Apache Kafka on HDInsight
hdinsight Apache Spark Connect To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-connect-to-sql-database.md
description: Learn how to set up a connection between HDInsight Spark cluster an
Previously updated : 04/20/2020 Last updated : 04/08/2022 # Use HDInsight Spark cluster to read and write data to Azure SQL Database
iot-develop Concepts Model Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-discovery.md
Title: Use IoT Plug and Play models in a solution | Microsoft Docs description: As a solution builder, learn about how you can use IoT Plug and Play models in your IoT solution.-+ Last updated 07/23/2020
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Title: Control access to IoT Hub using SAS tokens | Microsoft Docs description: How to control access to IoT Hub for device apps and back-end apps using shared access signature tokens.-+ -+
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Title: Azure IoT Hub cloud-to-device options | Microsoft Docs description: Developer guide - guidance on when to use direct methods, device twin's desired properties, or cloud-to-device messages for cloud-to-device communications. -+ -+
iot-hub Iot Hub Devguide D2c Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-d2c-guidance.md
Title: Azure IoT Hub device-to-cloud options | Microsoft Docs description: Developer guide - guidance on when to use device-to-cloud messages, reported properties, or file upload for cloud-to-device communications. -+ -+
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Title: Understand the Azure IoT Hub identity registry | Microsoft Docs description: Developer guide - description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk.--++
iot-hub Iot Hub Devguide Messages C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-c2d.md
Title: Understand Azure IoT Hub cloud-to-device messaging | Microsoft Docs description: This developer guide discusses how to use cloud-to-device messaging with your IoT hub. It includes information about the message life cycle and configuration options.-+ -+
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
Title: Understand the Azure IoT Hub built-in endpoint | Microsoft Docs description: Developer guide - describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages.-+ -+
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
Title: Understand Azure IoT Hub custom endpoints | Microsoft Docs description: Developer guide - using routing queries to route device-to-cloud messages to custom endpoints.-+ -+
iot-hub Iot Hub Devguide Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messaging.md
Title: Understand Azure IoT Hub messaging | Microsoft Docs description: Developer guide - device-to-cloud and cloud-to-device messaging with IoT Hub. Includes information about message formats and supported communications protocols.-+ -+
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs description: Links to the Azure IoT Hub SDKs which you can use to build device apps and back-end apps.-+ -+
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
Title: Access control and security for IoT Hub | Microsoft Docs description: Overview on how to control access to IoT Hub, includes links to depth articles on AAD integration and SAS options.-+ -+
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide.md
Title: Developer guide for Azure IoT Hub | Microsoft Docs description: The Azure IoT Hub developer guide includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, and messaging.-+ -+
iot-hub Iot Hub Get Started Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-get-started-physical.md
Title: Get started connecting physical devices to Azure IoT Hub | Microsoft Docs description: Learn how to connect physical devices and boards to Azure IoT Hub. Your devices can send telemetry to IoT Hub and IoT Hub can monitor and manage your devices.-+ keywords: azure iot hub tutorial Last updated 06/14/2021-+
iot-hub Iot Hub Java Java C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-c2d.md
Title: Cloud-to-device messages with Azure IoT Hub (Java) | Microsoft Docs description: How to send cloud-to-device messages to a device from an Azure IoT hub using the Azure IoT SDKs for Java. You modify a simulated device app to receive cloud-to-device messages and modify a back-end app to send the cloud-to-device messages.-+ -+ ms.devlang: java
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md
Title: Get started with Azure IoT Hub device management (Java) | Microsoft Docs description: How to use Azure IoT Hub device management to initiate a remote device reboot. You use the Azure IoT device SDK for Java to implement a simulated device app that includes a direct method and the Azure IoT service SDK for Java to implement a service app that invokes the direct method.-+ -+ ms.devlang: java
iot-hub Iot Hub Java Java File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-file-upload.md
Title: Upload files from devices to Azure IoT Hub with Java | Microsoft Docs description: How to upload files from a device to the cloud using Azure IoT device SDK for Java. Uploaded files are stored in an Azure storage blob container.-+ -+ ms.devlang: java
iot-hub Iot Hub Java Java Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-schedule-jobs.md
Title: Schedule jobs with Azure IoT Hub (Java) | Microsoft Docs description: How to schedule an Azure IoT Hub job to invoke a direct method and set a desired property on multiple devices. You use the Azure IoT device SDK for Java to implement the simulated device apps and the Azure IoT service SDK for Java to implement a service app to run the job.-+ -+ ms.devlang: java
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
Title: Get started with Azure IoT Hub device twins (Java) | Microsoft Docs description: How to use Azure IoT Hub device twins to add tags and then use an IoT Hub query. You use the Azure IoT device SDK for Java to implement the device app and the Azure IoT service SDK for Java to implement a service app that adds the tags and runs the IoT Hub query.-+ -+ ms.devlang: java
iot-hub Iot Hub Node Node C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-c2d.md
Title: Cloud-to-device messages with Azure IoT Hub (Node) | Microsoft Docs description: How to send cloud-to-device messages to a device from an Azure IoT hub using the Azure IoT SDKs for Node.js. You modify a simulated device app to receive cloud-to-device messages and modify a back-end app to send the cloud-to-device messages.-+ -+ ms.devlang: javascript
iot-hub Iot Hub Node Node Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-device-management-get-started.md
Title: Get started with Azure IoT Hub device management (Node) | Microsoft Docs description: How to use IoT Hub device management to initiate a remote device reboot. You use the Azure IoT SDK for Node.js to implement a simulated device app that includes a direct method and a service app that invokes the direct method.-+ -+
iot-hub Iot Hub Node Node File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-file-upload.md
Title: Upload files from devices to Azure IoT Hub with Node | Microsoft Docs description: How to upload files from a device to the cloud using Azure IoT device SDK for Node.js. Uploaded files are stored in an Azure storage blob container.-+ -+ ms.devlang: javascript
iot-hub Iot Hub Node Node Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-module-twin-getstarted.md
Title: Start with Azure IoT Hub module identity & module twin (Node.js) description: Learn how to create module identity and update module twin using IoT SDKs for Node.js.--++ ms.devlang: javascript
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
Title: Schedule jobs with Azure IoT Hub (Node) | Microsoft Docs description: How to schedule an Azure IoT Hub job to invoke a direct method on multiple devices. You use the Azure IoT SDKs for Node.js to implement the simulated device apps and a service app to run the job.-+ -+ ms.devlang: javascript
iot-hub Iot Hub Raspberry Pi Kit C Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
Title: Connect Raspberry Pi to Azure IoT Hub using C | Microsoft Docs description: Learn how to setup and connect Raspberry Pi to Azure IoT Hub for Raspberry Pi to send data to the Azure cloud platform-+ ms.devlang: c Last updated 06/14/2021-+
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
Title: Connect Raspberry Pi to Azure IoT Hub in the cloud (Node.js) description: Learn how to set up and connect Raspberry Pi to Azure IoT Hub for Raspberry Pi to send data to the Azure cloud platform in this tutorial.-+ keywords: azure iot raspberry pi, raspberry pi iot hub, raspberry pi send data to cloud, raspberry pi to cloud
ms.devlang: javascript Last updated 02/22/2022-+
iot-hub Iot Hub Raspberry Pi Web Simulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started.md
Title: Connect Raspberry Pi web simulator to Azure IoT Hub (Node.js) description: Connect Raspberry Pi web simulator to Azure IoT Hub for Raspberry Pi to send data to the Azure cloud.-+ keywords: raspberry pi simulator, azure iot raspberry pi, raspberry pi iot hub, raspberry pi send data to cloud, raspberry pi to cloud ms.devlang: javascript Last updated 05/27/2021-+
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
Title: Azure IoT Hub scaling | Microsoft Docs description: How to scale your IoT hub to support your anticipated message throughput and desired features. Includes a summary of the supported throughput for each tier and options for sharding.-+ Last updated 06/28/2019-+ # Choose the right IoT Hub tier for your solution
The difference in supported capabilities between the basic and standard tiers of
| API | Basic tier | Free/Standard tier | | | - | - | | [Delete device](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-deletedevice) | Yes | Yes |
-| [Get device](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-getdevice) | Yes | Yes |
-| [Delete module](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-deletemodule) | Yes | Yes |
+| [Get device](/rest/api/iothub/service/devices/get-identity) | Yes | Yes |
+| [Delete module](/rest/api/iothub/service/modules/delete-identity) | Yes | Yes |
| [Get module](/java/api/com.microsoft.azure.sdk.iot.service.registrymanager.getmodule) | Yes | Yes | | [Get registry statistics](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-getdevicestatistics) | Yes | Yes | | [Get services statistics](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-getservicestatistics) | Yes | Yes |
The difference in supported capabilities between the basic and standard tiers of
| [Get import export jobs](/rest/api/iothub/service/jobs/getimportexportjobs) | Yes | Yes | | [Purge command queue](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-purgecommandqueue) | | Yes | | [Get device twin](/java/api/com.microsoft.azure.sdk.iot.device.deviceclient.getdevicetwin) | | Yes |
-| [Get module twin](/azure/iot-hub/iot-c-sdk-ref/iothub-devicetwin-h/iothubdevicetwin-getmoduletwin) | | Yes |
+| [Get module twin](/rest/api/iothub/service/modules/get-twin) | | Yes |
| [Invoke device method](./iot-hub-devguide-direct-methods.md) | | Yes | | [Update device twin](./iot-hub-devguide-device-twins.md) | | Yes |
-| [Update module twin](/azure/iot-hub/iot-c-sdk-ref/iothub-devicetwin-h/iothubdevicetwin-updatemoduletwin) | | Yes |
+| [Update module twin](/rest/api/iothub/service/modules/update-twin) | | Yes |
| [Abandon device bound notification](/rest/api/iothub/device/abandondeviceboundnotification) | | Yes | | [Complete device bound notification](/rest/api/iothub/device/completedeviceboundnotification) | | Yes | | [Cancel job](/rest/api/media/jobs/canceljob) | | Yes |
If you are approaching the allowed message limit on your IoT hub, you can use th
* For more information about IoT Hub capabilities and performance details, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub) or [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
-* To change your IoT Hub tier, follow the steps in [Upgrade your IoT hub](iot-hub-upgrade.md).
+* To change your IoT Hub tier, follow the steps in [Upgrade your IoT hub](iot-hub-upgrade.md).
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device-android.md
Title: Control a device from Azure IoT Hub (Android) | Microsoft Docs description: In this quickstart, you run two sample Java applications. One application is a service application that can remotely control devices connected to your hub. The other application runs on a physical or simulated device connected to your hub that can be controlled remotely.-+ ms.devlang: java Last updated 06/21/2019-+ #Customer intent: As a developer new to IoT Hub, I need to use a service application written for Android to control devices connected to the hub.
iot-hub Tutorial Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-connectivity.md
Title: Tutorial - Check device connectivity to Azure IoT Hub description: Tutorial - Use IoT Hub tools to troubleshoot, during development, device connectivity issues to your IoT hub. -+ -+ Last updated 10/26/2021
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
Title: Tutorial - Synchronize device state from Azure IoT Hub | Microsoft Docs description: Tutorial - Learn how to use device twins to configure your devices from the cloud, and receive status and compliance data from your devices. --++ ms.devlang: javascript
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 04/04/2022 Last updated : 04/08/2022 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| Name | Limit | Notes | | - | -- | -- |
-| Workflows per region per subscription | 1,000 workflows ||
+| Workflows per region per subscription | - Consumption: 1,000 workflows where each logic app is limited to 1 workflow <br><br>- Standard: Unlimited, based on the selected hosting plan, app activity, size of machine instances, and resource usage, where each logic app can have multiple workflows ||
| Workflow - Maximum name length | - Consumption: 80 characters <br><br>- Standard: 43 characters || | Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. | | Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. |
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 04/06/2022 --++
with the specific standard.
## Next steps - Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
+- Review the [regulatory compliance details for Azure Security Benchmark](../governance/policy/samples/gov-azure-security-benchmark.md).
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Last updated 04/05/2022- # Git integration for Azure Machine Learning
If your training files are not located in a git repository on your development e
## View the logged information
-The git information is stored in the properties for a training run. You can view this information using the Azure portal, Python SDK, and Azure CLI.
+The git information is stored in the properties for a training run. You can view this information using the Azure portal or Python SDK.
### Azure portal
After submitting a training run, a [Run](/python/api/azureml-core/azureml.core.r
run.properties['azureml.git.commit'] ```
-### Azure CLI
-
-The `az ml run` CLI command can be used to retrieve the properties from a run. For example, the following command returns the properties for the last run in the experiment named `train-on-amlcompute`:
-
-```azurecli-interactive
-az ml run list -e train-on-amlcompute --last 1 -w myworkspace -g myresourcegroup --query '[].properties'
-```
-
-For more information, see the [az ml run](/cli/azure/ml(v1)/run) reference documentation.
## Next steps
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
For multi-label classification, the dataset columns would be the same as multi-c
|Python list with quotes| `"['label1','label2','label3']"`| `"['label1']"`|`"[]"` > [!IMPORTANT]
-> Different parsers are used to read labels for these formats. If you are using the plaint text format, only use alphabetical, numerical and `'_'` in your labels. All other characters are recognized as the separator of labels.
+> Different parsers are used to read labels for these formats. If you are using the plain text format, only use alphabetical, numerical and `'_'` in your labels. All other characters are recognized as the separator of labels.
> > For example, if your label is `"cs.AI"`, it's read as `"cs"` and `"AI"`. Whereas with the Python list format, the label would be `"['cs.AI']"`, which is read as `"cs.AI"` .
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning -+ Last updated 06/08/2021 # Manage and optimize Azure Machine Learning costs
You can also configure the amount of time the node is idle before scale down. By
AmlCompute clusters can be configured for your changing workload requirements in Azure portal, using the [AmlCompute SDK class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute), [AmlCompute CLI](/cli/azure/ml(v1)/computetarget/create#az-ml-v1--computetarget-create-amlcompute), with the [REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable). -
-```azurecli
-az ml computetarget create amlcompute --name testcluster --vm-size Standard_NC6 --min-nodes 0 --max-nodes 5 --idle-seconds-before-scaledown 300
-```
## Set quotas on resources
mysql How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-move-regions.md
+
+ Title: Move Azure regions - Azure portal - Azure Database for MySQL Flexible server
+description: Move an Azure Database for MySQL Flexible server from one Azure region to another using the Azure portal.
+++++ Last updated : 04/08/2022
+#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
++
+# Move an Azure Database for MySQL flexible server to another region by using the Azure portal
+
+There are various scenarios for moving an existing Azure Database for MySQL flexible server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
+
+You can use Azure Database for MySQL flexible server's [geo-restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your flexible server. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region.
+
+> [!NOTE]
+> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
+
+## Prerequisites
+
+- Ensure that the source server has geo-redundancy enabled. You can enable geo-redundancy after server creation for locally redundant or same-zone redundant servers. Currently, for a zone-redundant high availability server, geo-redundancy can only be enabled or disabled at server create time.
+
+- Make sure that your Azure Database for MySQL source flexible server is deployed in the Azure region that you want to move from.
+
+## Move
+
+To move the Azure Database for MySQL flexible server to the geo-paired region using the Azure portal, use the following steps:
+
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+
+2. Click **Overview** from the left panel.
+
+3. From the overview page, click **Restore**.
+
+4. The Restore page opens with an option to choose **Geo-redundant restore**. If you've configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. Geo-redundant restore restores the server to the latest UTC timestamp, so after you select **Geo-redundant restore**, the point-in-time restore options can't be selected at the same time.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-flex.png" alt-text="Geo-restore option":::
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-enabled-flex.png" alt-text="Enabling Geo-Restore":::
+
+5. Provide a new server name in the **Name** field in the Server details section.
+
+6. When the primary region is down, you can't create geo-redundant servers in the geo-paired region, because storage can't be provisioned in the primary region. You must wait for the primary region to come back up before you can provision geo-redundant servers in the geo-paired region. While the primary region is down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the **Compute + Storage** **Configure Server** settings in the restore portal experience, and restoring as a locally redundant server to ensure business continuity.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window":::
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy":::
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server":::
+
+7. Select **Review + Create** to review your selections.
+
+8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes.
+
+The new server created by geo-restore has the same server admin login name and password that were valid for the existing server when the restore was initiated. The password can be changed from the new server's **Overview** page. Additionally, during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured.
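+
+If you prefer to script the move, the Azure CLI provides a geo-restore command for flexible servers. The following is a minimal sketch with placeholder names and region, assuming your CLI version includes `az mysql flexible-server geo-restore`:
+
+```azurecli
+# Geo-restore the source flexible server into the geo-paired region (placeholder names and region).
+az mysql flexible-server geo-restore \
+  --resource-group my-target-rg \
+  --name my-restored-server \
+  --source-server my-source-server \
+  --location eastus2
+```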
+
+## Clean up source server
+
+You may want to delete the source Azure Database for MySQL flexible server. To do so, use the following steps:
+
+1. Once the new server has been created in the target region, locate and select your Azure Database for MySQL source flexible server.
+1. In the **Overview** window, select **Delete**.
+1. Type the name of the source server to confirm that you want to delete it.
+1. Select **Delete**.
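+
+If you'd rather use the Azure CLI, a minimal sketch for deleting the source server (placeholder names) is:
+
+```azurecli
+# Delete the source flexible server after verifying the restored server (placeholder names).
+az mysql flexible-server delete \
+  --resource-group my-source-rg \
+  --name my-source-server \
+  --yes
+```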
+
+## Next steps
+
+In this tutorial, you moved an Azure Database for MySQL flexible server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
+
+- Learn more about [geo-restore](concepts-backup-restore.md#geo-restore)
+- Learn more about [Azure paired regions](overview.md#azure-regions) supported for Azure Database for MySQL flexible server
+- Learn more about [business continuity](concepts-business-continuity.md) options
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
Previously updated : 01/24/2022 Last updated : 04/08/2022
The table below shows the supported capabilities for each data source. Select th
| Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) | || [Azure Cosmos DB](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No| || [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No |
+|| [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No |
|| [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | No | || [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
+|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No |
|| [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No | || [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | || [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | || [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | No |
-|| [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No |
+|| [Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No |
|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | || [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No|
The table below shows the supported capabilities for each data source. Select th
|| [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#lineage) | No | || [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| No | [Yes*](register-scan-oracle-source.md#lineage) | No| || [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No |
-|| [SAP Business Warehose](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No |
+|| [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No |
|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | || [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | || [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No|
purview How To Monitor With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-with-azure-monitor.md
Title: How to monitor Azure Purview description: Learn how to configure Azure Purview metrics, alerts, and diagnostic settings by using Azure Monitor.--++ Previously updated : 12/03/2020 Last updated : 04/07/2022 # Azure Purview metrics in Azure Monitor
The Sample log for an event instance is shown in the below section.
## Next steps
-[View Asset insights](asset-insights.md)
+[Elastic data map in Azure Purview](concept-elastic-data-map.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
To create and run a new scan, do the following:
:::image type="content" source="media/register-scan-oracle-source/scan.png" alt-text="scan oracle" border="true":::
+1. Select **Test connection**.
+ 1. Select **Continue**. 1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 03/04/2022 Last updated : 04/08/2022
This article outlines how to register a Power BI tenant, and how to authenticate
- You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details. -- If delegated auth is used, make sure proper [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to Power BI admin user that is used for the scan.
+- If delegated auth is used:
+ - Make sure a proper [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the Power BI admin user that's used for the scan.
+
+ - Exclude the user from Azure multi-factor authentication.
- If self-hosted integration runtime is used:
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
When setting up scan, you can choose to scan an entire Salesforce organization,
* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
-* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
+You can use the fully managed Azure integration runtime for the scan. Make sure to provide the security token to authenticate to Salesforce; learn more from the credential configuration in the [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it:
- You can use the managed Azure integration runtime for scan - make sure to provide the security token to authenticate to Salesforce, learn more from the credential configuration in [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it:
+* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
description: Learn how to set up an indexer connection to Azure SQL Database us
-+ Last updated 02/11/2022
DROP USER IF EXISTS [insert your search service name or user-assigned managed id
## 2 - Add a role assignment
-In this step you will give your Azure Cognitive Search service permission to read data from your SQL Server.
+In this section you'll give your Azure Cognitive Search service permission to read data from your SQL Server. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+1. In the Azure portal, navigate to your Azure SQL Server page.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Roles** tab, select the appropriate **Reader** role.
-1. In the Azure portal navigate to your Azure SQL Server page.
-2. Select **Access control (IAM)**
-3. Select **Add** then **Add role assignment**
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
- ![Add role assignment](./media/search-managed-identities/add-role-assignment-sql-server.png "Add role assignment")
+1. Select your Azure subscription.
-4. Select the appropriate **Reader** role.
-5. Leave **Assign access to** as **Azure AD user, group or service principal**
-6. If you're using a system-assigned managed identity, search for your search service, then select it. If you're using a user-assigned managed identity, search for the name of the user-assigned managed identity, then select it. Select **Save**.
+1. If you're using a system-assigned managed identity, select **System-assigned managed identity**, search for your search service, and then select it.
- Example for Azure SQL using a system-assigned managed identity:
+1. Otherwise, if you're using a user-assigned managed identity, select **User-assigned managed identity**, search for the name of the user-assigned managed identity, and then select it.
- ![Add reader role assignment](./media/search-managed-identities/add-role-assignment-sql-server-reader-role.png "Add reader role assignment")
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
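+
+If you'd rather script the role assignment instead of using the portal, the following Azure CLI sketch assigns the **Reader** role to the search service's managed identity at the SQL server scope. The object ID, subscription, resource group, and server names are placeholders for your own values:
+
+```azurecli
+# Assign the Reader role to the search service's managed identity on the Azure SQL server (placeholder values).
+az role assignment create \
+  --assignee-object-id <search-service-principal-id> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/servers/<sql-server-name>"
+```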
## 3 - Create the data source
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 02/03/2022 Last updated : 04/08/2022
Built-in roles include generally available and preview roles.
| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. | | [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available and preview) This role is equivalent to Contributor at the service-level, but with full access to all actions on indexes, synonym maps, indexers, data sources, and skillsets through [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage the service. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is equivalent to Contributor at the service-level. </br></br>(Preview) Provides full access to all actions on indexes, synonym maps, indexers, data sources, and skillsets through [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage the service. This role has been extended to include data plane operations. Data plane support is in preview. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only access to search indexes on the search service. This role is for apps and users who run queries. |
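
For example, a minimal Azure CLI sketch for assigning one of the preview data-plane roles at the search service scope (the IDs and names are placeholders) might look like this:

```azurecli
# Assign the preview Search Index Data Reader role to a user or service principal (placeholder values).
az role assignment create \
  --assignee <user-or-principal-object-id> \
  --role "Search Index Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>"
```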
search Service Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-create-private-endpoint.md
Private endpoints are provided by [Azure Private Link](../private-link/private-l
You can create a private endpoint in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version 2020-03-13](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).

> [!NOTE]
-> When the service endpoint is private, some portal features are disabled. You can view and manage service level information, but index, indexer, and skillset information is hidden for security reasons. As an alternative to the portal, you can use the [VS Code Extension](https://aka.ms/vscode-search) to interact with the various components in the service.
+> When the service endpoint is private, some portal features are disabled. You can view and manage service level information, but index, indexer, and skillset information is hidden for security reasons. As an alternative to the portal, you can use the [VS Code Extension](https://aka.ms/vscode-search) to interact with the various components in the service. Additionally, ARM templates don't currently have support for updating existing Private Endpoints that are connected to a search service.
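As a rough illustration of the CLI alternative mentioned above, the following sketch creates a private endpoint for an existing search service. Resource names are placeholders, `searchService` is the sub-resource group ID typically used for Azure Cognitive Search, and flag names may vary slightly across CLI versions.

```azurecli
az network private-endpoint create \
    --name my-search-private-endpoint \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Search/searchServices/<service-name>" \
    --group-id searchService \
    --connection-name my-search-connection
```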
## Why use a Private Endpoint for secure access?
When you're done using the Private Endpoint, search service, and the VM, delete
1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. ## Next steps
-In this article, you created a VM on a virtual network and a search service with a Private Endpoint. You connected to the VM from the internet and securely communicated to the search service using Private Link. To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md).
+In this article, you created a VM on a virtual network and a search service with a Private Endpoint. You connected to the VM from the internet and securely communicated to the search service using Private Link. To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md).
security Protection Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md
na Previously updated : 07/10/2020 Last updated : 04/08/2022
When you create your storage account, select one of the following replication op
- **Zone-redundant storage (ZRS)**: Zone-redundant storage maintains three copies of your data. ZRS is replicated three times across two to three facilities to provide higher durability than LRS. Replication occurs within a single region or across two regions. ZRS helps ensure that your data is durable within a single region.
- **Geo-redundant storage (GRS)**: Geo-redundant storage is enabled for your storage account by default when you create it. GRS maintains six copies of your data. With GRS, your data is replicated three times within the primary region. Your data is also replicated three times in a secondary region hundreds of miles away from the primary region, providing the highest level of durability. In the event of a failure at the primary region, Azure Storage fails over to the secondary region. GRS helps ensure that your data is durable in two separate regions.
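For illustration, the replication option maps to the storage account SKU you choose at creation time; a minimal sketch with placeholder names:

```azurecli
# Create a storage account with geo-redundant storage (GRS); use Standard_LRS or Standard_ZRS for the other options.
az storage account create \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Standard_GRS
```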
-**Data destruction**: When customers delete data or leave Azure, Microsoft follows strict standards for overwriting storage resources before their reuse, as well as the physical destruction of decommissioned hardware. Microsoft executes a complete deletion of data on customer request and on contract termination.
+**Data destruction**: When customers delete data or leave Azure, Microsoft follows strict standards for deleting data, as well as the physical destruction of decommissioned hardware. Microsoft executes a complete deletion of data on customer request and on contract termination. For more information, see [Data management at Microsoft](https://www.microsoft.com/trust-center/privacy/data-management).
## Customer data ownership

Microsoft does not inspect, approve, or monitor applications that customers deploy to Azure. Moreover, Microsoft does not know what kind of data customers choose to store in Azure. Microsoft does not claim data ownership over the customer information that's entered into Azure.
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md
By default, Service Bus namespaces are accessible from internet as long as the r
This feature is helpful in scenarios in which Azure Service Bus should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or addresses of a corporate NAT gateway.

## IP firewall rules
-The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
+The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol: AMQP (port 5671) and HTTPS (port 443). Any connection attempt from an IP address that doesn't match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
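For example, a rule that allows a single address range can be added with a sketch like the following; the namespace and the address are placeholders, and the exact subcommand may differ slightly across CLI versions.

```azurecli
az servicebus namespace network-rule add \
    --resource-group myResourceGroup \
    --namespace-name mynamespace \
    --ip-address 203.0.113.0/24 \
    --action Allow
```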
## Important points

- Firewalls and Virtual Networks are supported only in the **premium** tier of Service Bus. If upgrading to the **premium** tier isn't an option, we recommend that you keep the Shared Access Signature (SAS) token secure and share it with only authorized users. For information about SAS authentication, see [Authentication and authorization](service-bus-authentication-and-authorization.md#shared-access-signature).
For constraining access to Service Bus to Azure virtual networks, see the follow
[lnk-deploy]: ../azure-resource-manager/templates/deploy-powershell.md
[lnk-vnet]: service-bus-service-endpoints.md
-[express-route]: ../expressroute/expressroute-faqs.md#supported-services
+[express-route]: ../expressroute/expressroute-faqs.md#supported-services
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-security-certificate-management.md
As we've seen in the [companion article](cluster-security-certificates.md), a ce
The goal is to automate certificate management as much as possible to ensure uninterrupted availability of the cluster and offer security assurances, given that the process is user-touch-free. This goal is attainable currently in Azure Service Fabric clusters; the remainder of the article will first deconstruct certificate management, and later will focus on enabling autorollover. Specifically, the topics in scope are:
- - assumptions related to the separation of attributions between owner and platform, in the context of managing certificates
- - the long pipeline of certificates from issuance to consumption
- - certificate rotation - why, how and when
- - what could possibly go wrong?
+ - Assumptions related to the separation of attributions between owner and platform, in the context of managing certificates
+ - The long pipeline of certificates from issuance to consumption
+ - Certificate rotation - why, how and when
+ - What could possibly go wrong?
-Aspect such as securing/managing domain names, enrolling into certificates, or setting up authorization controls to enforce certificate issuance are out of the scope of this article. Refer to the Registration Authority (RA) of your favorite Public Key Infrastructure (PKI) service. Microsoft-internal consumers: please reach out to Azure Security.
+Aspects such as securing/managing domain names, enrolling into certificates, or setting up authorization controls to enforce certificate issuance are beyond the scope of this article. Refer to the Registration Authority (RA) of your favorite Public Key Infrastructure (PKI) service. Microsoft-internal consumers: please reach out to Azure Security.
## Roles and entities involved in certificate management

The security approach in a Service Fabric cluster is a case of "cluster owner declares it, Service Fabric runtime enforces it". By that we mean that almost none of the certificates, keys, or other credentials of identities participating in a cluster's functioning come from the service itself; they are all declared by the cluster owner. Furthermore, the cluster owner is also responsible for provisioning the certificates into the cluster, renewing them as needed, and ensuring the security of the certificates at all times. More specifically, the cluster owner must ensure that:
- - certificates declared in the NodeType section of the cluster manifest can be found on each node of that type, according to the [presentation rules](cluster-security-certificates.md#presentation-rules)
- - certificates declared above are installed inclusive of their corresponding private keys
- - certificates declared in the presentation rules should pass the [validation rules](cluster-security-certificates.md#validation-rules)
+ - Certificates declared in the NodeType section of the cluster manifest can be found on each node of that type, according to the [presentation rules](cluster-security-certificates.md#presentation-rules)
+ - Certificates declared above are installed with their corresponding private keys included.
+ - Certificates declared in the presentation rules should pass the [validation rules](cluster-security-certificates.md#validation-rules)
-Service Fabric, for its part, assumes the responsibilities of:
- - locating/finding certificates matching the declarations in the cluster definition
- - granting access to the corresponding private keys to Service Fabric-controlled entities on a 'need' basis
- - validating certificates in strict accordance with established security best-practices and the cluster definition
- - raising alerts on impending expiration of certificates, or failures to perform the basic steps of certificate validation
- - validating (to some degree) that the certificate-related aspects of the cluster definition are met by the underlying configuration of the hosts
+Service Fabric, for its part, assumes the following responsibilities:
+ - Locating certificates that match the declarations in the cluster definition
+ - Granting access to the corresponding private keys to Service Fabric-controlled entities on a 'need' basis
+ - Validating certificates in strict accordance with established security best-practices and the cluster definition
+ - Raising alerts on impending expiration of certificates, or failures to perform the basic steps of certificate validation
+ - Validating (to some degree) that the certificate-related aspects of the cluster definition are met by the underlying configuration of the hosts
It follows that the certificate management burden (as active operations) falls solely on the cluster owner. In the following sections, we'll take a closer look at each of the management operations, with available mechanisms and their impact on the cluster.

## The journey of a certificate

Let us quickly revisit the progression of a certificate from issuance to consumption in the context of a Service Fabric cluster:
- 1. A domain owner registers with the RA of a PKI a domain or subject that they'd like to associate with ensuing certificates; the certificates will, in turn, constitute proofs of ownership of said domain or subject
+ 1. A domain owner registers with the RA of a PKI a domain or subject that they'd like to associate with ensuing certificates; the certificates will, in turn, constitute proofs of ownership of said domain or subject.
2. The domain owner also designates in the RA the identities of authorized requesters - entities that are entitled to request the enrollment of certificates with the specified domain or subject; in Microsoft Azure, the default identity provider is Azure Active Directory, and authorized requesters are designated by their corresponding AAD identity (or via security groups)
- 3. An authorized requester then enrolls into a certificate via a Secret Management Service; in Microsoft Azure, the SMS of choice is Azure Key Vault (AKV), which securely stores and allows the retrieval of secrets/certificates by authorized entities. AKV also renews/re-keys the certificate as configured in the associated certificate policy. (AKV uses AAD as the identity provider.)
- 4. An authorized retriever - which we'll refer to as a 'provisioning agent' - retrieves the certificate, inclusive of its private key, from the vault, and installs it on the machines hosting the cluster
+ 3. An authorized requester then enrolls into a certificate via a Secret Management Service; in Microsoft Azure, the SMS of choice is Azure Key Vault (AKV), which securely stores and allows the retrieval of secrets and certificates by authorized entities. AKV also renews/re-keys the certificate as configured in the associated certificate policy (AKV uses AAD as the identity provider).
+ 4. An authorized retriever - which we'll refer to as a 'provisioning agent' - retrieves the certificate, inclusive of its private key, from the vault, and installs it on the machines hosting the cluster.
 5. The Service Fabric service (running elevated on each node) grants access to the certificate to allowed Service Fabric entities; these are designated by local groups, and split between ServiceFabricAdministrators and ServiceFabricAllowedUsers
 6. The Service Fabric runtime accesses and uses the certificate to establish federation, or to authenticate to inbound requests from authorized clients
 7. The provisioning agent monitors the vault certificate, and triggers the provisioning flow upon detecting renewal; subsequently, the cluster owner updates the cluster definition, if needed, to indicate the intent to roll over the certificate.
These steps are illustrated below; note the differences in provisioning between
![Provisioning certificates declared by subject common name][Image2]

### Certificate enrollment
-This topic is covered in detail in the Key Vault [documentation](../key-vault/certificates/create-certificate.md); we're including a synopsis here for continuity and easier reference. Continuing with Azure as the context, and using Azure Key Vault as the secret management service, an authorized certificate requester must have at least certificate management permissions on the vault, granted by the vault owner; the requester would then enroll into a certificate as follows:
- - creates a certificate policy in Azure Key Vault (AKV), which specifies the domain/subject of the certificate, the desired issuer, key type and length, intended key usage and more; see [Certificates in Azure Key Vault](../key-vault/certificates/certificate-scenarios.md) for details.
- - creates a certificate in the same vault with the policy specified above; this, in turn, generates a key pair as vault objects, a certificate signing request signed with the private key, and which is then forwarded to the designated issuer for signing
- - once the issuer (Certificate Authority) replies with the signed certificate, the result is merged into the vault, and the certificate is available for the following operations:
- - under {vaultUri}/certificates/{name}: the certificate including the public key and metadata
- - under {vaultUri}/keys/{name}: the certificate's private key, available for cryptographic operations (wrap/unwrap, sign/verify)
- - under {vaultUri}/secrets/{name}: the certificate inclusive of its private key, available for downloading as an unprotected pfx or pem file
- Recall that a vault certificate is, in fact, a chronological line of certificate instances, sharing a policy. Certificate versions will be created according to the lifetime and renewal attributes of the policy. It is highly recommended that vault certificates not share subjects or domains/DNS names; it can be disruptive in a cluster to provision certificate instances from different vault certificates, with identical subjects but substantially different other attributes, such as issuer, key usages etc.
+
+This topic is covered in detail in the [Key Vault documentation](../key-vault/certificates/create-certificate.md); we're including a synopsis here for continuity and easier reference. Continuing with Azure as the context, and using Azure Key Vault as the secret management service, an authorized certificate requester must have at least certificate management permissions on the vault, granted by the vault owner; the requester would then enroll into a certificate as follows:
+
+ - Create a certificate policy in Azure Key Vault (AKV), which specifies the domain/subject of the certificate, the desired issuer, key type and length, intended key usage and more; see [Certificates in Azure Key Vault](../key-vault/certificates/certificate-scenarios.md) for details.
+ - Create a certificate in the same vault with the policy specified above; this, in turn, generates a key pair as vault objects and a certificate signing request signed with the private key, which is then forwarded to the designated issuer for signing.
+ - Once the issuer (Certificate Authority) replies with the signed certificate, the result is merged into the vault, and the certificate data is available:
+ - Under `{vaultUri}/certificates/{name}`: The certificate including the public key and metadata.
+ - Under `{vaultUri}/keys/{name}`: The certificate's private key, available for cryptographic operations (wrap/unwrap, sign/verify).
+ - Under `{vaultUri}/secrets/{name}`: The certificate inclusive of its private key, available for downloading as an unprotected pfx or pem file.
+
+Recall that a certificate in the vault contains a chronological list of certificate instances that share a policy. Certificate versions will be created according to the lifetime and renewal attributes of this policy. It is highly recommended that vault certificates not share subjects or domains/DNS names, as it can be disruptive in a cluster to provision certificate instances from different vault certificates, with identical subjects but substantially different other attributes, such as issuer, key usages etc.
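As a minimal sketch of the enrollment flow described above, using the CLI with the default (self-signed) policy for brevity; a CA-issued certificate would instead reference an issuer configured in the vault. The vault and certificate names are placeholders.

```azurecli
# Create a policy-driven certificate in the vault; new versions are created on renewal per the policy.
az keyvault certificate create \
    --vault-name my-cluster-vault \
    --name my-cluster-cert \
    --policy "$(az keyvault certificate get-default-policy)"
```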
At this point, a certificate exists in the vault, ready for consumption. Onward to:

### Certificate provisioning
+
We mentioned a 'provisioning agent', which is the entity that retrieves the certificate, inclusive of its private key, from the vault and installs it onto each of the hosts of the cluster. (Recall that Service Fabric does not provision certificates.) In our context, the cluster will be hosted on a collection of Azure VMs and/or virtual machine scale sets. In Azure, provisioning a certificate from a vault to a VM/VMSS can be achieved with the following mechanisms - assuming, as above, that the provisioning agent was previously granted 'get' permissions on the vault by the vault owner:
- - ad-hoc: an operator retrieves the certificate from the vault (as pfx/PKCS #12 or pem) and installs it on each node
- - as a virtual machine scale set 'secret' during deployment: the Compute service retrieves, using its first party identity on behalf of the operator, the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml)); note this allows the provisioning of versioned secrets only
- - using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
+
+ - Ad-hoc: an operator retrieves the certificate from the vault (as pfx/PKCS #12 or pem) and installs it on each node
+ - As a virtual machine scale set 'secret' during deployment: Using its first party identity on behalf of the operator, the Compute service retrieves the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml)); note this allows the provisioning of versioned secrets only
+ - Using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
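In all of these mechanisms, the provisioning agent (or the VM/VMSS managed identity) needs 'get' permissions on the vault. A minimal sketch, assuming a vault that uses access policies and a placeholder object ID:

```azurecli
az keyvault set-policy \
    --name my-cluster-vault \
    --object-id <provisioning-agent-or-managed-identity-object-id> \
    --secret-permissions get \
    --certificate-permissions get
```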
The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml).
In either case, the rotated certificate is now provisioned to all of the nodes,
- existing connections will be kept alive/allowed to naturally expire or otherwise terminate; an internal handler will have been notified that a new match exists

> [!NOTE]
-> Prior to version 7.2.445 (7.2 CU4), Service Fabric selected the farthest expiring certificate (the certificate with the farthest 'NotAfter' property)
+> Starting with version 7.2.445 (7.2 CU4), Service Fabric selects the certificate with the greatest 'NotBefore' property value (the most recently issued). Prior to 7.2 CU4, Service Fabric picked the valid certificate with the greatest 'NotAfter' value (the farthest expiring).
This translates into the following important observations:

- The renewal certificate may be ignored if its expiration date is sooner than that of the certificate currently in use.
site-recovery Deploy Vmware Azure Replication Appliance Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-preview.md
In case of any organizational restrictions, you can manually set up the Site Rec
![Register appliance](./media/deploy-vmware-azure-replication-appliance-preview/app-setup-register.png)
- - **Friendly name of appliance** : Provide a friendly name with which you want to track this appliance in the Azure portal under recovery services vault infrastructure.
+ - **Friendly name of appliance**: Provide a friendly name with which you want to track this appliance in the Azure portal under recovery services vault infrastructure.
- - **Azure Site Recovery replication appliance key** : Copy the key from the portal by navigating to **Recovery Services vault** > **Getting started** > **VMware to Azure Prepare Infrastructure**.
+ - **Azure Site Recovery replication appliance key**: Copy the key from the portal by navigating to **Recovery Services vault** > **Getting started** > **Site Recovery** > **VMware to Azure: Prepare Infrastructure**.
- - After pasting the key, select **Login.**
- You will be redirected to a new authentication tab.
+ - After pasting the key, select **Login**. You will be redirected to a new authentication tab.
- By default, an authentication code will be generated as highlighted below, in the authentication manager page. Use this code in the authentication tab.
+ By default, an authentication code is generated on the **Appliance configuration manager** page, as highlighted below. Use this code in the authentication tab.
- Enter your Microsoft Azure credentials to complete registration.
- After successful registration, you can close the tab and move to configuration manager to continue the set up.
+ After successful registration, you can close the tab and move to the appliance configuration manager to continue the setup.
![authentication code](./media/deploy-vmware-azure-replication-appliance-preview/enter-code.png)
In case of any organizational restrictions, you can manually set up the Site Rec
> An authentication code expires within 5 minutes of generation. In case of inactivity for more than this duration, you will be prompted to login again to Azure.
-6. Select **Login** to reconnect with the session. For authentication code, refer to the section *Summary* or *Register with Azure Recovery Services vault* in the configuration manger.
-
-7. After successful login, Subscription, Resource Group and Recovery Services vault details are displayed. You can logout in case you want to change the vault. Else, Select **Continue** to proceed.
+6. After successful login, the Subscription, Resource Group, and Recovery Services vault details are displayed. You can log out if you want to change the vault. Otherwise, select **Continue** to proceed.
![Appliance registered](./media/deploy-vmware-azure-replication-appliance-preview/app-setup.png)
In case of any organizational restrictions, you can manually set up the Site Rec
![Configuration of vCenter](./media/deploy-vmware-azure-replication-appliance-preview/vcenter-information.png)
-8. Select **Add vCenter Server** to add vCenter information. Enter the server name or IP address of the vCenter and port information. Post that, provide username, password and friendly name and is used to fetch details of [virtual machine managed through the vCenter](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery). The user account details will be encrypted and stored locally in the machine.
+7. Select **Add vCenter Server** to add vCenter information. Enter the server name or IP address of the vCenter and port information. Then, provide the username, password, and a friendly name. These details are used to fetch the [virtual machines managed through the vCenter](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery). The user account details will be encrypted and stored locally on the machine.
>[!NOTE]
-> iF you're trying to add the same vCenter Server to multiple appliances, then ensure that the same friendly name is used in both the appliances.
+> If you're trying to add the same vCenter Server to multiple appliances, then ensure that the same friendly name is used in all the appliances.
-9. After successfully saving the vCenter information, select **Add virtual machine credentials** to provide user details of the VMs discovered through the vCenter.
+8. After successfully saving the vCenter information, select **Add virtual machine credentials** to provide user details of the VMs discovered through the vCenter.
>[!NOTE]
- > - For Linux OS, ensure to provide root credentials and for Windows OS, a user account with admin privileges should be added, these credentials will be used to push mobility agent on to the source VM during enable replication operation. The credentials can be chosen per VM in the Azure portal during enable replication workflow.
+ > - For Linux OS, provide root credentials; for Windows OS, add a user account with admin privileges. These credentials are used to push-install the mobility agent onto the source VM during the enable replication operation. The credentials can be chosen per VM in the Azure portal during the enable replication workflow.
> - Visit the appliance configurator to edit or add credentials to access your machines.
-10. After successfully adding the details, select **Continue** to install all Azure Site Recovery replication appliance components and register with Azure services. This activity can take up to 30 minutes.
+9. After successfully adding the details, select **Continue** to install all Azure Site Recovery replication appliance components and register with Azure services. This activity can take up to 30 minutes.
Ensure you do not close the browser while configuration is in progress.
site-recovery Site Recovery Deployment Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md
Previously updated : 05/27/2021 Last updated : 04/06/2022
This article is the Azure Site Recovery Deployment Planner user guide for VMware
## Overview
-Before you begin to protect any VMware virtual machines (VMs) by using Azure Site Recovery, allocate sufficient bandwidth, based on your daily data-change rate, to meet your desired recovery point objective (RPO). Be sure to deploy the right number of configuration servers and process servers on-premises.
+Before you begin to protect any VMware vSphere virtual machines (VMs) by using Azure Site Recovery, allocate sufficient bandwidth, based on your daily data-change rate, to meet your desired recovery point objective (RPO). Be sure to deploy the right number of configuration servers and process servers on-premises.
You also need to create the right type and number of target Azure Storage accounts. You create either standard or premium storage accounts, factoring in growth on your source production servers because of increased usage over time. You choose the storage type per VM, based on workload characteristics (for example, read/write I/O operations per second [IOPS] or data churn) and Site Recovery limits.
The tool provides the following details:
| **Category** | **VMware to Azure** |**Hyper-V to Azure**|**Azure to Azure**|**Hyper-V to secondary site**|**VMware to secondary site**
--|--|--|--|--|--
Supported scenarios |Yes|Yes|No|Yes*|No
-Supported version | vCenter 7.0, 6.7, 6.5, 6.0 or 5.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA
-Supported configuration|vCenter, ESXi| Hyper-V cluster, Hyper-V host|NA|Hyper-V cluster, Hyper-V host|NA|
+Supported version | vCenter Server 7.0, 6.7, 6.5, 6.0 or 5.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA
+Supported configuration|vCenter Server, ESXi| Hyper-V cluster, Hyper-V host|NA|Hyper-V cluster, Hyper-V host|NA|
Number of servers that can be profiled per running instance of Site Recovery Deployment Planner |Single (VMs belonging to one vCenter Server or one ESXi server can be profiled at a time)|Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA |Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA

*The tool is primarily for the Hyper-V to Azure disaster recovery scenario. For Hyper-V to secondary site disaster recovery, it can be used only to understand source-side recommendations like required network bandwidth, required free storage space on each of the source Hyper-V servers, and initial replication batching numbers and batch definitions. Ignore the Azure recommendations and costs from the report. Also, the Get Throughput operation is not applicable for the Hyper-V-to-secondary-site disaster recovery scenario.
The tool has two main phases: profiling and report generation. There is also a t
The tool is packaged in a .zip folder. The current version of the tool supports only the VMware to Azure scenario.

2. Copy the .zip folder to the Windows server from which you want to run the tool.
-You can run the tool from Windows Server 2012 R2 if the server has network access to connect to the vCenter server/vSphere ESXi host that holds the VMs to be profiled. However, we recommend that you run the tool on a server whose hardware configuration meets the [configuration server sizing guidelines](/en-in/azure/site-recovery/site-recovery-plan-capacity-vmware#size-recommendations-for-the-configuration-server). If you already deployed Site Recovery components on-premises, run the tool from the configuration server.
+You can run the tool from Windows Server 2012 R2 if the server has network access to connect to the vCenter Server/vSphere ESXi host that holds the VMs to be profiled. However, we recommend that you run the tool on a server whose hardware configuration meets the [configuration server sizing guidelines](site-recovery-plan-capacity-vmware.md#size-recommendations-for-the-configuration-server-and-inbuilt-process-server). If you already deployed Site Recovery components on-premises, run the tool from the configuration server.
We recommend that you have the same hardware configuration as the configuration server (which has an in-built process server) on the server where you run the tool. Such a configuration ensures that the achieved throughput that the tool reports matches the actual throughput that Site Recovery can achieve during replication. The throughput calculation depends on available network bandwidth on the server and hardware configuration (such as CPU and storage) of the server. If you run the tool from any other server, the throughput is calculated from that server to Azure. Also, because the hardware configuration of the server might differ from that of the configuration server, the achieved throughput that the tool reports might be inaccurate.
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
This article describes how to enable replication for on-premises VMware VMs, for
For information on how to set up disaster recovery in Azure Site Recovery Classic releases, see [the tutorial](vmware-azure-tutorial.md).
-This is the third tutorial in a series that shows you how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises VMware environment](vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
+This is the second tutorial in a series that shows you how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-preview.md) for disaster recovery to Azure.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Set up the replication target settings. > * Enable replication for a VMware VM.
-> [!NOTE]
-> Tutorials show you the simplest deployment path for a scenario. They use default options where possible, and don't show all possible settings and paths. For detailed instructions, review the article in the How To section of the Site Recovery Table of Contents.
-
## Get started

VMware to Azure replication includes the following procedures:

- Sign in to the [Azure portal](https://portal.azure.com/).
-- To get started, navigate to [Azure preview portal](https://aka.ms/rcmcanary). And do the steps detailed in the following sections.
- Prepare Azure account
- Prepare infrastructure
- [Create a recovery Services vault](./quickstart-create-vault-template.md?tabs=CLI)
spatial-anchors Move Azure Spatial Anchors Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/move-azure-spatial-anchors-account.md
+
+ Title: Move an Azure Spatial Anchors account between regions
+description: Move an Azure Spatial Anchors account between regions
+++++ Last updated : 04/07/2022+++
+#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
++
+# Move a Spatial Anchors account between regions
+
+This article describes how to move a Spatial Anchors account to a different Azure region. You might move your resources to another region for many reasons. For example, to take advantage of a new Azure region, to deploy features or services available in specific regions only, to meet internal policy and governance requirements, or in response to capacity planning requirements.
+
+## Prerequisites
+
+* Make sure that the Spatial Anchors account is in the Azure source region that you want to move from.
+* Spatial Anchors accounts can't be moved between regions. You'll have to associate a new Spatial Anchors account in your source code to point to the target region. You'll also have to recreate all the anchors that you had previously created.
+
+## Prepare and move
+
+### Create a new Spatial Anchors account in the target region
++
+### Update your source code
+
+The next step is to associate your new Spatial Anchors account in your source code. You already took note of the **Account Key**, **Account ID**, and **Account Domain** values. You can use them to update the source code of your apps or web services.
+
+## Verify
+
+Run your app or web service and verify it's still functional after the move. You'll need to recreate all the anchors that you had previously created.
+
+## Clean up
+
+To complete the move of the Spatial Anchors account, delete the source Spatial Anchors account or resource group. To do so, select the Spatial Anchors account or resource group from your dashboard in the portal and select Delete at the top of each page.
+
+## Next steps
+
+In this article, you moved a Spatial Anchors account from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+> [!div class="nextstepaction"]
+> [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
spatial-anchors Get Started Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-android.md
To complete this quickstart, make sure you have:
- Additional device drivers may be required for your computer to communicate with your Android device. See [here](https://developer.android.com/studio/run/device.html) for additional information and instructions. - Your app must target ARCore **1.11.0**.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Open the sample project
spatial-anchors Get Started Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-hololens.md
To complete this quickstart, make sure you have:
- A HoloLens device with [developer mode](/windows/mixed-reality/using-visual-studio) enabled. This article requires a HoloLens device with the [Windows 10 May 2020 Update](/windows/mixed-reality/whats-new/release-notes-may-2020). To update to the latest release on HoloLens, open the **Settings** app, go to **Update & Security**, then select the **Check for updates** button. - Your app must set the **spatialPerception** capability in its AppX manifest.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Open the sample project
spatial-anchors Get Started Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-ios.md
To complete this quickstart, make sure you have:
1. Update your git config with `git lfs install` (for the current user) or `git lfs install --system` (for the entire system). - A developer enabled <a href="https://developer.apple.com/documentation/arkit/verifying_device_support_and_user_permission" target="_blank">ARKit compatible</a> iOS device.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Open the sample project
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-android.md
To complete this quickstart, make sure you have:
- If running on macOS, get Git installed via HomeBrew. Enter the following command into a single line of the Terminal: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`. Then, run `brew install git` and `brew install git-lfs`. - A Unity installation, including the **Android Build Support** with **Android SDK & NDK Tools** and **OpenJDK** modules. For supported versions and required capabilities, visit the [Unity project setup page](../how-tos/setup-unity-project.md).
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Download sample project and import SDK
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
To complete this quickstart:
- You need a Windows computer with <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a> or later installed. Your Visual Studio installation must include the **Universal Windows Platform development** workload and the **Windows 10 SDK (10.0.18362.0 or newer)** component. You must also install <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a> and <a href="https://git-lfs.github.com/">Git LFS</a>. - You need to have Unity installed. For supported versions and required capabilities, visit the [Unity project setup page](../how-tos/setup-unity-project.md).
+## Create a Spatial Anchors resource
[!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)]
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
To complete this quickstart, make sure you have:
- A Unity installation. For supported versions and required capabilities, visit the [Unity project setup page](../how-tos/setup-unity-project.md). - Git installed via HomeBrew. Enter the following command into a single line of the Terminal: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`. Then, run `brew install git` and `brew install git-lfs`.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Download sample project and import SDK
spatial-anchors Get Started Xamarin Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-android.md
To complete this quickstart, make sure you have:
- Additional device drivers may be required for your computer to communicate with your Android device. For more information, see [here](https://developer.android.com/studio/run/device.html). - Your app must target ARCore **1.8**.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Open the sample project
spatial-anchors Get Started Xamarin Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-ios.md
To complete this quickstart, make sure you have:
- <a href="https://git-scm.com/download/mac" target="_blank">Git for macOS</a>. - <a href="https://git-lfs.github.com/">Git LFS</a>.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Open the sample project
spatial-anchors Tutorial New Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-new-android-app.md
Finally, add the following code into your `handleTap()` method. It will attach a
Before proceeding any further, you'll need to create an Azure Spatial Anchors account to get the account Identifier, Key, and Domain, if you don't already have them. Follow the next section to obtain them.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Upload your local anchor into the cloud
spatial-anchors Tutorial New Unity Hololens App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-new-unity-hololens-app.md
We'll now set some Unity project settings that help us target the Windows Hologr
## Try it out #1

You should now have an empty scene that is ready to be deployed to your HoloLens device. To test out that everything is working, build your app in **Unity** and deploy it from **Visual Studio**. Follow [**Using Visual Studio to deploy and debug**](/windows/mixed-reality/develop/advanced-concepts/using-visual-studio?tabs=hl2) to do so. You should see the Unity start screen, and then a clear display.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Creating & Adding Scripts
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
In this tutorial, you'll learn how to:
> [!NOTE] > You'll be using Unity and an ASP.NET Core web app in this tutorial, but the approach here is only to provide an example of how to share Azure Spatial Anchors identifiers across other devices. You can use other languages and back-end technologies to achieve the same goal.
+## Create a Spatial Anchors resource
+ [!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)] ## Download the sample project
spring-cloud How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-manage-user-assigned-managed-identities.md
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-enterprise-tier" - An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).-- [Azure CLI version 3.1.0 or later](/cli/azure/install-azure-cli).-- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). ::: zone-end
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-standard-tier" - An already provisioned Azure Spring Cloud instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Cloud](./quickstart.md).-- [Azure CLI version 3.1.0 or later](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). ::: zone-end
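Once the prerequisites are in place, assigning an existing user-assigned identity to an app looks roughly like the following; this is a sketch that assumes the assign command accepts the same `--user-assigned` argument shown in the remove example below, and all names are placeholders.

```azurecli
az spring-cloud app identity assign \
    --resource-group myResourceGroup \
    --service my-spring-cloud-instance \
    --name my-app \
    --user-assigned <user-identity-resource-id>
```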
az spring-cloud app identity remove \
--user-assigned <space-separated user identity resource IDs to remove>
```
+
+
## Limitations

For user-assigned managed identity limitations, see [Quotas and service plans for Azure Spring Cloud](./quotas.md).
-
## Next steps
spring-cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quotas.md
All Azure services set default limits and quotas for resources and features. A
| Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
| Inbound Public Endpoints | per Azure Spring Cloud service instance | 10 <sup>1</sup> | 10 <sup>1</sup> |
| Outbound Public IPs | per Azure Spring Cloud service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
+| User-assigned managed identities | per app instance | 20 | 20 |
<sup>1</sup> You can increase this limit via support request to a maximum of 1 per app.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
There are two main types of storage accounts for Azure Files:
| Maximum size of a file share | <ul><li>100 TiB, with large file share feature enabled<sup>2</sup></li><li>5 TiB, default</li></ul> | 100 TiB |
| Maximum number of files in a file share | No limit | No limit |
| Maximum request rate (Max IOPS) | <ul><li>20,000, with large file share feature enabled<sup>2</sup></li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (10000, 3x IOPS per GiB), up to 100,000</li></ul> |
-| Throughput (ingress + egress) for a single file share | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB) |
+| Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB) |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
| Maximum object (directories and files) name length | 2,048 characters | 2,048 characters |
| Maximum pathname component (in the path \A\B\C\D, each letter is a component) | 255 characters | 255 characters |
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
When you provision a premium file share, you specify how many GiBs your workload
| Baseline IOPS formula | `MIN(3000 + 1 * ProvisionedGiB, 100000)` |
| Burst limit | `MIN(MAX(10000, 3 * ProvisionedGiB), 100000)` |
| Burst credits | `(BurstLimit - BaselineIOPS) * 3600` |
-| Throughput rate (ingress + egress) | `100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)` |
+| Throughput rate (ingress + egress) (MiB/sec) | `100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)` |
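As a quick worked example of these formulas, consider a hypothetical share provisioned at 10,240 GiB (10 TiB): baseline IOPS = `MIN(3000 + 10240, 100000)` = 13,240; burst limit = `MIN(MAX(10000, 3 * 10240), 100000)` = 30,720; burst credits = `(30720 - 13240) * 3600` = 62,928,000; throughput = `100 + CEILING(409.6) + CEILING(614.4)` = 1,125 MiB/sec.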
The following table illustrates a few examples of these formulae for the provisioned share sizes:
stream-analytics Stream Analytics Troubleshoot Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-troubleshoot-input.md
Previously updated : 05/01/2020 Last updated : 04/08/2022
Enable resource logs to view the details of the error and the message (payload)
In cases where the message payload is greater than 32 KB or is in binary format, run the CheckMalformedEvents.cs code available in the [GitHub samples repository](https://github.com/Azure/azure-stream-analytics/tree/master/Samples/CheckMalformedEventsEH). This code reads the partition ID and offset, and prints the data located at that offset.
+Other common reasons that result in input deserialization errors are:
+1. An integer column having a value greater than 9223372036854775807 (the maximum value of a 64-bit integer).
+2. Strings instead of an array of objects or line-separated objects. Valid example: *[{'a':1}]*. Invalid example: *"'a' :1"*.
+3. Using an Event Hubs capture blob in Avro format as input in your job.
+4. Having two columns in a single input event that differ only in case. Example: *column1* and *COLUMN1*.
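To illustrate the second point, the JSON deserializer expects either an array of objects or line-separated objects. A minimal sketch of the line-separated form, with hypothetical field names:

```json
{"deviceId": "sensor-01", "temperature": 21.5}
{"deviceId": "sensor-02", "temperature": 22.0}
```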
+
## Job exceeds maximum Event Hub receivers

A best practice for using Event Hubs is to use multiple consumer groups for job scalability. The number of readers in the Stream Analytics job for a specific input affects the number of readers in a single consumer group. The precise number of receivers is based on internal implementation details for the scale-out topology logic and is not exposed externally. The number of readers can change when a job is started or during job upgrades.
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md) * [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
virtual-desktop Create Host Pools User Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-user-profile.md
Title: Azure Virtual Desktop FSLogix profile container share - Azure
description: How to set up an FSLogix profile container for an Azure Virtual Desktop host pool using a virtual machine-based file share. Previously updated : 08/20/2019 Last updated : 04/08/2022
This article will tell you how to set up a FSLogix profile container share for a
## Create a new virtual machine that will act as a file share
-When creating the virtual machine, be sure to place it on either the same virtual network as the host pool virtual machines or on a virtual network that has connectivity to the host pool virtual machines. You can create a virtual machine in multiple ways:
+When creating the virtual machine, be sure to place it on either the same virtual network as the host pool virtual machines or on a virtual network that has connectivity to the host pool virtual machines. It must also be joined to your Active Directory domain. You can create a virtual machine in multiple ways. Here are a few options:
- [Create a virtual machine from an Azure Gallery image](../virtual-machines/windows/quick-create-portal.md#create-virtual-machine)
- [Create a virtual machine from a managed image](../virtual-machines/windows/create-vm-generalized-managed.md)
- [Create a virtual machine from an unmanaged image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image)
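If you'd rather script the file share VM, a minimal Azure CLI sketch is shown below; the names, image, and network values are placeholders, and you still need to domain-join the VM afterward.

```azurecli
# Create a Windows Server VM on the host pool's virtual network (you'll be prompted for an admin password).
az vm create \
    --resource-group myResourceGroup \
    --name fslogix-fileserver \
    --image Win2019Datacenter \
    --vnet-name hostpool-vnet \
    --subnet default \
    --admin-username azureuser
```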
-After creating the virtual machine, join it to the domain by doing the following things:
-
-1. [Connect to the virtual machine](../virtual-machines/windows/quick-create-portal.md#connect-to-virtual-machine) with the credentials you provided when creating the virtual machine.
-2. On the virtual machine, launch **Control Panel** and select **System**.
-3. Select **Computer name**, select **Change settings**, and then select **Change…**
-4. Select **Domain** and then enter the Active Directory domain on the virtual network.
-5. Authenticate with a domain account that has privileges to domain-join machines.
## Prepare the virtual machine to act as a file share for user profiles

The following are general instructions about how to prepare a virtual machine to act as a file share for user profiles:
For more information about permissions, see the [FSLogix documentation](/fslogix
## Configure the FSLogix profile container
-To configure the virtual machines with the FSLogix software, do the following on each machine registered to the host pool:
+To configure FSLogix profile container, do the following on each session host registered to the host pool:
1. [Connect to the virtual machine](../virtual-machines/windows/quick-create-portal.md#connect-to-virtual-machine) with the credentials you provided when creating the virtual machine.
-2. Launch an internet browser and navigate to [this link](https://aka.ms/fslogix_download) to download the FSLogix agent.
-3. Navigate to either \\\\Win32\\Release or \\\\X64\\Release in the .zip file and run **FSLogixAppsSetup** to install the FSLogix agent. To learn more about how to install FSLogix, see [Download and install FSLogix](/fslogix/install-ht/).
-4. Navigate to **Program Files** > **FSLogix** > **Apps** to confirm the agent installed.
-5. From the start menu, run **RegEdit** as an administrator. Navigate to **Computer\\HKEY_LOCAL_MACHINE\\software\\FSLogix**.
+2. Launch an internet browser and [download the FSLogix agent](https://aka.ms/fslogix_download).
+3. Open the downloaded .zip file, navigate to either **Win32\\Release** or **x64\\Release** (depending on your operating system) and run **FSLogixAppsSetup** to install the FSLogix agent. To learn more about how to install FSLogix, see [Download and install FSLogix](/fslogix/install-ht/).
+4. Navigate to **Program Files** > **FSLogix** > **Apps** to confirm the agent installed successfully.
+5. From the start menu, run **regedit** as an administrator. Navigate to **Computer\\HKEY_LOCAL_MACHINE\\Software\\FSLogix**.
6. Create a key named **Profiles**.
-7. Create the following values for the Profiles key:
-
-| Name | Type | Data/Value |
-||--|--|
-| Enabled | DWORD | 1 |
-| VHDLocations | Multi-String Value | "Network path for file share" |
+7. Create the following values for the **Profiles** key (replacing **\\\\hostname\\share** with your real path; a scripted example follows the table):
->[!IMPORTANT]
->To help secure your Azure Virtual Desktop environment in Azure, we recommend you don't open inbound port 3389 on your VMs. Azure Virtual Desktop doesn't require an open inbound port 3389 for users to access the host pool's VMs. If you must open port 3389 for troubleshooting purposes, we recommend you use [just-in-time VM access](../security-center/security-center-just-in-time.md).
+ | Name | Type | Data/Value |
+ ||--|--|
+ | Enabled | DWORD | 1 |
+ | VHDLocations | Multi-String Value | \\\\hostname\\share |
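+
+If you'd rather script this step than use Registry Editor, the following sketch shows one way to set the same values remotely with `az vm run-command invoke`, run once per session host. It's a minimal example rather than official guidance: `myResourceGroup`, `sessionHost0`, and `\\hostname\share` are placeholders you'd replace with your own resource group, session host name, and file share path.
+
+```azurecli
+# Sketch only: myResourceGroup, sessionHost0, and \\hostname\share are placeholders.
+# Runs a short PowerShell script on the session host to create the FSLogix Profiles key and values.
+az vm run-command invoke \
+  --resource-group myResourceGroup \
+  --name sessionHost0 \
+  --command-id RunPowerShellScript \
+  --scripts 'New-Item -Path "HKLM:\SOFTWARE\FSLogix\Profiles" -Force' \
+            'New-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" -Name Enabled -PropertyType DWord -Value 1 -Force' \
+            'New-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" -Name VHDLocations -PropertyType MultiString -Value "\\hostname\share" -Force'
+```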
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Here's how to create a Conditional Access policy that requires multifactor authe
- If you're using Azure Virtual Desktop (classic), choose these apps:
- - **Azure Virtual Desktop** (App ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7)
- - **Azure Virtual Desktop Client** (App ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client
+ - **Windows Virtual Desktop** (App ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7)
+ - **Windows Virtual Desktop Client** (App ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client
After that, skip ahead to step 11.
Here's how to create a Conditional Access policy that requires multifactor authe
> > If you're using Azure Virtual Desktop (classic), if the Conditional Access policy blocks all access and only excludes Azure Virtual Desktop app IDs, you can fix this by adding the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
-10. Go to **Conditions** > **Client apps**. In **Configure**, select **Yes**, and then select where to apply the policy:
+10. Once you've selected your app, choose **Select**, and then select **Done**.
+
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot of the Cloud apps or actions page. The Azure Virtual Desktop and Azure Virtual Desktop Client apps are highlighted in red.](media/cloud-apps-enterprise.png)
+
+ >[!NOTE]
+ >To find the App ID of the app you want to select, go to **Enterprise Applications** and select **Microsoft Applications** from the application type drop-down menu.
+
+11. Go to **Conditions** > **Client apps**. In **Configure**, select **Yes**, and then select where to apply the policy:
- Select **Browser** if you want the policy to apply to the web client. - Select **Mobile apps and desktop clients** if you want to apply the policy to other clients.
Here's how to create a Conditional Access policy that requires multifactor authe
> [!div class="mx-imgBorder"] > ![A screenshot of the Client apps page. The user has selected the mobile apps and desktop clients check box.](media/select-apply.png)
-11. Once you've selected your app, choose **Select**, and then select **Done**.
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Cloud apps or actions page. The Azure Virtual Desktop and Azure Virtual Desktop Client apps are highlighted in red.](media/cloud-apps-enterprise.png)
-
- >[!NOTE]
- >To find the App ID of the app you want to select, go to **Enterprise Applications** and select **Microsoft Applications** from the application type drop-down menu.
- 12. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and then **Select**. 13. Under **Access controls** > **Session**, select **Sign-in frequency**, set the value to the time you want between prompts, and then select **Select**. For example, setting the value to **1** and the unit to **Hours** will require multifactor authentication if a connection is launched an hour after the last one. 14. Confirm your settings and set **Enable policy** to **On**.
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
>[!NOTE] >Media optimization for Microsoft Teams is only available for the following two clients: >
->- Windows Desktop and client on Windows 10 machines. Windows Desktop client version 1.2.1026.0 or later.
+>- Windows Desktop and client on Windows 10/11 machines. Windows Desktop client version 1.2.1026.0 or later.
>- macOS Remote Desktop client, version 10.7.7 or later (preview) > [!IMPORTANT]
Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do t
- [Prepare your network](/microsoftteams/prepare-network/) for Microsoft Teams. - Install the [Remote Desktop client](./user-documentation/connect-windows-7-10.md) on a Windows 10 or Windows 10 IoT Enterprise device that meets the Microsoft Teams [hardware requirements for Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).-- Connect to a Windows 10 Multi-session or Windows 10 Enterprise virtual machine (VM).
+- Connect to a Windows 10/11 Multi-session or Windows 10/11 Enterprise virtual machine (VM).
## Install the Teams desktop app
-This section will show you how to install the Teams desktop app on your Windows 10 Multi-session or Windows 10 Enterprise VM image. To learn more, check out [Install or update the Teams desktop app on VDI](/microsoftteams/teams-for-vdi#install-or-update-the-teams-desktop-app-on-vdi).
+This section will show you how to install the Teams desktop app on your Windows 10/11 Multi-session or Windows 10/11 Enterprise VM image. To learn more, check out [Install or update the Teams desktop app on VDI](/microsoftteams/teams-for-vdi#install-or-update-the-teams-desktop-app-on-vdi).
### Prepare your image for Teams
virtual-machines Disks Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools.md
Disk pools are currently available in the following regions:
- Canada Central - Central US - East US
+- East US 2
- West US 2 - Japan East - North Europe - West Europe - Southeast Asia - UK South
+- Korea Central
+- Sweden Central
+- Central India
## Billing
virtual-machines Tutorial Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-custom-images.md
If you choose to install and use the CLI locally, this tutorial requires that yo
## Overview
-an [Azure Compute Gallery](../shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
+An [Azure Compute Gallery](../shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
The Azure Compute Gallery lets you share your custom VM images with others. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with.
Copy the ID of the image definition from the output to use later.
## Create the image version
-Create an image version from the VM using [az image gallery create-image-version](/cli/azure/sig/image-version#az-sig-image-version-create).
+Create an image version from the VM using [az sig image-version create](/cli/azure/sig/image-version#az-sig-image-version-create).
Allowed characters for image version are numbers and periods. Numbers must be within the range of a 32-bit integer. Format: *MajorVersion*.*MinorVersion*.*Patch*.
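If you're scripting this step, here's a minimal sketch. The gallery resource names match the ones used earlier in this tutorial, and the source value is the resource ID of the VM you're capturing; depending on your Azure CLI version the source parameter may instead be `--virtual-machine`, so confirm with `az sig image-version create --help`.

```azurecli
# Sketch only: resource names follow this tutorial's examples; replace the subscription ID, resource group, and VM name with your own.
az sig image-version create \
   --resource-group myGalleryRG \
   --gallery-name myGallery \
   --gallery-image-definition myImageDefinition \
   --gallery-image-version 1.0.0 \
   --managed-image "/subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
```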
In this example, we are creating a VM from the latest version of the *myImageDef
```azurecli az group create --name myResourceGroup --location eastus az vm create --resource-group myResourceGroup \
- --name myVM \
+ --name myVM2 \
--image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \ --specialized ```
In this tutorial, you created a custom VM image. You learned how to:
Advance to the next tutorial to learn about highly available virtual machines. > [!div class="nextstepaction"]
-> [Create highly available VMs](tutorial-availability-sets.md)
+> [Create highly available VMs](tutorial-availability-sets.md)
virtual-machines N Series Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/n-series-migration.md
In general, NC-Series customers should consider moving directly across from NC s
| Current VM Size | Target VM Size | Difference in Specification | ||||
-Standard_NC6 <br> Standard_NC6_Promo | Standard_NC4as_T4_v3 <br>or<br>Standard_NC8as_T4 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (same)< br>GPU generation: NVIDIA Keppler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 4 (-2) or 8 (+2)<br>Memory GiB: 16 (-40) or 56 (same)<br>Temp Storage (SSD) GiB: 180 (-160) or 360 (+20)<br>Max data disks: 8 (-4) or 16 (+4)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)|
+Standard_NC6 <br> Standard_NC6_Promo | Standard_NC4as_T4_v3 <br>or<br>Standard_NC8as_T4_v3 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 4 (-2) or 8 (+2)<br>Memory GiB: 16 (-40) or 56 (same)<br>Temp Storage (SSD) GiB: 180 (-160) or 360 (+20)<br>Max data disks: 8 (-4) or 16 (+4)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)|
| Standard_NC12<br>Standard_NC12_Promo | Standard_NC16as_T4_v3 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (-1)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 16 (+4)<br>Memory GiB: 110 (-2)<br>Temp Storage (SSD) GiB: 360 (-320)<br>Max data disks: 48 (+16)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) | | Standard_NC24<br>Standard_NC24_Promo | Standard_NC64as_T4_v3* | CPU: Intel Haswell vs AMD Rome<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 64 (+40)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2880 (+1440)<br>Max data disks: 32 (-32)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) | |Standard_NC24r<br>Standard_NC24r_Promo<br><br>(InfiniBand clustering-enabled sizes) | Standard_NC24rs_v3* | CPU: Intel Haswell vs Intel Broadwell<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Kepler vs. Volta (+2 generations)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 24 (+0)<br>Memory GiB: 448 (+224)<br>Temp Storage (SSD) GiB: 2948 (+1440)<br>Max data disks: 32 (same)<br>Accelerated Networking: No (Same)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: Yes |
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
ms.devlang: na
na Previously updated : 02/15/2022 Last updated : 03/24/2022
From the Bash shell, enter `uname -r` and confirm that the kernel version is one
* **Ubuntu 16.04**: 4.11.0-1013 * **SLES SP3**: 4.4.92-6.18
-* **RHEL**: 3.10.0-693
+* **RHEL**: 3.10.0-693, 2.6.32-573*
* **CentOS**: 3.10.0-693
+> [!NOTE]
+> Other kernel versions may be supported. For the most up-to-date list, reference the compatibility tables for each distribution at [Supported Linux and FreeBSD virtual machines for Hyper-V](/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows) and confirm that SR-IOV is supported. Additional details can be found in the release notes for the [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). * RHEL 6.7-6.10 are supported if the Mellanox VF version 4.5+ is installed before Linux Integration Services 4.3+.
Confirm that the Mellanox VF device is exposed to the VM with the `lspci` command. The returned output is similar to the following output:
virtual-wan Scenario 365 Expressroute Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-365-expressroute-private.md
+
+ Title: 'Scenario: Connect to Microsoft 365 using ExpressRoute private peering'
+
+description: Learn about how to connect to Microsoft 365 through Virtual WAN using ExpressRoute private peering.
+++++ Last updated : 04/08/2022+++
+# Scenario: Connect to Microsoft 365 via Virtual WAN using ExpressRoute private peering
+
+This article walks you through a solution that uses Azure Virtual WAN and ExpressRoute private peering to create a connection to Microsoft 365. A few examples of when you might want to use this type of connection are:
+
+* The customer is in an area where internet isn't available.
+* The customer is in a highly regulated environment.
+
+In many cases, Microsoft doesn't recommend using Azure ExpressRoute with Microsoft peering to connect to Microsoft 365. When you aren't using Azure Virtual WAN, the most common concerns with that approach are:
+
+* Implementing Azure ExpressRoute can be complex when it comes to routing.
+* The peering requires the use of public IP addresses.
+* ExpressRoute normally runs counter to Microsoft's edge distribution policy for Microsoft 365 traffic.
+* Egress traffic can have significant cost implications.
+* Cost and scalability typically don't compare favorably with premium internet connections.
+
+While you can request that your subscription be allowlisted to use Microsoft 365 via Azure ExpressRoute, this still doesn't remove the limitations and implications listed above.
+
+When you use Virtual WAN and ExpressRoute with private peering for your solution, you can keep costs down and build in redundancy. The solution in this article uses the default behavior of the Microsoft global network in combination with Microsoft services. Microsoft service traffic is always transported on the Microsoft global network. For more information, see [Microsoft global network](../networking/microsoft-global-network.md).
+
+The technologies engaged in this solution are:
+
+* Azure ExpressRoute Local.
+* Azure Virtual WAN secured virtual hub.
+* A firewall appliance that's capable of service-based routing.
+
+## <a name="architecture"></a>Architecture
+
+The architecture is simple. However, when using this Virtual WAN solution, the ExpressRoute circuit isn't geo-redundant because it's only deployed in one edge co-location. To establish the necessary geo-redundancy, you must build additional ExpressRoute circuits.
+
+**Figure 1**
++
+## <a name="workflow"></a>Workflow
+
+### 1. Deploy a virtual hub with Azure Firewall
+
+First, deploy an Azure Virtual WAN hub with Azure Firewall. Azure Firewall is used not only to make the solution secure, but also as an Internet access point. For steps, see [Install Azure Firewall in a Virtual WAN hub](howto-firewall.md).
+
+**Figure 2**
++
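+
+As a rough illustration of this step, the sketch below creates a Virtual WAN, a virtual hub, and a hub firewall with the Azure CLI. Treat it as an assumption-laden outline rather than the procedure itself: all names, the region, and the address prefix are placeholders, the `virtual-wan` and `azure-firewall` CLI extensions are required, and parameter names can differ between extension versions, so follow the linked article for the authoritative steps.
+
+```azurecli
+# Sketch only: names, location, and address prefix are placeholders.
+az extension add --name virtual-wan
+az extension add --name azure-firewall
+
+az group create --name myVwanRG --location westeurope
+
+# Create the Virtual WAN and a virtual hub inside it.
+az network vwan create --name myVirtualWan --resource-group myVwanRG \
+    --location westeurope --type Standard
+az network vhub create --name myHub --resource-group myVwanRG \
+    --vwan myVirtualWan --address-prefix 10.100.0.0/23 --location westeurope
+
+# Deploy Azure Firewall into the hub to make it a secured hub.
+# The hub-related parameters here are assumptions; verify with: az network firewall create --help
+az network firewall create --name myHubFirewall --resource-group myVwanRG \
+    --sku AZFW_Hub --virtual-hub myHub --public-ip-count 1
+```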
+### 2. Deploy ExpressRoute gateway and connections
+
+Next, deploy an Azure Virtual WAN ExpressRoute gateway into the virtual hub. Then, connect your ExpressRoute Local circuit to the gateway and enable Internet security for that ExpressRoute connection. This announces a default route (0.0.0.0/0) to your on-premises environment. For steps, see [Create ExpressRoute connections using Azure Virtual WAN](virtual-wan-expressroute-portal.md).
+
+**Figure 3**
++
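+
+Continuing the same sketch, the commands below add an ExpressRoute gateway to the hub and connect an existing circuit's private peering to it. The names are placeholders, `<circuit-private-peering-id>` is the resource ID of your own circuit's private peering, and the parameter names are assumptions that may vary by CLI version; enabling Internet security (the 0.0.0.0/0 advertisement) for the connection is covered in the linked article.
+
+```azurecli
+# Sketch only: replace <circuit-private-peering-id> with the resource ID of your circuit's private peering.
+az network express-route gateway create --name myErGateway \
+    --resource-group myVwanRG --virtual-hub myHub --location westeurope
+
+az network express-route gateway connection create --name myErConnection \
+    --resource-group myVwanRG --gateway-name myErGateway \
+    --peering "<circuit-private-peering-id>"
+```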
+### 3. Set static route
+
+From on-premises, you can now set a static route to point to the gateway. Or, you can use newer SD-WAN or firewall devices that support service-based routing to send only Microsoft 365 traffic to the secured Virtual WAN hub. For steps, see [Install Azure Firewall in a Virtual WAN hub](howto-firewall.md#configure-additional-settings).
+
+### 4. Build ExpressRoute circuit for geo-redundancy
+
+To implement a highly available architecture and improve latency for your users, you should distribute additional hubs. We suggest putting ExpressRoute circuits in different peering locations, for example Frankfurt (Germany) and Amsterdam (West Europe). For a list of ExpressRoute locations and providers, see the [ExpressRoute locations and connectivity providers](../expressroute/expressroute-locations-providers.md#global-commercial-azure) article.
+
+You have two design options for this configuration:
+
+* Option 1: Create two separate circuits connected to two separate hubs (Figure 4).
+* Option 2: Interconnect both Virtual WAN hubs (Figure 5), then disable branch-to-branch connectivity within the virtual hub properties (Figure 6). With this option, the result is a private, redundant, high-performance connection to Microsoft 365 services.
+
+**Option 1: Figure 4**
++
+**Option 2: Figure 5**
++
+**Option 2: Figure 6**
++
+## Next steps
+
+* [Azure Peering Service overview](../peering-service/about.md).
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
DRS 2.0 includes 17 rule groups, as shown in the following table. Each group con
|**[APPLICATION-ATTACK-RFI](#drs931-10)**|Protection against remote file inclusion attacks| |**[APPLICATION-ATTACK-RCE](#drs932-10)**|Protection against remote command execution| |**[APPLICATION-ATTACK-PHP](#drs933-10)**|Protect against PHP-injection attacks|
+|**[CROSS-SITE-SCRIPTING](#drs941-10)**|XSS - Cross-site Scripting|
|**[APPLICATION-ATTACK-SQLI](#drs942-10)**|Protect against SQL-injection attacks| |**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)**|Protect against session-fixation attacks| |**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)**|Protect against JAVA attacks|
+|**[MS-ThreatIntel-WebShells](#drs9905-10)**|Protect against Web shell attacks|
+|**[MS-ThreatIntel-CVEs](#drs99001-10)**|Protect against CVE attacks|
+
Front Door.
|99005002|Web Shell Interaction Attempt (POST)| |99005003|Web Shell Upload Attempt (POST) - CHOPPER PHP| |99005004|Web Shell Upload Attempt (POST) - CHOPPER ASPX|
+|99005006|Spring4Shell Interaction Attempt|
### <a name="drs9903-20"></a> MS-ThreatIntel-AppSec |RuleId|Description|
Front Door.
|RuleId|Description| ||| |99001001|Attempted F5 tmui (CVE-2020-5902) REST API Exploitation with known credentials|
+|99001014|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|99001015|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
# [DRS 1.1](#tab/drs11)
Front Door.
|99005002|Web Shell Interaction Attempt (POST)| |99005003|Web Shell Upload Attempt (POST) - CHOPPER PHP| |99005004|Web Shell Upload Attempt (POST) - CHOPPER ASPX|
+|99005006|Spring4Shell Interaction Attempt|
### <a name="drs9903-11"></a> MS-ThreatIntel-AppSec |RuleId|Description|
Front Door.
|RuleId|Description| ||| |99001001|Attempted F5 tmui (CVE-2020-5902) REST API Exploitation with known credentials|
+|99001014|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|99001015|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
# [DRS 1.0](#tab/drs10)
Front Door.
|944240|Remote Command Execution: Java serialization and Log4j vulnerability ([CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228), [CVE-2021-45046](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45046))| |944250|Remote Command Execution: Suspicious Java method detected|
+### <a name="drs9905-10"></a> MS-ThreatIntel-WebShells
+|RuleId|Description|
+|||
+|99005006|Spring4Shell Interaction Attempt|
+
+### <a name="drs99001-10"></a> MS-ThreatIntel-CVEs
+|RuleId|Description|
+|||
+|99001014|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|99001015|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
++ # [Bot rules](#tab/bot) ## <a name="bot"></a> Bot Manager rule sets
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 02/04/2022 Last updated : 04/07/2022
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |800100|Rule to help detect and mitigate log4j vulnerability [CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228), [CVE-2021-45046](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45046)|
+|800110|Spring4Shell Interaction Attempt|
+|800111|Attempted Spring Cloud routing-expression injection - [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|800112|Attempted Spring Framework unsafe class object exploitation - [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|800113|Attempted Spring Cloud Gateway Actuator injection - [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
### <a name="crs911-32"></a> REQUEST-911-METHOD-ENFORCEMENT |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |800100|Rule to help detect and mitigate log4j vulnerability [CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228), [CVE-2021-45046](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45046)|
+|800110|Spring4Shell Interaction Attempt|
+|800111|Attempted Spring Cloud routing-expression injection - [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|800112|Attempted Spring Framework unsafe class object exploitation - [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|800113|Attempted Spring Cloud Gateway Actuator injection - [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
### <a name="crs911-31"></a> REQUEST-911-METHOD-ENFORCEMENT
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |800100|Rule to help detect and mitigate log4j vulnerability [CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228), [CVE-2021-45046](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45046)|
+|800110|Spring4Shell Interaction Attempt|
+|800111|Attempted Spring Cloud routing-expression injection - [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|
+|800112|Attempted Spring Framework unsafe class object exploitation - [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|
+|800113|Attempted Spring Cloud Gateway Actuator injection - [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|
### <a name="crs911-30"></a> REQUEST-911-METHOD-ENFORCEMENT