Updates from: 04/22/2021 03:23:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-user-flow.md
After you've added the Azure AD Conditional Access policy, enable conditional ac
Multiple Conditional Access policies may apply to an individual user at any time. In this case, the most strict access control policy takes precedence. For example, if one policy requires multi-factor authentication (MFA), while the other blocks access, the user will be blocked.
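To illustrate the "strictest control wins" behavior described above, here is a minimal sketch (the control names and strictness ranking are illustrative assumptions, not the actual Azure AD evaluation engine):

```javascript
// Hypothetical sketch of "most strict access control takes precedence"
// when several Conditional Access policies apply to one sign-in.
// Control names and the ranking below are illustrative only.
const STRICTNESS = { grant: 0, mfa: 1, block: 2 };

function effectiveControl(matchedPolicies) {
  // Reduce the matched policies to the control with the highest rank.
  return matchedPolicies
    .map((p) => p.control)
    .reduce((a, b) => (STRICTNESS[a] >= STRICTNESS[b] ? a : b));
}

// One policy requires MFA while another blocks access: the user is blocked.
const result = effectiveControl([
  { name: 'Require MFA for risky sign-ins', control: 'mfa' },
  { name: 'Block legacy authentication', control: 'block' },
]);
```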
+## Conditional Access Template 1: Sign-in risk-based Conditional Access
+
+Most users have a normal behavior that can be tracked. When they fall outside of this norm, it could be risky to allow them to just sign in. You may want to block that user, or ask them to perform multi-factor authentication to prove that they are really who they say they are.
+
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](https://docs.microsoft.com/azure/active-directory/identity-protection/concept-identity-protection-risks#sign-in-risk). Please note the [limitations on Identity Protection detections for B2C](https://docs.microsoft.com/azure/active-directory-b2c/identity-protection-investigate-risk?pivots=b2c-user-flow#service-limitations-and-considerations).
+
+If risk is detected, users can perform multi-factor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
+
+Organizations should choose one of the following options to enable a sign-in risk-based Conditional Access policy requiring multi-factor authentication (MFA) when sign-in risk is medium OR high.
+
+### Enable with Conditional Access policy
+
+1. Sign in to the **Azure portal**.
+2. Browse to **Azure AD B2C** > **Security** > **Conditional Access**.
+3. Select **New policy**.
+4. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+5. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All users**.
+ 2. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 3. Select **Done**.
+6. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+7. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**:
+ 1. Select **High** and **Medium**.
+ 2. Select **Done**.
+8. Under **Access controls** > **Grant**, select **Grant access**, select **Require multi-factor authentication**, and then choose **Select**.
+9. Confirm your settings and set **Enable policy** to **On**.
+10. Select **Create** to enable your policy.
+
+### Enable with Conditional Access APIs
+
+To create a Sign-in risk-based Conditional Access policy with Conditional Access APIs, please refer to the documentation for [Conditional Access APIs](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-apis#graph-api).
+
+The following template can be used to create a Conditional Access policy with display name "CA002: Require MFA for medium+ sign-in risk" in report-only mode.
+
+```json
+{
+    "displayName": "Template 1: Require MFA for medium+ sign-in risk",
+    "state": "enabledForReportingButNotEnforced",
+    "conditions": {
+        "signInRiskLevels": [
+            "high",
+            "medium"
+        ],
+        "applications": {
+            "includeApplications": [
+                "All"
+            ]
+        },
+        "users": {
+            "includeUsers": [
+                "All"
+            ],
+            "excludeUsers": [
+                "f753047e-de31-4c74-a6fb-c38589047723"
+            ]
+        }
+    },
+    "grantControls": {
+        "operator": "OR",
+        "builtInControls": [
+            "mfa"
+        ]
+    }
+}
+```
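As a sketch of how the template above would be submitted programmatically, the helper below builds the Microsoft Graph request to create the policy. The token acquisition is out of scope and stubbed; an access token with the `Policy.ReadWrite.ConditionalAccess` permission would be required:

```javascript
// Sketch: build (but do not send) the Microsoft Graph request that
// would create the Conditional Access policy template shown above.
function buildCreatePolicyRequest(policyTemplate, accessToken) {
  return {
    method: 'POST',
    url: 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(policyTemplate),
  };
}

// "<access-token>" is a placeholder; obtain a real token via your
// preferred auth library before sending this request.
const request = buildCreatePolicyRequest(
  {
    displayName: 'Template 1: Require MFA for medium+ sign-in risk',
    state: 'enabledForReportingButNotEnforced',
  },
  '<access-token>'
);
```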
+## Enable multi-factor authentication (optional)
+
+When adding Conditional Access to a user flow, consider the use of **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are independent from Conditional Access settings. You can set MFA to **Always On** so that MFA is always required regardless of your Conditional Access setup. Or, you can set MFA to **Conditional** so that MFA is required only when an active Conditional Access policy requires it.
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-biocatch.md
+
+ Title: Tutorial to configure BioCatch with Azure Active Directory B2C
+
+description: Tutorial to configure Azure Active Directory B2C with BioCatch to identify risky and fraudulent users
+Last updated: 04/20/2021
+# Tutorial: Configure BioCatch with Azure Active Directory B2C
+
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [BioCatch](https://www.biocatch.com/) to further augment your Customer Identity and Access Management (CIAM) security posture. BioCatch analyzes a user's physical and cognitive digital behaviors to generate insights that distinguish between legitimate customers and cyber-criminals.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- [An Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+
+- A [BioCatch](https://www.biocatch.com/contact-us) account.
+
+## Scenario description
+
+BioCatch integration includes the following components:
+
+- **A web app or web service** - The user first browses to this web service. The web service instantiates a unique client session ID that is sent to BioCatch. The BioCatch SDK then immediately begins transmitting user behavior characteristics under that client session ID.
+
+- **A method** - Sends the unique client session ID to Azure AD B2C. In the provided example, JavaScript is used to input the value into a hidden HTML field.
+
+- **An Azure AD B2C customized UI** - Hides the HTML field that JavaScript populates with the client session ID, if using the above method.
+
+- **Azure AD B2C custom policy**
+
+ - Takes the custom client session ID from the UI in the form of a claim. This is achieved via a self-asserted technical profile.
+
+ - Integrates with BioCatch via a REST API claims provider and passes the client session ID to the BioCatch platform.
+
+ - Multiple custom claims are returned from BioCatch for the custom policy logic to then act upon.
+
+ - A user journey, which evaluates a returned claim, for example, session risk, and conditionally executes an action, such as invoking multi-factor authentication (MFA).
+
+![Diagram of the bio catch architecture.](media/partner-biocatch/biocatch-architecture-diagram.png)
+
+| Step | Description |
+|:|:--|
|1a | The user browses to the web service. The web service returns HTML, CSS, and JavaScript, which loads the BioCatch JavaScript SDK. Client-side JavaScript sets the client session ID for the BioCatch SDK. Alternatively, the web service can pre-configure the client session ID and send it to the client. |
+|1b | Configure the instantiated BioCatch JavaScript SDK against the BioCatch platform. Immediately begin to send user behavior characteristics to BioCatch from the client device, using the client session ID from step 1a. |
+|2 | User tries to sign-up/sign-in and is redirected to Azure AD B2C. |
|3a | Part of the user journey is a self-asserted claims provider, which takes the client session ID as input. This field is hidden on the screen. You can use JavaScript to input the session ID into the field. Select the **Next** button to continue the sign-up/sign-in process.|
|3b | The client session ID is submitted to the BioCatch platform to determine a risk score. |
|3c | BioCatch returns information about the session, such as risk score, and a recommendation on what to do: allow or block. |
|3d | The user journey has a conditional check step, which acts upon the returned claims.|
| 4 | Based on the conditional check result, an action such as *step-up MFA* is invoked.|
+|5 | At any time from when the user first hits the web service page, the web service can use the client session ID to query the BioCatch API to determine risk score and session information in real-time. |
+
+## Onboard with BioCatch
+
+Contact [BioCatch](https://www.biocatch.com/contact-us) and create an account.
+
+## Configure the custom UI
+
+We recommend hiding the client session ID field. Use CSS, JavaScript, or any other method to hide the field. For testing purposes, you can unhide the field. For example, the following JavaScript hides the input field:
+
+```javascript
+document.getElementById("clientSessionId").style.display = 'none';
+```
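Building on the snippet above, a minimal sketch of generating a client session ID and placing it into the hidden field follows. The element ID `clientSessionId` matches the example above; the ID format and `css-` prefix are illustrative assumptions, since BioCatch does not mandate a particular format:

```javascript
// Sketch: create a unique client session ID and store it in the
// hidden input that the self-asserted page submits to Azure AD B2C.
// The "css-" prefix and the ID format are illustrative assumptions.
function newClientSessionId() {
  const rand = Math.random().toString(36).slice(2, 10);
  return `css-${Date.now()}-${rand}`;
}

// `doc` is passed in so the function can be exercised outside a browser.
function setHiddenSessionField(doc, sessionId) {
  const field = doc.getElementById('clientSessionId');
  if (field) field.value = sessionId;
  return sessionId;
}
```

In a page, you would call `setHiddenSessionField(document, newClientSessionId())` before the user submits the self-asserted page.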
+
+## Configure Azure AD B2C Identity Experience Framework policies
+
+1. Configure the initial [custom policy configuration](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started).
+
+2. Create a new file, which inherits from the extensions file.
+
+ ```
+ <BasePolicy>
+
+ <TenantId>tenant.onmicrosoft.com</TenantId>
+
+ <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
+
+ </BasePolicy>
+ ```
+
+3. Create a reference to the custom UI to hide the input box, under the BuildingBlocks resource.
+
+ ```
+ <ContentDefinitions>
+
+ <ContentDefinition Id="api.selfasserted">
+
+ <LoadUri>https://domain.com/path/to/selfAsserted.cshtml</LoadUri>
+
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+
+ </ContentDefinition>
+
+ </ContentDefinitions>
+ ```
+
+4. Add the following claims under the BuildingBlocks resource.
+
+ ```
+ <ClaimsSchema>
+
+ <ClaimType Id="riskLevel">
+
+ <DisplayName>Session risk level</DisplayName>
+
+ <DataType>string</DataType>      
+
+ </ClaimType>
+
+ <ClaimType Id="score">
+
+ <DisplayName>Session risk score</DisplayName>
+
+ <DataType>int</DataType>      
+
+ </ClaimType>
+
+ <ClaimType Id="clientSessionId">
+
+ <DisplayName>The ID of the client session</DisplayName>
+
+ <DataType>string</DataType>
+
+ <UserInputType>TextBox</UserInputType>
+
+ </ClaimType>
+
+ </ClaimsSchema>
+ ```
+
+5. Configure self-asserted claims provider for the client session ID field.
+
+ ```
+ <ClaimsProvider>
+
+ <DisplayName>Client Session ID Claims Provider</DisplayName>
+
+ <TechnicalProfiles>
+
+ <TechnicalProfile Id="login-NonInteractive-clientSessionId">
+
+ <DisplayName>Client Session ID TP</DisplayName>
+
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+
+ <Metadata>
+
+ <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
+
+ </Metadata>
+
+ <CryptographicKeys>
+
+ <Key Id="issuer_secret" StorageReferenceId="B2C_1A_TokenSigningKeyContainer" />
+
+ </CryptographicKeys>
+
+ <!-- Claim we created earlier -->
+
+ <OutputClaims>
+
+ <OutputClaim ClaimTypeReferenceId="clientSessionId" Required="false" DefaultValue="100"/>
+
+ </OutputClaims>
+
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
+
+ </TechnicalProfile>
+
+ </TechnicalProfiles>
+
+ </ClaimsProvider>
+ ```
+
+6. Configure REST API claims provider for BioCatch.
+
+ ```
+ <TechnicalProfile Id="BioCatch-API-GETSCORE">
+
+ <DisplayName>Technical profile for BioCatch API to return session information</DisplayName>
+
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+
+ <Metadata>
+
+ <Item Key="ServiceUrl">https://biocatch-url.com/api/v6/score?customerID=<customerid>&amp;action=getScore&amp;uuid=<uuid>&amp;customerSessionID={clientSessionId}&amp;solution=ATO&amp;activityType=<activity_type>&amp;brand=<brand></Item>
+
+ <Item Key="SendClaimsIn">Url</Item>
+
+ <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item>
+
+ <!-- Set AuthenticationType to Basic or ClientCertificate in production environments -->
+
+ <Item Key="AuthenticationType">None</Item>
+
+ <!-- REMOVE the following line in production environments -->
+
+ <Item Key="AllowInsecureAuthInProduction">true</Item>
+
+ </Metadata>
+
+ <InputClaims>
+
+ <InputClaim ClaimTypeReferenceId="clientSessionId" />
+
+ </InputClaims>
+
+ <OutputClaims>
+
+ <OutputClaim ClaimTypeReferenceId="riskLevel" />
+
+ <OutputClaim ClaimTypeReferenceId="score" />
+
+ </OutputClaims>
+
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
+
+ </TechnicalProfile>
+ ```
+
+ > [!Note]
+ > BioCatch will provide you the URL, customer ID, and unique user ID (UUID) to configure. The clientSessionId claim is passed through as a query string parameter to BioCatch. You can choose the activity type, for example *MAKE_PAYMENT*.
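As a sketch of how that query string is assembled (all parameter values below are placeholders; BioCatch supplies the real base URL, customer ID, and UUID):

```javascript
// Sketch: assemble the getScore request URL from the technical
// profile's ServiceUrl template. All values here are placeholders.
function buildGetScoreUrl(baseUrl, params) {
  const query = new URLSearchParams({
    customerID: params.customerId,
    action: 'getScore',
    uuid: params.uuid,
    customerSessionID: params.clientSessionId,
    solution: 'ATO',
    activityType: params.activityType,
    brand: params.brand,
  });
  return `${baseUrl}/api/v6/score?${query.toString()}`;
}

const url = buildGetScoreUrl('https://biocatch-url.com', {
  customerId: 'contoso',          // placeholder customer ID
  uuid: 'user-1234',              // placeholder unique user ID
  clientSessionId: 'css-5678',    // the claim from the self-asserted page
  activityType: 'MAKE_PAYMENT',   // chosen activity type
  brand: 'contoso-retail',        // placeholder brand
});
```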
+
+7. Configure the user journey; follow the example:
+
+ 1. Get the clientSessionId as a claim.
+
+ 1. Call the BioCatch API to get the session information.
+
+ 1. If the returned claim *riskLevel* equals *LOW*, skip the MFA step; otherwise, require MFA.
+
+ ```
+ <OrchestrationStep Order="8" Type="ClaimsExchange">
+
+ <ClaimsExchanges>
+
+ <ClaimsExchange Id="clientSessionIdInput" TechnicalProfileReferenceId="login-NonInteractive-clientSessionId" />
+
+ </ClaimsExchanges>
+
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="9" Type="ClaimsExchange">
+
+ <ClaimsExchanges>
+
+ <ClaimsExchange Id="BcGetScore" TechnicalProfileReferenceId="BioCatch-API-GETSCORE" />
+
+ </ClaimsExchanges>
+
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="10" Type="ClaimsExchange">
+
+ <Preconditions>
+
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
+
+ <Value>riskLevel</Value>
+
+ <Value>LOW</Value>
+
+ <Action>SkipThisOrchestrationStep</Action>
+
+ </Precondition>
+
+ </Preconditions>
+
+ <ClaimsExchanges>
+
+ <ClaimsExchange Id="PhoneFactor-Verify" TechnicalProfileReferenceId="PhoneFactor-InputOrVerify" />
+
+ </ClaimsExchanges>
+
+ </OrchestrationStep>
+
+ ```
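The precondition in orchestration step 10 can be paraphrased as follows (a sketch; the function name is invented, the claim name and value match the policy above):

```javascript
// Sketch of orchestration step 10's precondition: ClaimEquals with
// ExecuteActionsIf="true" skips the MFA exchange when riskLevel is "LOW".
function shouldInvokeMfa(claims) {
  return claims.riskLevel !== 'LOW';
}
```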
+
+8. Configure the relying party (optional)
+
+ It is useful to pass the BioCatch-returned information to your application as claims in the token, specifically *riskLevel* and *score*.
+
+ ```
+ <RelyingParty>
+
+ <DefaultUserJourney ReferenceId="SignUpOrSignInMfa" />
+
+ <UserJourneyBehaviors>
+
+ <SingleSignOn Scope="Tenant" KeepAliveInDays="30" />
+
+ <SessionExpiryType>Absolute</SessionExpiryType>
+
+ <SessionExpiryInSeconds>1200</SessionExpiryInSeconds>
+
+ <ScriptExecution>Allow</ScriptExecution>
+
+ </UserJourneyBehaviors>
+
+ <TechnicalProfile Id="PolicyProfile">
+
+ <DisplayName>PolicyProfile</DisplayName>
+
+ <Protocol Name="OpenIdConnect" />
+
+ <OutputClaims>
+
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+
+ <OutputClaim ClaimTypeReferenceId="surname" />
+
+ <OutputClaim ClaimTypeReferenceId="email" />
+
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />                
+
+ <OutputClaim ClaimTypeReferenceId="riskLevel" />
+
+ <OutputClaim ClaimTypeReferenceId="score" />
+
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+
+ </OutputClaims>
+
+ <SubjectNamingInfo ClaimType="sub" />
+
+ </TechnicalProfile>
+
+ </RelyingParty>
+
+ ```
+
+## Integrate with Azure AD B2C
+
+Follow these steps to add the policy files to Azure AD B2C:
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+
+3. Choose **All services** in the top-left corner of the Azure portal, then search for and select **Azure AD B2C**.
+
+4. Navigate to **Azure AD B2C** > **Identity Experience Framework**.
+
+5. Upload all the policy files to your tenant.
+
+## Test the solution
+
+1. [Register a dummy application, which redirects to JWT.MS](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-register-applications?tabs=app-reg-ga)
+
+2. Under the **Identity Experience Framework**, select the policy you created
+
+3. In the policy window, select the dummy JWT.MS application, and select **run now**
+
+4. Go through the sign-up flow and create an account. The token returned to JWT.MS should contain two new claims, riskLevel and score. Follow the example.
+
+ ```
+ {
+
+ "typ": "JWT",
+
+ "alg": "RS256",
+
+ "kid": "_keyid"
+
+ }.{
+
+ "exp": 1615872580,
+
+ "nbf": 1615868980,
+
+ "ver": "1.0",
+
+ "iss": "https://tenant.b2clogin.com/12345678-1234-1234-1234-123456789012/v2.0/",
+
+ "sub": "12345678-1234-1234-1234-123456789012",
+
+ "aud": "12345678-1234-1234-1234-123456789012",
+
+ "acr": "b2c_1a_signup_signin_biocatch_policy",
+
+ "nonce": "defaultNonce",
+
+ "iat": 1615868980,
+
+ "auth_time": 1615868980,
+
+ "name": "John Smith",
+
+ "email": "john.smith@contoso.com",
+
+ "given_name": "John",
+
+ "family_name": "Smith",
+
+ "riskLevel": "LOW",
+
+ "score": 275,
+
+ "tid": "12345678-1234-1234-1234-123456789012"
+
+ }.[Signature]
+
+ ```
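To spot-check the two BioCatch claims in a returned token without a JWT library, the payload segment can be base64url-decoded (a sketch for inspection only; it performs no signature validation):

```javascript
// Sketch: decode a JWT's payload segment to inspect claims such as
// riskLevel and score. No signature validation is performed here.
function decodeJwtPayload(token) {
  const payload = token.split('.')[1];
  const json = Buffer.from(payload, 'base64url').toString('utf8');
  return JSON.parse(json);
}
```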
+
+## Additional resources
+
+- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for role-based access control.
| ![Screenshot of an n8identity logo](./medi) is an Identity-as-a-Service governance platform that provides a solution to address customer account migration and Customer Service Requests (CSR) administration running on Microsoft Azure. | | ![Screenshot of a Saviynt logo](./medi) cloud-native platform promotes better security, compliance, and governance through intelligent analytics and cross-application integration for streamlining IT modernization. |
-## Security
+## Secure hybrid access to on-premises applications
-Microsoft partners with the following ISVs for security.
+Microsoft partners with the following ISVs to provide secure hybrid access to on-premises applications.
| ISV partner | Description and integration walkthroughs | |:-|:--|
-| ![Screenshot of a Arkose lab logo](./medi) is a fraud prevention solution provider that helps organizations protect against bot attacks, account takeover attacks, and fraudulent account openings. |
-| ![Screenshot of a Microsoft Dynamics 365 logo](./medi) is a solution that helps organizations protect against fraudulent account openings through device fingerprinting. |
| ![Screenshot of a Ping logo](./medi) enables secure hybrid access to on-premises legacy applications across multiple clouds. | | ![Screenshot of a strata logo](./medi) provides secure hybrid access to on-premises applications by enforcing consistent access policies, keeping identities in sync, and making it simple to transition applications from legacy identity systems to standards-based authentication and access control provided by Azure AD B2C. | | ![Screenshot of a zscaler logo](./medi) delivers policy-based, secure access to private applications and assets without the cost, hassle, or security risks of a VPN. |
+## Fraud protection
+
+Microsoft partners with the following ISVs for fraud detection and prevention.
+
+| ISV partner | Description and integration walkthroughs |
+|:-|:--|
+| ![Screenshot of an Arkose Labs logo](./medi) is a fraud prevention solution provider that helps organizations protect against bot attacks, account takeover attacks, and fraudulent account openings. |
+| ![Screenshot of a BioCatch logo](./medi) is a fraud prevention solution provider that analyzes a user's physical and cognitive digital behaviors to generate insights that distinguish between legitimate customers and cyber-criminals. |
+| ![Screenshot of a Microsoft Dynamics 365 logo](./medi) is a solution that helps organizations protect against fraudulent account openings through device fingerprinting. |
+
+## Additional information
+
+- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/secure-rest-api.md
Previously updated : 04/19/2021 Last updated : 04/21/2021
You can obtain an access token in one of several ways: by obtaining it [from a f
The following example uses a REST API technical profile to make a request to the Azure AD token endpoint using the client credentials passed as HTTP basic authentication. For more information, see [Microsoft identity platform and the OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-To acquire an Azure AD access token, create an application in your Azure AD tenant:
+Before the technical profile can interact with Azure AD to obtain an access token, you need to register an application. Azure AD B2C relies on the Azure AD platform. You can create the app in your Azure AD B2C tenant, or in any Azure AD tenant you manage. To register an application:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD tenant.
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD, or Azure AD B2C tenant.
1. In the left menu, select **Azure Active Directory**. Or, select **All services** and search for and select **Azure Active Directory**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *Client_Credentials_Auth_app*.
To acquire an Azure AD access token, create an application in your Azure AD tena
For a client credentials flow, you need to create an application secret. The client secret is also known as an application password. The secret will be used by your application to acquire an access token.
-1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *Client_Credentials_Auth_app*.
+1. In the **Azure AD - App registrations** page, select the application you created, for example *Client_Credentials_Auth_app*.
1. In the left menu, under **Manage**, select **Certificates & secrets**. 1. Select **New client secret**. 1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
You need to store the client ID and the client secret that you previously record
7. Enter a **Name** for the policy key, `SecureRESTClientId`. The prefix `B2C_1A_` is added automatically to the name of your key. 8. In **Secret**, enter your client ID that you previously recorded. 9. For **Key usage**, select `Signature`.
-10. Click **Create**.
+10. Select **Create**.
11. Create another policy key with the following settings: - **Name**: `SecureRESTClientSecret`. - **Secret**: enter your client secret that you previously recorded
You need to store the client ID and the client secret that you previously record
For the ServiceUrl, replace your-tenant-name with the name of your Azure AD tenant. See the [RESTful technical profile](restful-technical-profile.md) reference for all options available. ```xml
-<TechnicalProfile Id="SecureREST-AccessToken">
+<TechnicalProfile Id="REST-AcquireAccessToken">
<DisplayName></DisplayName> <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <Metadata>
To support bearer token authentication in your custom policy, modify the REST AP
```xml <Item Key="AuthenticationType">Bearer</Item> ```
-1. Change or add the *UseClaimAsBearerToken* to *bearerToken*, as follows. The *bearerToken* is the name of the claim that the bearer token will be retrieved from (the output claim from `SecureREST-AccessToken`).
+1. Change or add the *UseClaimAsBearerToken* to *bearerToken*, as follows. The *bearerToken* is the name of the claim that the bearer token will be retrieved from (the output claim from `REST-AcquireAccessToken`).
```xml <Item Key="UseClaimAsBearerToken">bearerToken</Item>
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 11/11/2020 Last updated : 04/21/2021
To use passwordless phone sign-in with the Microsoft Authenticator app, the foll
- Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. - Latest version of Microsoft Authenticator installed on devices running iOS 8.0 or greater, or Android 6.0 or greater.
+- The device on which the Microsoft Authenticator app is installed must be registered within the Azure AD tenant to an individual user.
> [!NOTE]
> If you enabled Microsoft Authenticator passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supersedes the PowerShell policy. We recommend you enable it for all users in your tenant via the new *Authentication Methods* menu; otherwise, users not in the new policy will no longer be able to sign in without a password.
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Previously updated : 02/22/2021 Last updated : 04/21/2021
This document focuses on enabling security key based passwordless authentication
- Compatible [FIDO2 security keys](concept-authentication-passwordless.md#fido2-security-keys) - WebAuthN requires Windows 10 version 1903 or higher**
-To use security keys for logging in to web apps and services, you must have a browser that supports the WebAuthN protocol. These include Microsoft Edge, Chrome, Firefox, and Safari.
+To use security keys for logging in to web apps and services, you must have a browser that supports the WebAuthN protocol.
+These include Microsoft Edge, Chrome, Firefox, and Safari.
+ ## Prepare devices
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr.md
Previously updated : 03/25/2021 Last updated : 04/21/2021
In this tutorial you learn how to:
To finish this tutorial, you need the following resources and privileges:
-* A working Azure AD tenant with at least an Azure AD free or trial license enabled. In the Free tier, SSPR only works for cloud users in Azure AD.
+* A working Azure AD tenant with at least an Azure AD free or trial license enabled. In the Free tier, SSPR only works for cloud users in Azure AD. Password change is supported in the Free tier, but password reset is not.
* For later tutorials in this series, you'll need an Azure AD Premium P1 or trial license for on-premises password writeback. * If needed, [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An account with *Global Administrator* privileges.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/block-legacy-authentication.md
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy protocols don't support multi-factor authentication (MFA). MFA is in many environments a common requirement to address identity theft.
-Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what additional tools Microsoft provides to accomplish this task:
+Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what other tools Microsoft provides to accomplish this task:
> For MFA to be effective, you also need to block legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, making them preferred entry points for adversaries attacking your organization... >
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020
> - Azure AD accounts in organizations that have disabled legacy authentication experience 67 percent fewer compromises than those where legacy authentication is enabled >
-If your environment is ready to block legacy authentication to improve your tenant's protection, you can accomplish this goal with Conditional Access. This article explains how you can configure Conditional Access policies that block legacy authentication for your tenant.
+If your environment is ready to block legacy authentication to improve your tenant's protection, you can accomplish this goal with Conditional Access. This article explains how you can configure Conditional Access policies that block legacy authentication for your tenant. Customers without licenses that include Conditional Access can make use of [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to block legacy authentication.
## Prerequisites
Azure AD supports several of the most widely used authentication and authorizati
- Older Microsoft Office apps - Apps using mail protocols like POP, IMAP, and SMTP
-Single factor authentication (for example, username and password) is not enough these days. Passwords are bad as they are easy to guess and we (humans) are bad at choosing good passwords. Passwords are also vulnerable to a variety of attacks like phishing and password spray. One of the easiest things you can do to protect against password threats is to implement multi-factor authentication (MFA). With MFA, even if an attacker gets in possession of a user's password, the password alone is not sufficient to successfully authenticate and access the data.
+Single factor authentication (for example, username and password) is not enough these days. Passwords are bad as they are easy to guess and we (humans) are bad at choosing good passwords. Passwords are also vulnerable to various attacks, like phishing and password spray. One of the easiest things you can do to protect against password threats is to implement multi-factor authentication (MFA). With MFA, even if an attacker gets in possession of a user's password, the password alone is not sufficient to successfully authenticate and access the data.
How can you prevent apps using legacy authentication from accessing your tenant's resources? The recommendation is to just block them with a Conditional Access policy. If necessary, you allow only certain users and specific network locations to use apps that are based on legacy authentication.
Before you can block legacy authentication in your directory, you need to first
1. **Add filters** > **Client App** > select all of the legacy authentication protocols. Select outside the filtering dialog box to apply your selections and close the dialog box. 1. If you have activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
-Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you additional details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used.
+Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you more details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used.
These logs will indicate which users are still depending on legacy authentication and which applications are using legacy protocols to make authentication requests. For users that do not appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-create-service-principal-portal.md
If you choose not to use a certificate, you can create a new application secret.
![Copy the secret value because you can't retrieve this later](./media/howto-create-service-principal-portal/copy-secret.png) ## Configure access policies on resources
-Keep in mind, you might need to configure additional permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/security-overview.md#privileged-access) to give your application access to keys, secrets, or certificates.
+Keep in mind, you might need to configure additional permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/security-features.md#privileged-access) to give your application access to keys, secrets, or certificates.
1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, navigate to your key vault and select **Access policies**. 1. Select **Add access policy**, then select the key, secret, and certificate permissions you want to grant your application. Select the service principal you created previously.
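The portal steps above can also be scripted with the Azure CLI. A sketch using `az keyvault set-policy`, where the vault name and the service principal's client ID are placeholders for your own values:

```azurecli-interactive
# Grant the service principal read access to secrets in the vault
az keyvault set-policy \
  --name myKeyVault \
  --spn <your-app-client-id> \
  --secret-permissions get list
```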
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration.md
If you are already familiar with the Azure AD for developers (v1.0) endpoint (an
However, you still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services). For more information, see [ADFS support](https://aka.ms/msal-net-adfs-support).
-The following picture summarizes some of the differences between ADAL.NET and MSAL.NET
+The following picture summarizes some of the differences between ADAL.NET and MSAL.NET for a public client application
![Side-by-side code](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png) ### NuGet packages and Namespaces
Here are the grants supported in ADAL.NET and MSAL.NET for Desktop and Mobile ap
Grant | ADAL.NET | MSAL.NET -- |-- | --
-Interactive | [Interactive Auth](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-interactivelyPublic-client-application-flows) | [Acquiring tokens interactively in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Acquiring-tokens-interactively)
-Integrated Windows Authentication | [Integrated authentication on Windows (Kerberos)](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-Integrated-authentication-on-Windows-(Kerberos)) | [Integrated Windows Authentication](msal-authentication-flows.md#integrated-windows-authentication)
-Username / Password | [Acquiring tokens with username and password](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-username-and-password)| [Username Password Authentication](msal-authentication-flows.md#usernamepassword)
-Device code flow | [Device profile for devices without web browsers](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Device-profile-for-devices-without-web-browsers) | [Device Code flow](msal-authentication-flows.md#device-code)
+Interactive | [Interactive Auth](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-interactivelyPublic-client-application-flows) | [Acquiring tokens interactively in MSAL.NET](scenario-desktop-acquire-token.md?tabs=dotnet#acquire-a-token-interactively)
+Integrated Windows Authentication | [Integrated authentication on Windows (Kerberos)](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-Integrated-authentication-on-Windows-(Kerberos)) | [Integrated Windows Authentication](scenario-desktop-acquire-token.md?tabs=dotnet#integrated-windows-authentication)
+Username / Password | [Acquiring tokens with username and password](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-username-and-password)| [Username Password Authentication](scenario-desktop-acquire-token.md?tabs=dotnet#username-and-password)
+Device code flow | [Device profile for devices without web browsers](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Device-profile-for-devices-without-web-browsers) | [Device Code flow](scenario-desktop-acquire-token.md?tabs=dotnet#command-line-tool-without-a-web-browser)
#### Confidential client applications
-Here are the grants supported in ADAL.NET and MSAL.NET for web applications, web APIs, and daemon applications:
+Here are the grants supported in ADAL.NET, MSAL.NET, and Microsoft.Identity.Web for web applications, web APIs, and daemon applications:
Type of App | Grant | ADAL.NET | MSAL.NET -- | -- | -- | --
-Web app, web API, daemon | Client Credentials | [Client credential flows in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Client-credential-flows) | [Client credential flows in MSAL.NET](msal-authentication-flows.md#client-credentials)
-Web API | On behalf of | [Service to service calls on behalf of the user with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Service-to-service-calls-on-behalf-of-the-user) | [On behalf of in MSAL.NET](msal-authentication-flows.md#on-behalf-of)
-Web app | Auth Code | [Acquiring tokens with authorization codes on web apps with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-authorization-codes-on-web-apps) | [Acquiring tokens with authorization codes on web apps with A MSAL.NET](msal-authentication-flows.md#authorization-code)
+Web app, web API, daemon | Client Credentials | [Client credential flows in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Client-credential-flows) | [Client credential flows in MSAL.NET](scenario-daemon-acquire-token.md?tabs=dotnet#acquiretokenforclient-api)
+Web API | On behalf of | [Service to service calls on behalf of the user with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Service-to-service-calls-on-behalf-of-the-user) | [On behalf of in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/on-behalf-of)
+Web app | Auth Code | [Acquiring tokens with authorization codes on web apps with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-authorization-codes-on-web-apps) | [Acquiring tokens with authorization codes on web apps with MSAL.NET](scenario-web-app-call-api-acquire-token.md?tabs=aspnetcore)
### Cache persistence
active-directory Quickstart V2 Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-angular.md
In this quickstart, you download and run a code sample that demonstrates how an
## Prerequisites
-* Azure subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Node.js](https://nodejs.org/en/download/). * [Visual Studio Code](https://code.visualstudio.com/download) to edit project files, or [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) to run the project. > [!div renderon="docs"]
+>
> ## Register and download the quickstart app
+>
> To start the quickstart app, use either of the following options. > > ### Option 1 (express): Register and automatically configure the app, and then download the code sample > > 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience. > 1. Enter a name for your application, and then select **Register**.
-> 1. Go to the quickstart pane and view the Angular quickstart. Follow the instructions to download and automatically configure your new application.
+> 1. On the quickstart pane, find the Angular quickstart. Follow the instructions to download and automatically configure your new application.
> > ### Option 2 (manual): Register and manually configure the application and code sample > > #### Step 1: Register the application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
> 1. Follow the instructions to [register a single-page application](./scenario-spa-app-registration.md) in the Azure portal. > 1. Add a new platform on the **Authentication** pane of your app registration and register the redirect URI: `http://localhost:4200/`.
-> 1. This quickstart uses the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app signs in users and calls an API.
+> 1. This quickstart uses the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app signs users in and calls an API.
> [!div class="sxs-lookup" renderon="portal"]
+>
> #### Step 1: Configure the application in the Azure portal
-> For the code sample in this quickstart to work, you need to add a redirect URI as **http://localhost:4200/** and enable **Implicit grant**.
+>
+> For the code sample in this quickstart to work, you need to add a redirect URI to `http://localhost:4200/` and enable **Implicit grant**.
> > [!div renderon="portal" id="makechanges" class="nextstepaction"] > > [Make these changes for me]() >
In this quickstart, you download and run a code sample that demonstrates how an
#### Step 2: Download the code sample >[!div renderon="docs"]
->To run the project with a web server by using Node.js, [clone the sample repository](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular) or [download the core project files](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular/archive/master.zip). Open the files by using an editor such as Visual Studio Code.
+>To run the project with a web server by using Node.js, clone the [sample repository](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular) or [download the core project files](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular/archive/master.zip). Open the files in an editor such as Visual Studio Code.
> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular/archive/master.zip)
In this quickstart, you download and run a code sample that demonstrates how an
> > [!NOTE] > > Enter_the_Supported_Account_Info_Here - > [!div renderon="docs"] > > Replace these values: > >|Value name|Description| >|||
->|Enter_the_Application_Id_Here|On the **Overview** page of your application registration, this is your **Application(client) ID** value. |
->|Enter_the_Cloud_Instance_Id_Here|This is the instance of the Azure cloud. For the main or global Azure cloud, enter **https://login.microsoftonline.com**. For national clouds (for example, China), see [National clouds](./authentication-national-cloud.md).|
->|Enter_the_Tenant_Info_Here| Set to one of the following options: If your application supports *accounts in this organizational directory*, replace this value with the directory (tenant) ID or tenant name (for example, **contoso.microsoft.com**). If your application supports *accounts in any organizational directory*, replace this value with **organizations**. If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**. |
->|Enter_the_Redirect_Uri_Here|Replace with **http://localhost:4200**.|
->|cacheLocation | (Optional) Set the browser storage for the authentication state. The default is **sessionStorage**. |
->|storeAuthStateInCookie | (Optional) Identify the library that stores the authentication request state. This state is required to validate the authentication flows in the browser cookies. This cookie is set for Internet Explorer and Edge to accommodate those two browsers. For more details, see the [known issues](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues->on-IE-and-Edge-Browser#issues). |
+>|Enter_the_Application_Id_Here|On the **Overview** page of your application registration, this is your **Application (client) ID** value. |
+>|Enter_the_Cloud_Instance_Id_Here|This is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For national clouds (for example, China), see [National clouds](./authentication-national-cloud.md).|
+>|Enter_the_Tenant_Info_Here| Set to one of the following options: If your application supports *accounts in this organizational directory*, replace this value with the directory (tenant) ID or tenant name (for example, `contoso.microsoft.com`). If your application supports *accounts in any organizational directory*, replace this value with `organizations`. If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`. |
+>|Enter_the_Redirect_Uri_Here|Replace with `http://localhost:4200`.|
+>|cacheLocation | (Optional) Set the browser storage for the authentication state. The default is `sessionStorage`. |
+>|storeAuthStateInCookie | (Optional) Identify the library that stores the authentication request state. This state is required to validate the authentication flows in the browser cookies. This cookie is set for Internet Explorer and Microsoft Edge to accommodate those two browsers. For more details, see the [known issues](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues). |
> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app's **Overview** page in the Azure portal.
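Once the placeholders are replaced, the relevant part of the configuration might look like the following sketch. All values here are hypothetical examples, not real IDs from any registration.

```typescript
// Hypothetical example values -- substitute the IDs from your own
// app registration's Overview page.
const msalConfig = {
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555", // Application (client) ID
    authority: "https://login.microsoftonline.com/common", // cloud instance + tenant info
    redirectUri: "http://localhost:4200",
  },
  cache: {
    cacheLocation: "sessionStorage", // default storage for authentication state
    storeAuthStateInCookie: true,    // set for Internet Explorer and Edge
  },
};

console.log(msalConfig.auth.authority);
```

Using `common` as the tenant segment corresponds to the "accounts in any organizational directory and personal Microsoft accounts" option in the table above.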
If you're using Node.js:
npm start ```
-1. Browse to **http://localhost:4200/**.
-1. Select **Login**.
-1. Select **Profile** to call Microsoft Graph.
-
-After the browser loads the application, select **Login**. The first time you start to sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, select **Profile**, and your user profile information will be displayed on the page.
+1. Go to `http://localhost:4200/`.
+1. Select **Login**. The first time you sign in, you're prompted to allow the application to access your profile and sign you in automatically.
+1. Select **Profile** to call Microsoft Graph. Your user profile information is displayed on the page.
## How the sample works
-![Diagram that shows how the sample app in this quickstart works](./media/quickstart-v2-angular/diagram-auth-flow-spa-angular.svg)
-
+![Diagram that shows how the sample app in this quickstart works.](./media/quickstart-v2-angular/diagram-auth-flow-spa-angular.svg)
## Next steps
-Next, learn how to sign in a user and acquire tokens in the Angular tutorial:
+Learn how to sign in a user and acquire tokens in the Angular tutorial:
> [!div class="nextstepaction"]
-> [Angular tutorial](./tutorial-v2-angular.md)
+> [Angular tutorial](./tutorial-v2-angular.md)
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
In this quickstart, you download and run a code sample that demonstrates how a .
## Prerequisites
-This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/dotnet-core) but will also work with .NET Core 5.0.
+This quickstart requires the [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but also works with the .NET 5.0 SDK.
> [!div renderon="docs"] > ## Register and download the app
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
If your server has been locked down according to Federal Information Processing
**To enable MD5 for password hash synchronization, perform the following steps:**
-1. Go to %programfiles%\Azure AD Sync\Bin.
+1. Go to %programfiles%\Microsoft Azure AD Sync\Bin.
2. Open miiserver.exe.config. 3. Go to the configuration/runtime node at the end of the file. 4. Add the following node: `<enforceFIPSPolicy enabled="false"/>`
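After the edit, the end of miiserver.exe.config would look something like this sketch; any existing children of the `runtime` node should be left in place:

```xml
<configuration>
  <!-- ...existing configuration... -->
  <runtime>
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```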
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
All Azure Arc enabled servers have a system assigned identity. You cannot disabl
| User assigned | Not available | Not available | Not available | Not available | Refer to the following document to reconfigure a managed identity if you have moved your subscription to a new tenant:
+* [Repair a broken Automanage Account](../../automanage/repair-automanage-account.md)
+### Azure Automation
+
+| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| --- | :-: | :-: | :-: | :-: |
+| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
+| User assigned | Not available | Not available | Not available | Not available |
+
+Refer to the following documents to use managed identity with [Azure Automation](../../automation/automation-intro.md):
+
+* [Automation account authentication overview - Managed identities](../../automation/automation-security-overview.md#managed-identities)
+* [Enable and use managed identity for Automation](https://docs.microsoft.com/azure/automation/enable-managed-identity-for-automation)
+ ### Azure Blueprints |Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md Binary files differ
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-action.md
Before you can deploy to AKS, you'll need to set target Kubernetes namespace and
container-registry-password: ${{ secrets.REGISTRY_PASSWORD }} secret-name: ${{ env.SECRET }} namespace: ${{ env.NAMESPACE }}
- force: true
+ arguments: --force true
```
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-advanced-scheduler.md
The Kubernetes scheduler uses taints and tolerations to restrict what workloads
* Apply a **taint** to a node to indicate only specific pods can be scheduled on them. * Then apply a **toleration** to a pod, allowing them to *tolerate* a node's taint.
-When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. For example, assume you have a node pool in your AKS cluster for nodes with GPU support. You define name, such as *gpu*, then a value for scheduling. Setting this value to *NoSchedule* restricts the Kubernetes scheduler from scheduling pods with undefined toleration on the node.
-
-```console
-kubectl taint node aks-nodepool1 sku=gpu:NoSchedule
+When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. For example, assume you added a node pool in your AKS cluster for nodes with GPU support. You define name, such as *gpu*, then a value for scheduling. Setting this value to *NoSchedule* restricts the Kubernetes scheduler from scheduling pods with undefined toleration on the node.
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name taintnp \
+ --node-taints sku=gpu:NoSchedule \
+ --no-wait
```
-With a taint applied to nodes, you'll define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node in the previous step:
+With a taint applied to nodes in the node pool, you'll define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node pool in the previous step:
```yaml kind: Pod
When you scale a node pool in AKS, taints and tolerations do not carry over by d
> > Control the scheduling of pods on nodes using node selectors, node affinity, or inter-pod affinity. These settings allow the Kubernetes scheduler to logically isolate workloads, such as by hardware in the node.
-Taints and tolerations logically isolate resources with a hard cut-off. If the pod doesn't tolerate a node's taint, it isn't scheduled on the node.
+Taints and tolerations logically isolate resources with a hard cut-off. If the pod doesn't tolerate a node's taint, it isn't scheduled on the node.
-Alternatively, you can use node selectors. For example, you label nodes to indicate locally attached SSD storage or a large amount of memory, and then define in the pod specification a node selector. Kubernetes schedules those pods on a matching node.
+Alternatively, you can use node selectors. For example, you label nodes to indicate locally attached SSD storage or a large amount of memory, and then define in the pod specification a node selector. Kubernetes schedules those pods on a matching node.
Unlike tolerations, pods without a matching node selector can still be scheduled on labeled nodes. This behavior allows unused resources on the nodes to consume, but prioritizes pods that define the matching node selector.
-Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run.
+Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run. The following example command adds a node pool with the label *hardware=highmem* to the *myAKSCluster* in the *myResourceGroup* resource group. All nodes in that node pool will have this label.
-```console
-kubectl label node aks-nodepool1 hardware=highmem
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name labelnp \
+ --node-count 1 \
+ --labels hardware=highmem \
+ --no-wait
``` A pod specification then adds the `nodeSelector` property to define a node selector that matches the label set on a node:
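Such a pod specification might look like the following sketch: the pod name and container image are placeholders, and only the `nodeSelector` value must match the *hardware=highmem* label applied above.

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: highmem-app        # placeholder name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine   # placeholder image
    resources:
      requests:
        memory: 4Gi
      limits:
        memory: 8Gi
  nodeSelector:
    hardware: highmem      # matches the node pool label
```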
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
aks Servicemesh Osm About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-osm-about.md
-
+ Title: Open Service Mesh (Preview) description: Open Service Mesh (OSM) in Azure Kubernetes Service (AKS)
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
A successful cluster creation using your own managed identities contains this us
}, ```
+## Bring your own kubelet MI (Preview)
++
+A kubelet identity lets you grant access to an existing identity prior to cluster creation. This feature enables scenarios such as connecting to an Azure Container Registry (ACR) with a pre-created managed identity.
+
+### Prerequisites
+
+- You must have the Azure CLI, version 2.21.1 or later installed.
+- You must have the aks-preview CLI extension, version 0.5.10 or later, installed.
+
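The aks-preview prerequisite can be met with the `az extension` commands; a sketch:

```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview

# Update to the latest version if it's already installed
az extension update --name aks-preview
```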
+### Limitations
+
+- Only works with a user-assigned managed identity cluster.
+- Azure Government isn't currently supported.
+- Azure China 21Vianet isn't currently supported.
+
+First, register the feature flag for Kubelet identity:
+
+```azurecli-interactive
+az feature register --namespace Microsoft.ContainerService -n CustomKubeletIdentityPreview
+```
+
+It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CustomKubeletIdentityPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create or obtain managed identities
+
+If you don't have a control plane managed identity yet, create one. The following example uses the [az identity create][az-identity-create] command:
+
+```azurecli-interactive
+az identity create --name myIdentity --resource-group myResourceGroup
+```
+
+The result should look like:
+
+```output
+{
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principalId>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+}
+```
+
+If you don't have a kubelet managed identity yet, create one. The following example uses the [az identity create][az-identity-create] command:
+
+```azurecli-interactive
+az identity create --name myKubeletIdentity --resource-group myResourceGroup
+```
+
+The result should look like:
+
+```output
+{
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "location": "westus2",
+ "name": "myKubeletIdentity",
+ "principalId": "<principalId>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+}
+```
+
+If your existing managed identity is part of your subscription, you can use the [az identity list][az-identity-list] command to query it:
+
+```azurecli-interactive
+az identity list --query "[].{Name:name, Id:id, Location:location}" -o table
+```
+
+### Create a cluster using kubelet identity
+
+Now you can use the following command to create your cluster with your existing identities. Provide the control plane identity ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+
+```azurecli-interactive
+az aks create \
+ --resource-group myResourceGroup \
+ --name myManagedCluster \
+ --network-plugin azure \
+ --vnet-subnet-id <subnet-id> \
+ --docker-bridge-address 172.17.0.1/16 \
+ --dns-service-ip 10.2.0.10 \
+ --service-cidr 10.2.0.0/24 \
+ --enable-managed-identity \
+ --assign-identity <identity-id> \
+ --assign-kubelet-identity <kubelet-identity-id> \
+```
+
+A successful cluster creation using your own kubelet managed identity contains the following output:
+
+```output
+ "identity": {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity": {
+ "clientId": "<client-id>",
+ "principalId": "<principal-id>"
+ }
+ }
+ },
+ "identityProfile": {
+ "kubeletidentity": {
+ "clientId": "<client-id>",
+ "objectId": "<object-id>",
+ "resourceId": "/subscriptions/<subscriptionid>/resourcegroups/resourcegroups/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity"
+ }
+ },
+```
+ ## Next steps
-* Use [Azure Resource Manager (ARM) templates ][aks-arm-template] to create Managed Identity enabled clusters.
+* Use [Azure Resource Manager templates ][aks-arm-template] to create Managed Identity enabled clusters.
<!-- LINKS - external -->
[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
+
+<!-- LINKS - internal -->
[az-identity-create]: /cli/azure/identity#az_identity_create [az-identity-list]: /cli/azure/identity#az_identity_list
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-provider-register]: /cli/azure/provider#az_provider_register
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
It takes a few minutes for the *gpunodepool* to be successfully created.
## Specify a taint, label, or tag for a node pool
-### Setting nodepool taints
- When creating a node pool, you can add taints, labels, or tags to that node pool. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag.
+> [!IMPORTANT]
+> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
+
+### Setting nodepool taints
+ To create a node pool with a taint, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint. ```azurecli-interactive
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect-pbi.md
description: Learn how to connect to an Azure Analysis Services server by using
Previously updated : 12/01/2020 Last updated : 4/20/2021
Once you've created a server in Azure, and deployed a tabular model to it, users in your organization are ready to connect and begin exploring data.
-> [!TIP]
-> Be sure to use the latest version of [Power BI Desktop](https://powerbi.microsoft.com/desktop/).
+> [!NOTE]
+> If publishing a Power BI Desktop model to the Power BI service, on the Azure Analysis Services server, ensure the Case-Sensitive collation server property is not selected (the default). The Case-Sensitive server property can be set by using SQL Server Management Studio.
> >
Once you've created a server in Azure, and deployed a tabular model to it, users
## See also [Connect to Azure Analysis Services](analysis-services-connect.md)
-[Client libraries](/analysis-services/client-libraries?view=azure-analysis-services-current&preserve-view=true)
+[Client libraries](/analysis-services/client-libraries?view=azure-analysis-services-current&preserve-view=true)
api-management Api Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-faq.md
na Last updated 11/19/2017-++ # Azure API Management FAQs Get the answers to common questions, patterns, and best practices for Azure API Management.
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-revise-api.md Binary files differ
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-ca-certificates.md
na Last updated 08/20/2018-++ # How to add a custom CA certificate in Azure API Management
api-management Api Management Howto Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-developer-portal.md
Last updated 04/15/2021-++ # Overview of the developer portal
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
na Last updated 12/05/2020-++ # How to implement disaster recovery using service backup and restore in Azure API Management
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
na Last updated 11/04/2019-++ # Integrate API Management in an internal VNET with Application Gateway
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-mutual-certificates.md
Last updated 01/26/2021-++ # Secure backend services using client certificate authentication in Azure API Management
API Management provides two options to manage certificates used to secure access
Using key vault certificates is recommended because it helps improve API Management security: * Certificates stored in key vaults can be reused across services
-* Granular [access policies](../key-vault/general/security-overview.md#privileged-access) can be applied to certificates stored in key vaults
+* Granular [access policies](../key-vault/general/security-features.md#privileged-access) can be applied to certificates stored in key vaults
* Certificates updated in the key vault are automatically rotated in API Management. After update in the key vault, a certificate in API Management is updated within 4 hours. You can also manually refresh the certificate using the Azure portal or via the management REST API. ## Prerequisites
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-properties.md
Secret values can be stored either as encrypted strings in API Management (custo
Using key vault secrets is recommended because it helps improve API Management security: * Secrets stored in key vaults can be reused across services
-* Granular [access policies](../key-vault/general/security-overview.md#privileged-access) can be applied to secrets
+* Granular [access policies](../key-vault/general/security-features.md#privileged-access) can be applied to secrets
* Secrets updated in the key vault are automatically rotated in API Management. After update in the key vault, a named value in API Management is updated within 4 hours. You can also manually refresh the secret using the Azure portal or via the management REST API. ### Prerequisites for key vault integration
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-use-managed-service-identity.md
Last updated 03/09/2021-++ # Use managed identities in Azure API Management
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-revisions.md
Last updated 06/12/2020 -+ # Revisions in Azure API Management
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-role-based-access-control.md
na Last updated 06/20/2018-++ # How to use Role-Based Access Control in Azure API Management
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
editor: ''
Last updated 04/12/2021-++ # Using Azure API Management service with an internal virtual network
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
Last updated 04/12/2021 -+ # How to use Azure API Management with virtual networks Azure Virtual Networks (VNETs) allow you to place any of your Azure resources in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies. To learn more about Azure Virtual Networks start with the information here: [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/backends.md
editor: ''
Last updated 01/29/2021-++ # Backends in API Management
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-configure-service-fabric-backend.md
editor: ''
Last updated 01/29/2021-++ # Set up a Service Fabric backend in API Management using the Azure portal
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/mock-api-responses.md
-
+ Title: Tutorial - Mock API responses in API Management - Azure portal | Microsoft Docs description: In this tutorial, you use API Management to set a policy on an API so it returns a mocked response if the backend is not available to send real responses.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/powershell-create-service-instance.md
-
+ Title: Quickstart - Create Azure API Management instance using PowerShell | Microsoft Docs description: Create a new Azure API Management instance by using Azure PowerShell.
-+ Last updated 09/14/2020
api-management Powershell Add User And Get Subscription Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-add-user-and-get-subscription-key.md
Last updated 11/16/2017 -+ # Add a user
api-management Powershell Backup Restore Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-backup-restore-apim-service.md
Last updated 11/16/2017 -+ # Backup and restore service
api-management Powershell Create Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-create-apim-service.md
Last updated 11/16/2017 -+ # Create an API Management service
api-management Powershell Import Api And Add To Product https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-import-api-and-add-to-product.md
Last updated 11/16/2017 -+ # Import an API
api-management Powershell Scale And Addregion Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-scale-and-addregion-apim-service.md
Last updated 11/16/2017 -+ # Scale the service instance
api-management Powershell Secure Backend With Mutual Certificate Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-secure-backend-with-mutual-certificate-authentication.md
Last updated 11/16/2017 -+ # Secure back end
api-management Powershell Setup Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-setup-custom-domain.md
Last updated 12/14/2017 -+ # Set up custom domain
api-management Powershell Setup Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/scripts/powershell-setup-rate-limit-policy.md
Last updated 11/16/2017 -+ # Set up rate limit policy
api-management Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
Last updated 02/17/2021 -+ # Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-configure-premium-tier.md
keywords: app service, azure app service, scale, scalable, app service plan, app
ms.assetid: ff00902b-9858-4bee-ab95-d3406018c688 Last updated 10/01/2020-+
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd
Last updated 12/17/2020 -+ # Set up Azure App Service access restrictions
Access restrictions are also available for function apps with the same functiona
<!--Links--> [serviceendpoints]: ../virtual-network/virtual-network-service-endpoints-overview.md
-[servicetags]: ../virtual-network/service-tags-overview.md
+[servicetags]: ../virtual-network/service-tags-overview.md
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
In order to read secrets from Key Vault, you need to have a vault created and gi
> [!NOTE] > Key Vault references currently only support system-assigned managed identities. User-assigned identities cannot be used.
-1. Create an [access policy in Key Vault](../key-vault/general/security-overview.md#privileged-access) for the application identity you created earlier. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity.
+1. Create an [access policy in Key Vault](../key-vault/general/security-features.md#privileged-access) for the application identity you created earlier. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity.
### Access network-restricted vaults
app-service App Service Undelete https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-undelete.md
description: Learn how to restore a deleted app in Azure App Service. Avoid the
Last updated 9/23/2019-++ # Restore deleted App Service app Using PowerShell
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-app-cloning.md
description: Learn how to clone your App Service app to a new app using PowerShe
ms.assetid: f9a5cfa1-fbb0-41e6-95d1-75d457347a35 Last updated 01/14/2016-+ # Azure App Service App Cloning Using PowerShell
app-service App Service Web Tutorial Custom Domain Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-custom-domain-uiex.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
ms.devlang: nodejs Last updated 08/25/2020-+
For more information, see [Assign a custom domain to a web app](scripts/powershe
Continue to the next tutorial to learn how to bind a custom TLS/SSL certificate to a web app. > [!div class="nextstepaction"]
-> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
ms.devlang: nodejs Last updated 08/25/2020-+ adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-custom-container.md
Title: Configure a custom container
description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. Previously updated : 02/23/2021 Last updated : 02/23/2021 + zone_pivot_groups: app-service-containers-windows-linux
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Next, determine if the data source should be available to one application or to
</resource-env-ref> ```
+#### Shared server-level resources
+
+Tomcat installations on App Service on Windows exist in shared space on the App Service Plan, so you can't directly modify the shared installation to make server-wide configuration changes. To make server-level configuration changes, copy Tomcat to a local folder, where you can modify its configuration.
+
+##### Automate creating custom Tomcat on app start
+
+You can use a startup script to perform actions before a web app starts. The startup script for customizing Tomcat needs to complete the following steps:
+
+1. Check whether Tomcat was already copied and configured locally. If it was, the startup script can end here.
+2. Copy Tomcat locally.
+3. Make the required configuration changes.
+4. Indicate that configuration was successfully completed.
+
+Here's a PowerShell script that completes these steps:
+
+```powershell
+ # Check for marker file indicating that config has already been done
+ if(Test-Path "$LOCAL_EXPANDED\tomcat\config_done_marker"){
+ return 0
+ }
+
+ # Delete previous Tomcat directory if it exists
+ # In case previous config could not be completed or a new config should be forcefully installed
+ if(Test-Path "$LOCAL_EXPANDED\tomcat"){
+ Remove-Item "$LOCAL_EXPANDED\tomcat" -Recurse
+ }
+
+ # Copy Tomcat to local
+ # The environment variable $AZURE_TOMCAT90_HOME points to the 'default' version of Tomcat
+ Copy-Item -Path "$AZURE_TOMCAT90_HOME\*" -Destination "$LOCAL_EXPANDED\tomcat" -Recurse
+
+ # Perform the required customization of Tomcat
+ {... customization ...}
+
+ # Mark that the operation was a success
+ New-Item -Path "$LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
+```
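The marker-file pattern in the script above (check for a marker, do the work, write the marker only on success) is a general idempotency technique for startup hooks. As an aside, here's a minimal Python sketch of the same idea; the paths and the `configure_once` helper are hypothetical, and the customization step is stubbed out:

```python
import shutil
from pathlib import Path

def configure_once(source: Path, target: Path) -> bool:
    """Copy and configure `source` into `target` exactly once across restarts."""
    marker = target / "config_done_marker"
    if marker.exists():
        return True  # configuration already completed on a previous start

    # Remove any partial copy left behind by an earlier failed attempt
    if target.exists():
        shutil.rmtree(target)

    shutil.copytree(source, target)

    # ... perform the required customization here ...

    marker.touch()  # record success so later starts skip the work
    return True
```

Because the marker is written last, a crash mid-copy leaves no marker, and the next start cleans up and retries from scratch.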
+
+##### Transforms
+
+A common use case for customizing a Tomcat version is to modify the `server.xml`, `context.xml`, or `web.xml` Tomcat configuration files. App Service already modifies these files to provide platform features. To continue to use these features, it's important to preserve the content of these files when you make changes to them. To accomplish this, we recommend that you use an [XSL transformation (XSLT)](https://www.w3schools.com/xml/xsl_intro.asp). Use an XSL transform to make changes to the XML files while preserving the original contents of the file.
+
+###### Example XSLT file
+
+This example transform adds a new connector node to `server.xml`. Note the *Identity Transform*, which preserves the original contents of the file.
+
+```xml
+ <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output method="xml" indent="yes"/>
+
+ <!-- Identity transform: this ensures that the original contents of the file are included in the new file -->
+ <!-- Ensure that your transform files include this block -->
+ <xsl:template match="@* | node()" name="Copy">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()"/>
+ </xsl:copy>
+ </xsl:template>
+
+ <xsl:template match="@* | node()" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and
+ contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]">
+ <xsl:value-of select="." disable-output-escaping="yes" />
+ </xsl:template>
+
+ <xsl:template match="Service[not(Connector[@scheme = 'https'] or
+ comment()[contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]
+ )]
+ ">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()" mode="insertConnector" />
+ </xsl:copy>
+ </xsl:template>
+
+ <!-- Add the new connector after the last existing Connector if there is one -->
+ <xsl:template match="Connector[last()]" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+
+ <xsl:call-template name="AddConnector" />
+ </xsl:template>
+
+ <!-- ... or before the first Engine if there is no existing Connector -->
+ <xsl:template match="Engine[1][not(preceding-sibling::Connector)]"
+ mode="insertConnector">
+ <xsl:call-template name="AddConnector" />
+
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template name="AddConnector">
+ <!-- Add new line -->
+ <xsl:text>&#xa;</xsl:text>
+ <!-- This is the new connector -->
+ <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
+ maxThreads="150" scheme="https" secure="true"
+ keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
+ clientAuth="false" sslProtocol="TLS" />
+ </xsl:template>
+
+</xsl:stylesheet>
+```
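The transform's core decision — add an HTTPS connector only when a `Service` doesn't already contain one — can also be expressed with Python's standard-library `ElementTree`. This is only a sketch of the check, not a replacement for the XSLT above; the element and attribute names follow the Tomcat `server.xml` schema, and `ensure_https_connector` is a hypothetical helper:

```python
import xml.etree.ElementTree as ET

def ensure_https_connector(server_xml: str) -> str:
    """Add a Connector with scheme="https" to each Service that lacks one."""
    root = ET.fromstring(server_xml)
    for service in root.iter("Service"):
        has_https = any(
            c.get("scheme") == "https" for c in service.findall("Connector")
        )
        if not has_https:
            # Mirrors the attributes added by the AddConnector template above
            connector = ET.SubElement(service, "Connector")
            connector.set("port", "8443")
            connector.set("protocol", "HTTP/1.1")
            connector.set("SSLEnabled", "true")
            connector.set("scheme", "https")
            connector.set("secure", "true")
    return ET.tostring(root, encoding="unicode")
```

Note that `ElementTree` drops XML comments on parse, whereas the XSLT identity transform preserves them — which is exactly why the docs recommend XSLT for files that App Service itself annotates.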
+
+###### Function for XSL transform
+
+PowerShell has built-in tools for transforming XML files by using XSL transforms. The following script is an example function that you can use in `startup.ps1` to perform the transform:
+
+```powershell
+ function TransformXML{
+ param ($xml, $xsl, $output)
+
+ if (-not $xml -or -not $xsl -or -not $output)
+ {
+ return 0
+ }
+
+ Try
+ {
+ $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
+ $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
+ $xslt_settings.EnableScript = 1;
+
+ $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
+ $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
+ $xslt.Transform($xml, $output);
+
+ }
+
+ Catch
+ {
+ $ErrorMessage = $_.Exception.Message
+ $FailedItem = $_.Exception.ItemName
+ Write-Host 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
+ return 0
+ }
+ return 1
+ }
+```
+
+##### App settings
+
+The platform also needs to know where your custom version of Tomcat is installed. You can set the installation's location in the `CATALINA_BASE` app setting.
+
+You can use the Azure CLI to change this setting:
+
+```azurecli-interactive
+ az webapp config appsettings set -g $MyResourceGroup -n $MyUniqueApp --settings CATALINA_BASE="%LOCAL_EXPANDED%\tomcat"
+```
+
+Or, you can manually change the setting in the Azure portal:
+
+1. Go to **Settings** > **Configuration** > **Application settings**.
+1. Select **New Application Setting**.
+1. Use these values to create the setting:
+ 1. **Name**: `CATALINA_BASE`
+ 1. **Value**: `%LOCAL_EXPANDED%\tomcat`
+
+##### Example startup.ps1
+
+The following example script copies a custom Tomcat to a local folder, performs an XSL transform, and indicates that the transform was successful:
+
+```powershell
+ # Locations of xml and xsl files
+ $target_xml="$LOCAL_EXPANDED\tomcat\conf\server.xml"
+ $target_xsl="$HOME\site\server.xsl"
+
+ # Define the transform function
+ # Useful if transforming multiple files
+ function TransformXML{
+ param ($xml, $xsl, $output)
+
+ if (-not $xml -or -not $xsl -or -not $output)
+ {
+ return 0
+ }
+
+ Try
+ {
+ $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
+ $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
+ $xslt_settings.EnableScript = 1;
+
+ $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
+ $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
+ $xslt.Transform($xml, $output);
+ }
+
+ Catch
+ {
+ $ErrorMessage = $_.Exception.Message
+ $FailedItem = $_.Exception.ItemName
+ Write-Host 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
+ return 0
+ }
+ return 1
+ }
+
+ # Check for marker file indicating that config has already been done
+ if(Test-Path "$LOCAL_EXPANDED\tomcat\config_done_marker"){
+ return 0
+ }
+
+ # Delete previous Tomcat directory if it exists
+ # In case previous config could not be completed or a new config should be forcefully installed
+ if(Test-Path "$LOCAL_EXPANDED\tomcat"){
+ Remove-Item "$LOCAL_EXPANDED\tomcat" -Recurse
+ }
+
+ # Copy Tomcat to local
+ # The environment variable $AZURE_TOMCAT90_HOME points to the 'default' version of Tomcat
+ Copy-Item -Path "$AZURE_TOMCAT90_HOME\*" -Destination "$LOCAL_EXPANDED\tomcat" -Recurse
+
+ # Perform the required customization of Tomcat
+ $success = TransformXML -xml $target_xml -xsl $target_xsl -output $target_xml
+
+ # Mark that the operation was a success if successful
+ if($success){
+ New-Item -Path "$LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
+ }
+```
+ #### Finalize configuration

Finally, we will place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it does not already exist.) To upload these files to your App Service instance, perform the following steps:
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-php.md
To show the current PHP version, run the following command in the [Cloud Shell](
az webapp config show --resource-group <resource-group-name> --name <app-name> --query phpVersion ```
+> [!NOTE]
+> To address a development slot, include the parameter `--slot` followed by the name of the slot.
+ To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
To show the current PHP version, run the following command in the [Cloud Shell](
az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion ```
+> [!NOTE]
+> To address a development slot, include the parameter `--slot` followed by the name of the slot.
+ To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
az webapp list-runtimes --linux | grep PHP
Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.4: ```azurecli-interactive
-az webapp config set --name <app-name> --resource-group <resource-group-name> --php-version 7.4
+az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 7.4
``` ::: zone-end
az webapp config set --name <app-name> --resource-group <resource-group-name> --
Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.2: ```azurecli-interactive
-az webapp config set --name <app-name> --resource-group <resource-group-name> --linux-fx-version "PHP|7.2"
+az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PHP|7.2"
``` ::: zone-end
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
For App Service, you then make the following modifications:
1. Also modify the `MIDDLEWARE` and `INSTALLED_APPS` lists to include Whitenoise: ```python
- MIDDLEWARE = [
- "whitenoise.middleware.WhiteNoiseMiddleware",
+ MIDDLEWARE = [
+ 'django.middleware.security.SecurityMiddleware',
+ # Add whitenoise middleware after the security middleware
+ 'whitenoise.middleware.WhiteNoiseMiddleware',
# Other values follow
]
app-service Configure Linux Open Ssh Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-linux-open-ssh-session.md Binary files differ
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate-in-code.md
var cert = new X509Certificate2(bytes);
// Use the loaded certificate ```
+The following C# code shows how to load a private certificate in a Linux app.
+
+```csharp
+using System;
+using System.IO;
+using System.Security.Cryptography.X509Certificates;
+...
+var bytes = File.ReadAllBytes("/var/ssl/private/<thumbprint>.p12");
+var cert = new X509Certificate2(bytes);
+
+// Use the loaded certificate
+```
+ To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Java, or Ruby, see the documentation for the respective language or web platform. ## More resources
To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Ja
* [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) * [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.md)
+* [FAQ : App Service Certificates](./faq-configuration-and-management.md)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
When the operation completes, you see the certificate in the **Private Key Certi
![Import Key Vault certificate finished](./media/configure-ssl-certificate/import-app-service-cert-finished.png) > [!NOTE]
-> If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 48 hours.
+> If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours.
> [!IMPORTANT] > To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
Rekeying your certificate rolls the certificate with a new certificate issued fr
Once the rekey operation is complete, click **Sync**. The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps. > [!NOTE]
-> If you don't click **Sync**, App Service automatically syncs your certificate within 48 hours.
+> If you don't click **Sync**, App Service automatically syncs your certificate within 24 hours.
### Renew certificate
To manually renew the certificate instead, click **Manual Renew**. You can reque
Once the renew operation is complete, click **Sync**. The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps. > [!NOTE]
-> If you don't click **Sync**, App Service automatically syncs your certificate within 48 hours.
+> If you don't click **Sync**, App Service automatically syncs your certificate within 24 hours.
### Export certificate
app-service Deploy Configure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-configure-credentials.md
description: Learn what types of deployment credentials are in Azure App Service
Last updated 02/11/2021 -+
app-service Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-ftp.md
ms.assetid: ae78b410-1bc0-4d72-8fc4-ac69801247ae
Last updated 02/26/2021 -+
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
You can restore NuGet dependencies and run msbuild with `run`.
run: nuget restore - name: Add msbuild to PATH
- uses: microsoft/setup-msbuild@v1.0.0
+ uses: microsoft/setup-msbuild@v1.0.2
- name: Run msbuild run: msbuild .\SampleWebApplication.sln
jobs:
run: nuget restore - name: Add msbuild to PATH
- uses: microsoft/setup-msbuild@v1.0.0
+ uses: microsoft/setup-msbuild@v1.0.2
- name: Run MSBuild run: msbuild .\SampleWebApplication.sln
jobs:
run: nuget restore - name: Add msbuild to PATH
- uses: microsoft/setup-msbuild@v1.0.0
+ uses: microsoft/setup-msbuild@v1.0.2
- name: Run MSBuild run: msbuild .\SampleWebApplication.sln
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-local-git.md
ms.assetid: ac50a623-c4b8-4dfd-96b2-a09420770063
Last updated 02/16/2021 -+ # Local Git deployment to Azure App Service
app-service Deploy Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-resource-manager-template.md
-
+ Title: Deploy apps with templates description: Find guidance on creating Azure Resource Manager templates to provision and deploy App Service apps.
Last updated 01/03/2019 -+ # Guidance on deploying web apps by using Azure Resource Manager templates
For an example template, see [Deploy a Web App certificate from Key Vault secret
## Next steps * For a tutorial on deploying web apps with a template, see [Provision and deploy microservices predictably in Azure](deploy-complex-application-predictably.md).
-* To learn about JSON syntax and properties for resource types in templates, see [Azure Resource Manager template reference](/azure/templates/).
+* To learn about JSON syntax and properties for resource types in templates, see [Azure Resource Manager template reference](/azure/templates/).
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
description: Learn how to deploy apps to a non-production slot and autoswap into
ms.assetid: e224fc4f-800d-469a-8d6a-72bcde612450 Last updated 04/30/2020-+ # Set up staging environments in Azure App Service
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-zip.md
description: Learn how to deploy your app to Azure App Service with a ZIP file (
Last updated 08/12/2019 -+
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
ms.assetid: 091decb6-b0de-42a1-9f2f-c18d9b2e67df
Last updated 07/11/2017 -+ # How To Create an ILB ASE Using Azure Resource Manager Templates
app-service App Service App Service Environment Geo Distributed Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-geo-distributed-scale.md
ms.assetid: c1b05ca8-3703-4d87-a9ae-819d741787fb
Last updated 09/07/2016 -+ # Geo Distributed Scale with App Service Environments
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-from-template.md
ms.assetid: 6eb7d43d-e820-4a47-818c-80ff7d3b6f8e
Last updated 06/13/2017 -+ # Create an ASE by using an Azure Resource Manager template
To create an ASEv1 by using a Resource Manager template, see [Create an ILB ASE
[Kudu]: https://azure.microsoft.com/resources/videos/super-secret-kudu-debug-console-for-azure-web-sites/ [ASEWAF]: app-service-app-service-environment-web-application-firewall.md [AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[ILBASEv1Template]: app-service-app-service-environment-create-ilb-ase-resourcemanager.md
+[ILBASEv1Template]: app-service-app-service-environment-create-ilb-ase-resourcemanager.md
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/management-addresses.md Binary files differ
app-service Faq Configuration And Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-configuration-and-management.md
tags: top-support-issue
ms.assetid: 2fa5ee6b-51a6-4237-805f-518e6c57d11b Last updated 10/30/2018-++ # Configuration and management FAQs for Web Apps in Azure
app-service Manage Scale Per App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-scale-per-app.md
ms.assetid: a903cb78-4927-47b0-8427-56412c4e3e64
Last updated 05/13/2019 -+ # High-density hosting on Azure App Service using per-app scaling
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
| App setting name | Allowed values | Description |
|-|-|-|
|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The maximum number of ping failures. For example, when set to `2`, your instances will be removed after `2` failed pings. Furthermore, when you are scaling up or out, App Service pings the Health check path to ensure new instances are ready. |
-|`WEBSITE_HEALTHCHECK_MAXUNHEALTYWORKERPERCENT` | 0 - 100 | To avoid overwhelming healthy instances, no more than half of the instances will be excluded. For example, if an App Service Plan is scaled to four instances and three are unhealthy, at most two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default is 50). |
+|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 0 - 100 | To avoid overwhelming healthy instances, no more than half of the instances will be excluded. For example, if an App Service Plan is scaled to four instances and three are unhealthy, at most two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default is 50). |
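The corrected setting name above caps how many unhealthy instances Health check may pull from rotation. As an illustrative sketch of that rule (my own reading of the table row, not App Service code):

```python
import math

def max_excluded(total_instances: int, unhealthy: int, percent: int = 50) -> int:
    """Sketch of the documented WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT
    rule: at most `percent` of the instances are excluded from rotation, and
    if every instance is unhealthy none are excluded (the app keeps serving)."""
    if unhealthy >= total_instances:
        return 0  # worst case from the description: all unhealthy, none excluded
    cap = math.floor(total_instances * percent / 100)
    return min(unhealthy, cap)

# The example from the description: 4 instances, 3 unhealthy, default 50%
print(max_excluded(4, 3))  # 2 excluded; 1 healthy + 1 unhealthy keep serving
print(max_excluded(4, 4))  # 0: all unhealthy, none excluded
```

With the default of 50, an app scaled to four instances never loses more than two at once.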
#### Authentication and security
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-inbound-outbound-ips.md
Title: Inbound/Outbound IP addresses
description: Learn how inbound and outbound IP addresses are used in Azure App Service, when they change, and how to find the addresses for your app. Last updated 08/25/2020-+
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Last updated 01/28/2021 -+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
In this tutorial, you learned how to:
> * Add Microsoft Graph API permissions to a managed identity. > * Call Microsoft Graph from a web app by using managed identities.
-Learn how to connect a [.NET Core app](tutorial-dotnetcore-sqldb-app.md), [Python app](tutorial-python-postgresql-app.md), [Java app](tutorial-java-spring-cosmosdb.md), or [Node.js app](tutorial-nodejs-mongodb-app.md) to a database.
+Learn how to connect a [.NET Core app](tutorial-dotnetcore-sqldb-app.md), [Python app](tutorial-python-postgresql-app.md), [Java app](tutorial-java-spring-cosmosdb.md), or [Node.js app](tutorial-nodejs-mongodb-app.md) to a database.
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-access-storage.md Binary files differ
app-service Powershell Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/powershell-deploy-private-endpoint.md
Last updated 07/07/2020 -++ # Create an App Service app and deploy Private Endpoint using PowerShell
app-service Template Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/template-deploy-private-endpoint.md
Last updated 07/08/2020 -++ # Create an App Service app and deploy a private endpoint by using an Azure Resource Manager template
app-service Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-baseline.md
Last updated 02/17/2021 -+ # Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
The following table shows the supported log types and descriptions:
| AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs|
| AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu |
| AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
-| AppServiceAppLogs | ASP .NET | ASP .NET | Java SE & Tomcat Blessed Images <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>1</sup> | Application logs |
+| AppServiceAppLogs | ASP .NET & Java Tomcat <sup>1</sup> | ASP .NET & Java Tomcat <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs |
| AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules |
| AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs |
| AppServiceAntivirusScanAuditLogs | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender; **only available for Premium tier** |
-<sup>1</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to 1 or to true.
+<sup>1</sup> For Java Tomcat apps, add "TOMCAT_USE_STARTUP_BAT" to the app settings and set it to false or 0. You need to be on the *latest* Tomcat version and use *java.util.logging*.
+
+<sup>2</sup> For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to true or to 1.
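Expressed as app settings, the two footnotes above amount to the following fragment (illustrative values only; the setting names come from the footnotes, and the leading `$` on the second one is dropped here on the assumption it is a typo in the source):

```json
{
  "TOMCAT_USE_STARTUP_BAT": "false",
  "WEBSITE_AZMON_PREVIEW_ENABLED": "true"
}
```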
## <a name="nextsteps"></a> Next steps * [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md) * [How to Monitor Azure App Service](web-sites-monitor.md) * [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
-* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
+* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-domain-ssl-certificates.md
tags: top-support-issue
Last updated 03/01/2019 -+ # Troubleshoot domain and TLS/SSL certificate problems in Azure App Service
You can manage your domain even if you don't have an App Service Web App. Doma
Yes, you can move your web app across subscriptions. Follow the guidance in [How to move resources in Azure](../azure-resource-manager/management/move-resource-group-and-subscription.md). There are a few limitations when moving the web app. For more information, see [Limitations for moving App Service resources](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md).
-After moving the web app, the host name bindings of the domains within the custom domains setting should remain the same. No additional steps are required to configure the host name bindings.
+After moving the web app, the host name bindings of the domains within the custom domains setting should remain the same. No additional steps are required to configure the host name bindings.
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-send-email.md
In Python, you can send the HTTP post easily with [requests](https://pypi.org/pr
```python # Requires pip install requests && pip freeze > requirements.txt import requests
+import os
... payload = { "email": "a-valid@emailaddress.com", "due": "4/1/2020", "task": "My new task!" }
-response = requests.post("https://prod-112.westeurope.logic.azure.com:443/workfl$
+response = requests.post(os.environ['LOGIC_APP_URL'], data = payload)
print(response.status_code) ``` <!-- ```python
If you're testing this code on the sample app for [Build a Ruby and Postgres app
[Tutorial: Host a RESTful API with CORS in Azure App Service](app-service-web-tutorial-rest-api.md) [HTTP request/response reference for Logic Apps](../connectors/connectors-native-reqres.md)
-[Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+[Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md)
app-service Web Sites Integrate With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/web-sites-integrate-with-vnet.md
ms.assetid: 90bc6ec6-133d-4d87-a867-fcf77da75f5a
Last updated 08/05/2020 -+ # Integrate your app with an Azure virtual network
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/create-ssl-portal.md
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
## Create a self-signed certificate
-In this section, you use [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate) to create a self-signed certificate. You upload the certificate to the Azure portal when you create the listener for the application gateway.
+In this section, you use [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate) to create a self-signed certificate. You upload the certificate to the Azure portal when you create the listener for the application gateway.
On your local computer, open a Windows PowerShell window as an administrator. Run the following command to create the certificate:
Thumbprint Subject
E1E81C23B3AD33F9B4D1717B20AB65DBB91AC630 CN=www.contoso.com ```
-Use [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate. Make sure your password is 4 - 12 characters long:
+Use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate. Make sure your password is 4 - 12 characters long:
```powershell
When no longer needed, delete the resource group and all related resources. To d
## Next steps > [!div class="nextstepaction"]
-> [Learn more about Application Gateway TLS support](ssl-overview.md)
+> [Learn more about Application Gateway TLS support](ssl-overview.md)
application-gateway Redirect Http To Https Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/redirect-http-to-https-portal.md
This tutorial requires the Azure PowerShell module version 1.0.0 or later to cre
## Create a self-signed certificate
-For production use, you should import a valid certificate signed by a trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
+For production use, you should import a valid certificate signed by a trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
```powershell New-SelfSignedCertificate `
You can get the application public IP address from the application gateway Overv
## Next steps
-Learn how to [Create an application gateway with internal redirection](redirect-internal-site-powershell.md).
+Learn how to [Create an application gateway with internal redirection](redirect-internal-site-powershell.md).
application-gateway Redirect Http To Https Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/redirect-http-to-https-powershell.md
This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `
## Create a self-signed certificate
-For production use, you should import a valid certificate signed by a trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
+For production use, you should import a valid certificate signed by a trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
```powershell New-SelfSignedCertificate `
To accept the security warning if you used a self-signed certificate, select **D
## Next steps -- [Rewrite HTTP headers and URL with Application Gateway](rewrite-http-headers-url.md)
+- [Rewrite HTTP headers and URL with Application Gateway](rewrite-http-headers-url.md)
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-autoscale-ps.md
New-AzResourceGroup -Name $rg -Location $location
## Create a self-signed certificate
-For production use, you should import a valid certificate signed by trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
+For production use, you should import a valid certificate signed by a trusted provider. For this tutorial, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
```powershell New-SelfSignedCertificate `
First explore the resources that were created with the application gateway. Then
## Next steps > [!div class="nextstepaction"]
-> [Create an application gateway with URL path-based routing rules](./tutorial-url-route-powershell.md)
+> [Create an application gateway with URL path-based routing rules](./tutorial-url-route-powershell.md)
application-gateway Tutorial Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-ssl-powershell.md
This article requires the Azure PowerShell module version 1.0.0 or later. Run `G
## Create a self-signed certificate
-For production use, you should import a valid certificate signed by trusted provider. For this article, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
+For production use, you should import a valid certificate signed by a trusted provider. For this article, you create a self-signed certificate using [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate). You can use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate.
```powershell New-SelfSignedCertificate `
Remove-AzResourceGroup -Name myResourceGroupAG
## Next steps
-[Create an application gateway that hosts multiple web sites](./tutorial-multiple-sites-powershell.md)
+[Create an application gateway that hosts multiple web sites](./tutorial-multiple-sites-powershell.md)
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/azure-diagnostic-monitoring.md
Last updated 08/31/2020-++ # Set up diagnostics with a Trusted Platform Module (TPM) endpoint of Azure Attestation
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
attestation Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/private-endpoint-powershell.md
Last updated 03/26/2021-++
attestation Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-powershell.md
Last updated 08/31/2020-++
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-template.md
-+ Last updated 10/16/2020
attestation Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/troubleshoot-guide.md
Last updated 07/20/2020-++
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-hotpatch.md
Last updated 02/22/2021-++ # Hotpatch for new virtual machines (Preview)
There are some important considerations to running a Windows Server Azure editio
## Next steps * Learn about Azure Update Management [here](../automation/update-management/overview.md).
-* Learn more about Automatic VM Guest Patching [here](../virtual-machines/automatic-vm-guest-patching.md)
+* Learn more about Automatic VM Guest Patching [here](../virtual-machines/automatic-vm-guest-patching.md)
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|[Update Management](https://docs.microsoft.com/azure/automation/update-management/overview) |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No | |[Change Tracking & Inventory](https://docs.microsoft.com/azure/automation/change-tracking/overview) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No | |[Azure Guest Configuration](https://docs.microsoft.com/azure/governance/policy/concepts/guest-configuration) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the Guest Configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
-|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. |Production, Dev/Test |No |
+|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test |No |
|[Azure Automation Account](https://docs.microsoft.com/azure/automation/automation-create-standalone-account) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No | |[Log Analytics Workspace](https://docs.microsoft.com/azure/azure-monitor/logs/log-analytics-overview) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
Read carefully through the messaging in the resulting pop-up before agreeing to
First and foremost, we will not off-board the virtual machine from any of the services that we onboarded it to and configured. Any charges incurred by those services will remain billable; you will need to off-board if necessary. Any Automanage behavior will stop immediately. For example, we will no longer monitor the VM for drift.
+## Automanage and Azure Disk Encryption
+Automanage is compatible with VMs that have Azure Disk Encryption (ADE) enabled.
+
+If you are using the Production environment, you will also be onboarded to Azure Backup. There is one prerequisite to successfully using ADE and Azure Backup:
+* Before you onboard your ADE-enabled VM to Automanage's Production environment, ensure that you have followed the steps located in the **Before you start** section of [this document](https://docs.microsoft.com/azure/backup/backup-azure-vms-encryption#before-you-start).
+ ## Next steps In this article, you learned that Automanage for virtual machines provides a means for which you can eliminate the need for you to know of, onboard to, and configure best practices Azure services. In addition, if a machine you onboarded to Automanage for virtual machines drifts from the environment setup, we will automatically bring it back into compliance.
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
|[Update Management](https://docs.microsoft.com/azure/automation/update-management/overview) |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No | |[Change Tracking & Inventory](https://docs.microsoft.com/azure/automation/change-tracking/overview) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No | |[Azure Guest Configuration](https://docs.microsoft.com/azure/governance/policy/concepts/guest-configuration) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the Guest Configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
-|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. |Production, Dev/Test |No |
+|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test |No |
|[Azure Automation Account](https://docs.microsoft.com/azure/automation/automation-create-standalone-account) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No | |[Log Analytics Workspace](https://docs.microsoft.com/azure/azure-monitor/logs/log-analytics-overview) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
automanage Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/common-errors.md
Workspace region not matching region mapping requirements | Automanage was unabl
"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](https://docs.microsoft.com/azure/role-based-access-control/deny-assignments) was created on your resource, which prevented Automanage from accessing your resource. This denyAssignment may have been created by either a [Blueprint](https://docs.microsoft.com/azure/governance/blueprints/concepts/resource-locking) or a [Managed Application](https://docs.microsoft.com/azure/azure-resource-manager/managed-applications/overview).
"OS Information: Name='(null)', ver='(null)', agent status='Not Ready'." | Ensure that you're running a [minimum supported agent version](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](https://docs.microsoft.com/azure/virtual-machines/extensions/update-linux-agent) and [Windows](https://docs.microsoft.com/azure/virtual-machines/extensions/agent-windows)).
"Unable to determine the OS for the VM OS Name:, ver . Please check that the VM Agent is running, the current status is Ready." | Ensure that you're running a [minimum supported agent version](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](https://docs.microsoft.com/azure/virtual-machines/extensions/update-linux-agent) and [Windows](https://docs.microsoft.com/azure/virtual-machines/extensions/agent-windows)).
"VM has reported a failure when processing extension 'IaaSAntimalware'" | Ensure you don't have another antimalware/antivirus offering already installed on your VM. If that fails, contact support.
ASC workspace: Automanage does not currently support the Log Analytics service in _location_. | Check that your VM is located in a [supported region](./automanage-virtual-machines.md#supported-regions).
The template deployment failed because of policy violation. Please see details for more information. | There is a policy preventing Automanage from onboarding your VM. Check the policies applied to the subscription or resource group that contains the VM you want to onboard to Automanage.
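Several of the errors above come down to the VM guest agent not reporting a healthy state. A quick way to inspect what the agent is reporting is `az vm get-instance-view`. This is a minimal sketch, assuming the Azure CLI is installed and you've run `az login`; `<resourceGroup>` and `<vmName>` are placeholders for your own resources.

```shell
# Sketch: inspect the guest agent state that Automanage depends on.
# Assumes Azure CLI is installed and authenticated (`az login`);
# <resourceGroup> and <vmName> are placeholders.
az vm get-instance-view \
  --resource-group <resourceGroup> \
  --name <vmName> \
  --query "instanceView.vmAgent.{version:vmAgentVersion, statuses:statuses[].displayStatus}" \
  --output json
# A healthy agent reports a "Ready" display status; "Not Ready" or a
# '(null)' OS name usually means the agent must be started or updated.
```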
automanage Repair Automanage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/repair-automanage-account.md
Last updated 11/05/2020-++ # Repair an Automanage Account
az role assignment create --assignee-object-id <your Automanage Account Object I
``` ## Next steps
-[Learn more about Azure Automanage](./automanage-virtual-machines.md)
+[Learn more about Azure Automanage](./automanage-virtual-machines.md)
automation Automation Child Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-child-runbooks.md
description: This article tells how to create a runbook that is called by anothe
Last updated 01/17/2019-++ # Create modular runbooks
automation Automation Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-connections.md
Last updated 12/22/2020 -+ # Manage connections in Azure Automation
The following image shows an example of using a connection object in a graphical
* To learn more about the cmdlets used to access connections, see [Manage modules in Azure Automation](shared-resources/modules.md). * For general information about runbooks, see [Runbook execution in Azure Automation](automation-runbook-execution.md).
-* For details of DSC configurations, see [State Configuration overview](automation-dsc-overview.md).
+* For details of DSC configurations, see [State Configuration overview](automation-dsc-overview.md).
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-alert-triggered-runbook.md
description: This article tells how to trigger a runbook to run when an Azure al
Last updated 02/14/2021-++ # Use an alert to trigger an Azure Automation runbook
Alerts use action groups, which are collections of actions that are triggered by
* To discover different ways to start a runbook, see [Start a runbook](./start-runbooks.md). * To create an activity log alert, see [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md). * To learn how to create a near real-time alert, see [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Automation Deploy Template Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-deploy-template-runbook.md
description: This article describes how to deploy an Azure Resource Manager temp
Last updated 09/22/2020-++ keywords: powershell, runbook, json, azure automation
automation Automation Dsc Cd Chocolatey https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-cd-chocolatey.md
Last updated 08/08/2018 -+ # Set up continuous deployment with Chocolatey
automation Automation Dsc Compile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-compile.md
description: This article tells how to compile Desired State Configuration (DSC)
Last updated 04/06/2020-++ # Compile DSC configurations in Azure Automation State Configuration
automation Automation Dsc Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-diagnostics.md
Last updated 11/06/2018-++ # Integrate with Azure Monitor logs
Azure Automation diagnostics create two categories of records in Azure Monitor l
- For pricing information, see [Azure Automation State Configuration pricing](https://azure.microsoft.com/pricing/details/automation/). - To see an example of using Azure Automation State Configuration in a continuous deployment pipeline, see [Set up continuous deployment with Chocolatey](automation-dsc-cd-chocolatey.md). - To learn more about how to construct different search queries and review the Automation State Configuration logs with Azure Monitor logs, see [Log searches in Azure Monitor logs](../azure-monitor/logs/log-query-overview.md).-- To learn more about Azure Monitor logs and data collection sources, see [Collecting Azure storage data in Azure Monitor logs overview](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+- To learn more about Azure Monitor logs and data collection sources, see [Collecting Azure storage data in Azure Monitor logs overview](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
automation Automation Dsc Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-onboarding.md
Previously updated : 12/10/2019 Last updated : 12/10/2019 + # Enable Azure Automation State Configuration
automation Automation Edit Textual Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-edit-textual-runbook.md
Last updated 08/01/2018-++ # Edit textual runbooks in Azure Automation
automation Automation Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-faq.md
description: This article gives answers to frequently asked questions about Azur
Previously updated : 12/17/2020 Last updated : 12/17/2020 + # Azure Automation frequently asked questions
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-graphical-authoring-intro.md
description: This article tells how to author a graphical runbook without workin
Last updated 03/16/2018-++ # Author graphical runbooks in Azure Automation
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
description: This article describes how to run runbooks on machines in your loca
Last updated 03/10/2021-++ # Run runbooks on a Hybrid Runbook Worker
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hybrid-runbook-worker.md
description: This article provides an overview of the Hybrid Runbook Worker, whi
Last updated 01/22/2021-++ # Hybrid Runbook Worker overview
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
description: This article tells how to install an Azure Automation Hybrid Runboo
Last updated 04/06/2021-++ # Deploy a Linux Hybrid Runbook Worker
To remove a Hybrid Runbook Worker group of Linux machines, you use the same step
* To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
-* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues - Linux](troubleshoot/hybrid-runbook-worker.md#linux).
+* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues - Linux](troubleshoot/hybrid-runbook-worker.md#linux).
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-manage-send-joblogs-log-analytics.md
description: This article tells how to send job status and runbook job streams t
Last updated 09/02/2020-++ # Forward Azure Automation job data to Azure Monitor logs
AzureDiagnostics
* To understand creation and retrieval of output and error messages from runbooks, see [Monitor runbook output](automation-runbook-output-and-messages.md). * To learn more about runbook execution, how to monitor runbook jobs, and other technical details, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To learn more about Azure Monitor logs and data collection sources, see [Collecting Azure storage data in Azure Monitor logs overview](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/manage-cost-storage.md#troubleshooting-why-log-analytics-is-no-longer-collecting-data).
+* For help troubleshooting Log Analytics, see [Troubleshooting why Log Analytics is no longer collecting data](../azure-monitor/logs/manage-cost-storage.md#troubleshooting-why-log-analytics-is-no-longer-collecting-data).
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-managing-data.md
description: This article helps you learn how Azure Automation protects your pri
Last updated 03/10/2021-++ # Management of Azure Automation data
automation Automation Orchestrator Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-orchestrator-migration.md
description: This article tells how to migrate runbooks and integration packs fr
Last updated 03/16/2018-++ # Migrate from Orchestrator to Azure Automation (Beta)
automation Automation Powershell Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-powershell-workflow.md
description: This article teaches you the differences between PowerShell Workflo
Last updated 12/14/2018-++ # Learn PowerShell Workflow for Azure Automation
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
keywords: automation rbac, role based access control, azure rbac
Last updated 07/21/2020-++ # Manage role permissions and security
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
description: This article provides an overview of the processing of runbooks in
Last updated 03/23/2021-++ # Runbook execution in Azure Automation
automation Automation Runbook Graphical Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-graphical-error-handling.md
description: This article tells how to implement error handling logic in graphic
Last updated 03/16/2018-++ # Handle errors in graphical runbooks
automation Automation Runbook Output And Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-output-and-messages.md
description: This article tells how to implement error handling logic and descri
Last updated 11/03/2020-++ # Configure runbook output and message streams
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-types.md
description: This article describes the types of runbooks that you can use in Az
Last updated 02/17/2021-++ # Azure Automation runbook types
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
keywords: automation security, secure automation; automation authentication
Last updated 04/14/2021-++ # Azure Automation account authentication overview
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
description: This article tells how to send an email from within a runbook.
Last updated 01/05/2021-++ # Send an email from a runbook
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management.md
description: This article describes the Start/Stop VMs during off-hours feature,
Last updated 02/04/2020-++ # Start/Stop VMs during off-hours overview
If you've deployed a previous version of Start/Stop VMs during off-hours, delete
## Next steps
-To enable the feature on VMs in your environment, see [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
+To enable the feature on VMs in your environment, see [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
automation Automation Update Azure Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-update-azure-modules.md
description: This article tells how to update common Azure PowerShell modules pr
Last updated 06/14/2019-++ # Update Azure PowerShell modules
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-use-azure-ad.md
Title: Use Azure AD in Azure Automation to authenticate to Azure
description: This article tells how to use Azure AD within Azure Automation as the provider for authentication to Azure. Last updated 03/30/2020-++ # Use Azure AD to authenticate to Azure
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-webhooks.md
description: This article tells how to use a webhook to start a runbook in Azure
Last updated 03/18/2021-++ # Start a runbook from a webhook
The following image shows the request being sent from Windows PowerShell and the
## Next steps
-* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
+* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
description: This article tells how to deploy a Hybrid Runbook Worker that you c
Last updated 04/02/2021-++ # Deploy a Windows Hybrid Runbook Worker
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/delete-account.md
Last updated 04/15/2021-++ # How to delete your Azure Automation account
After the Automation account is successfully unlinked from the workspace, perfor
## Next steps
-To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md).
+To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md).
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
description: This article describes how to set up managed identity for Azure Aut
Last updated 04/14/2021-++ # Enable a managed identity for your Azure Automation account (preview)
print(response.text)
- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity (preview)](disable-managed-identity-for-automation.md). -- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Move Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/move-account.md
description: This article tells how to move your Automation account to another s
Last updated 01/07/2021-++ # Move your Azure Automation account to another subscription
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/private-link-security.md
Last updated 12/11/2020-++ # Use Azure Private Link to securely connect networks to Azure Automation
For more information, see [Azure Private Endpoint DNS configuration](../../priva
## Next steps
-To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../../private-link/private-endpoint-overview.md).
+To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../../private-link/private-endpoint-overview.md).
automation Automation Tutorial Runbook Graphical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-graphical.md
keywords: runbook, runbook template, runbook automation, azure runbook
Last updated 09/15/2020-++ # Tutorial: Create a graphical runbook
automation Automation Tutorial Runbook Textual Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual-powershell.md
keywords: azure powershell, powershell script tutorial, powershell automation
Last updated 04/19/2020-++ # Tutorial: Create a PowerShell runbook
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual.md
description: This article teaches you to create, test, and publish a simple Powe
Last updated 04/19/2020-++ # Tutorial: Create a PowerShell Workflow runbook
automation Manage Runas Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
description: This article tells how to manage your Run As account with PowerShel
Last updated 01/19/2021-++ # Manage an Azure Automation Run As account
You can quickly resolve these Run As account issues by [deleting](delete-run-as-
* [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md). * [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md). * To create or re-create a Run As account, see [Create a Run As account](create-run-as-account.md).
-* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
+* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runbooks.md
description: This article tells how to manage runbooks in Azure Automation.
Last updated 02/24/2021-++ # Manage runbooks in Azure Automation
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
automation Runbook Input Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/runbook-input-parameters.md
description: This article tells how to configure runbook input parameters, which
Last updated 02/14/2019-++ # Configure runbook input parameters
automation Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-baseline.md
Last updated 02/17/2021 -+ # Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
automation Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/certificates.md
description: This article tells how to work with certificates for access by runb
Last updated 12/22/2020-++ # Manage certificates in Azure Automation
automation Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/credentials.md
description: This article tells how to create credential assets and use them in
Last updated 12/22/2020-++ # Manage credentials in Azure Automation
While DSC configurations in Azure Automation can work with credential assets usi
* To learn more about the cmdlets used to access certificates, see [Manage modules in Azure Automation](modules.md). * For general information about runbooks, see [Runbook execution in Azure Automation](../automation-runbook-execution.md).
-* For details of DSC configurations, see [Azure Automation State Configuration overview](../automation-dsc-overview.md).
+* For details of DSC configurations, see [Azure Automation State Configuration overview](../automation-dsc-overview.md).
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
description: This article tells how to use PowerShell modules to enable cmdlets
Last updated 02/01/2021-++ # Manage modules in Azure Automation
Remove-AzAutomationModule -Name <moduleName> -AutomationAccountName <automationA
* For more information about using Azure PowerShell modules, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
-* To learn more about creating PowerShell modules, see [Writing a Windows PowerShell module](/powershell/scripting/developer/module/writing-a-windows-powershell-module).
+* To learn more about creating PowerShell modules, see [Writing a Windows PowerShell module](/powershell/scripting/developer/module/writing-a-windows-powershell-module).
automation Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/schedules.md
description: This article tells how to create and work with a schedule in Azure
Last updated 03/29/2021-++ # Manage schedules in Azure Automation
Remove-AzAutomationSchedule -AutomationAccountName $automationAccountName `
## Next steps * To learn more about the cmdlets used to access schedules, see [Manage modules in Azure Automation](modules.md).
-* For general information about runbooks, see [Runbook execution in Azure Automation](../automation-runbook-execution.md).
+* For general information about runbooks, see [Runbook execution in Azure Automation](../automation-runbook-execution.md).
automation Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/variables.md
description: This article tells how to work with variables in runbooks and DSC c
Last updated 03/28/2021-++ # Manage variables in Azure Automation
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/source-control-integration.md
description: This article tells how to synchronize Azure Automation source contr
Last updated 03/10/2021-++ # Use source control integration
automation Start Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/start-runbooks.md
description: This article tells how to start a runbook in Azure Automation.
Last updated 03/16/2018-++ # Start a runbook in Azure Automation
automation Desired State Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/desired-state-configuration.md
description: This article tells how to troubleshoot and resolve Azure Automation
Last updated 04/16/2019-++ # Troubleshoot Azure Automation State Configuration issues
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/hybrid-runbook-worker.md
Last updated 02/11/2021-++ # Troubleshoot Hybrid Runbook Worker issues
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/runbooks.md
description: This article tells how to troubleshoot and resolve issues with Azur
Last updated 02/11/2021 -+ # Troubleshoot runbook issues
automation Shared Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/shared-resources.md
description: This article tells how to troubleshoot and resolve issues with Azur
Last updated 01/27/2021-++ # Troubleshoot shared resource issues
If this article doesn't resolve your issue, try one of the following channels fo
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). * Connect with [@AzureSupport](https://twitter.com/azuresupport). This is the official Microsoft Azure account for connecting the Azure community to the right resources: answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/start-stop-vm.md
description: This article tells how to troubleshoot and resolve issues arising d
Last updated 04/04/2019-++ # Troubleshoot Start/Stop VMs during off-hours issues
If you don't see your problem here or you can't resolve your issue, try one of t
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-management.md
description: This article tells how to troubleshoot and resolve issues with Azur
Last updated 04/16/2021-++ # Troubleshoot Update Management issues
automation Tutorial Configure Servers Desired State https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/tutorial-configure-servers-desired-state.md
description: This article tells how to configure machines to a desired state usi
Previously updated : 08/08/2018 Last updated : 08/08/2018 + # Configure machines to a desired state
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-template.md
description: This article tells how to use an Azure Resource Manager template to
Previously updated : 09/18/2020 Last updated : 09/18/2020 + # Enable Update Management using Azure Resource Manager template
When you no longer need them, delete the **Updates** solution in the Log Analyti
* If you no longer want to use Update Management and wish to remove it, see instructions in [Remove Update Management feature](remove-feature.md).
-* To delete VMs from Update Management, see [Remove VMs from Update Management](remove-vms.md).
+* To delete VMs from Update Management, see [Remove VMs from Update Management](remove-vms.md).
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/pre-post-scripts.md
description: This article tells how to configure and manage pre-scripts and post
Last updated 03/08/2021-++ # Manage pre-scripts and post-scripts
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
description: To create highly available and resilient applications in Azure, Ava
Previously updated : 04/13/2021 Last updated : 04/21/2021
Azure services supporting Availability Zones fall into three categories: **zonal
- **Zonal services** – A resource can be deployed to a specific, self-selected Availability Zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources can be pinned to a specific zone. For example, virtual machines, managed disks, or standard IP addresses can be pinned to a specific zone, which allows for increased resilience by having one or more instances of resources spread across zones.
-- **Zone-redundant services** – Azure platform replicates the resource and data across zones. Microsoft manages the delivery of high availability since Azure automatically replicates and distributes instances within the region. ZRS, for example, replicates the data across three zones so that a zone failure does not impact the HA of the data.
+- **Zone-redundant services** – Resources are replicated or distributed across zones automatically. For example, ZRS replicates the data across three zones so that a zone failure does not impact the HA of the data.
- **Non-regional services** – Services are always available from Azure geographies and are resilient to zone-wide outages as well as region-wide outages.
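The zonal versus zone-redundant distinction above shows up directly in the Azure CLI. This is a sketch under the assumption of an existing resource group; `<resourceGroup>`, `<vmName>`, and `<ipName>` are placeholder names.

```shell
# Sketch: zonal vs. zone-redundant deployments with the Azure CLI.
# Assumes an existing resource group; all names are placeholders.

# Zonal: pin a VM (and its managed disk) to a single availability zone.
az vm create \
  --resource-group <resourceGroup> \
  --name <vmName> \
  --image UbuntuLTS \
  --zone 1

# Zone-redundant: a Standard public IP spanning zones 1, 2, and 3,
# so a single-zone failure does not take the address offline.
az network public-ip create \
  --resource-group <resourceGroup> \
  --name <ipName> \
  --sku Standard \
  --zone 1 2 3
```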
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
In this article, you learn how to:
To complete this tutorial, you must have:
-* [.NET Core SDK](https://www.microsoft.com/net/download/windows).
+* [.NET Core SDK](https://dotnet.microsoft.com/download).
* [Azure Cloud Shell configured](../cloud-shell/quickstart.md). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
Last updated 10/16/2020 -+ # Quickstart: Create an Azure App Configuration store by using an ARM template
Write-Host "Press [ENTER] to continue..."
To learn about adding a feature flag and a Key Vault reference to an App Configuration store, see the following ARM template examples. - [101-app-configuration-store-ff](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-ff)-- [101-app-configuration-store-keyvaultref](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-keyvaultref)
+- [101-app-configuration-store-keyvaultref](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-keyvaultref)
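The quickstart above creates a store from an ARM template; a minimal template for that scenario might look like the following. This is a sketch under stated assumptions — the apiVersion and sku value are illustrative and should be checked against the linked quickstart templates:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "configStoreName": {
      "type": "string",
      "metadata": { "description": "Globally unique name of the App Configuration store." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.AppConfiguration/configurationStores",
      "apiVersion": "2020-06-01",
      "name": "[parameters('configStoreName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "standard" }
    }
  ]
}
```

Deploying it with `New-AzResourceGroupDeployment` (or `az deployment group create`) then creates the store in the resource group's region.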
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021 #
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Last updated 03/25/2021-++ # Overview of Azure Arc enabled servers agent
azure-arc Tutorial Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md
Title: Tutorial - New policy assignment with Azure portal description: In this tutorial, you use Azure portal to create an Azure Policy assignment to identify non-compliant resources. Previously updated : 10/07/2020 Last updated : 04/21/2021 # Tutorial: Create a policy assignment to identify non-compliant resources
To remove the assignment created, follow these steps:
## Next steps
-In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc enabled servers machine with Azure Monitor for VMs.
+In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc enabled servers machine by enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md).
To learn how to monitor and view the performance, running process and their dependencies from your machine, continue to the tutorial: > [!div class="nextstepaction"]
-> [Enable Azure Monitor for VMs](tutorial-enable-vm-insights.md)
+> [Enable VM insights](tutorial-enable-vm-insights.md)
azure-arc Tutorial Enable Vm Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md
Title: Tutorial - Monitor a hybrid machine with Azure Monitor for VMs
+ Title: Tutorial - Monitor a hybrid machine with Azure Monitor VM insights
description: Learn how to collect and analyze data from a hybrid machine in Azure Monitor. Previously updated : 09/23/2020 Last updated : 04/21/2021
-# Tutorial: Monitor a hybrid machine with Azure Monitor for VMs
+# Tutorial: Monitor a hybrid machine with VM insights
-[Azure Monitor](../overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically this would entail installing the [Log Analytics agent](../../../azure-monitor/agents/agents-overview.md#log-analytics-agent) on the machine using a script, manually, or automated method following your configuration management standards. Arc enabled servers recently introduced support to install the Log Analytics and Dependency agent [VM extensions](../manage-vm-extensions.md) for Windows and Linux, enabling Azure Monitor to collect data from your non-Azure VMs.
+[Azure Monitor](../../../azure-monitor/overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically this would entail installing the [Log Analytics agent](../../../azure-monitor/agents/agents-overview.md#log-analytics-agent) on the machine manually, with a script, or through an automated method following your configuration management standards. Arc enabled servers recently introduced support to install the Log Analytics and Dependency agent [VM extensions](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md) to collect data from your non-Azure VMs.
-This tutorial shows you how to configure and collect data from your Linux or Windows machines by enabling Azure Monitor for VMs following a simplified set of steps, which streamlines the experience and takes a shorter amount of time.
+This tutorial shows you how to configure and collect data from your Linux or Windows machines by enabling VM insights following a simplified set of steps, which streamlines the experience and takes less time.
## Prerequisites
This tutorial shows you how to configure and collect data from your Linux or Win
* VM extension functionality is available only in the list of [supported regions](../overview.md#supported-regions).
-* See [Supported operating systems](../../../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) to ensure that the servers operating system you're enabling is supported by Azure Monitor for VMs.
+* See [Supported operating systems](../../../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) to ensure that the server's operating system you're enabling is supported by VM insights.
-* Review firewall requirements for the Log Analytics agent provided in the [Log Analytics agent overview](../../../azure-monitor/agents/log-analytics-agent.md#network-requirements). The Azure Monitor for VMs Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports.
+* Review firewall requirements for the Log Analytics agent provided in the [Log Analytics agent overview](../../../azure-monitor/agents/log-analytics-agent.md#network-requirements). The VM insights Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports.
## Sign in to Azure portal Sign in to the [Azure portal](https://portal.azure.com).
-## Enable Azure Monitor for VMs
+## Enable VM insights
1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Machines - Azure Arc**.
Sign in to the [Azure portal](https://portal.azure.com).
1. On the Azure Monitor **Insights Onboarding** page, you are prompted to create a workspace. For this tutorial, we don't recommend you select an existing Log Analytics workspace if you have one already. Select the default, which is a workspace with a unique name in the same region as your registered connected machine. This workspace is created and configured for you.
- :::image type="content" source="./media/tutorial-enable-vm-insights/enable-vm-insights.png" alt-text="Enable Azure Monitor for VMs page" border="false":::
+ :::image type="content" source="./media/tutorial-enable-vm-insights/enable-vm-insights.png" alt-text="Enable VM insights page" border="false":::
1. You receive status messages while the configuration is performed. This process takes a few minutes as extensions are installed on your connected machine.
- :::image type="content" source="./media/tutorial-enable-vm-insights/onboard-vminsights-vm-portal-status.png" alt-text="Enable Azure Monitor for VMs progress status message" border="false":::
+ :::image type="content" source="./media/tutorial-enable-vm-insights/onboard-vminsights-vm-portal-status.png" alt-text="Enable VM insights progress status message" border="false":::
When it's complete, you get a message that the machine has been successfully onboarded and the insight has been successfully deployed.
Sign in to the [Azure portal](https://portal.azure.com).
After the deployment and configuration are completed, select **Insights**, and then select the **Performance** tab. The Performance tab shows a select group of performance counters collected from the guest operating system of your machine. Scroll down to view more counters, and move the mouse over a graph to view averages and percentiles taken starting from the time when the Log Analytics VM extension was installed on the machine. Select **Map** to open the maps feature, which shows the processes running on the machine and their dependencies. Select **Properties** to open the property pane if it isn't already open. Expand the processes for your machine. Select one of the processes to view its details and to highlight its dependencies.
azure-arc Manage Vm Extensions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-powershell.md
Title: Enable VM extension using Azure PowerShell description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using Azure PowerShell. Last updated 04/13/2021-++ # Enable Azure VM extensions using Azure PowerShell
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-template.md
Title: Enable VM extension using Azure Resource Manager template description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Last updated 04/13/2021-++ # Enable Azure VM extensions by using ARM template
New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateF
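The template referenced by the truncated `New-AzResourceGroupDeployment` command above would declare the extension as a child resource of the Arc-enabled machine. The fragment below is a hedged sketch — the apiVersion, parameter names, and the choice of the Log Analytics agent extension are illustrative assumptions, not taken from this article:

```json
{
  "type": "Microsoft.HybridCompute/machines/extensions",
  "apiVersion": "2021-01-28",
  "name": "[concat(parameters('machineName'), '/MicrosoftMonitoringAgent')]",
  "location": "[parameters('location')]",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "autoUpgradeMinorVersion": true,
    "settings": { "workspaceId": "[parameters('workspaceId')]" },
    "protectedSettings": { "workspaceKey": "[parameters('workspaceKey')]" }
  }
}
```

The workspace key goes in `protectedSettings` so it is encrypted in transit and not returned when the extension resource is read back.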
* You can deploy, manage, and remove VM extensions using the [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or the [Azure CLI](manage-vm-extensions-cli.md).
-* Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md).
+* Troubleshooting information can be found in the [Troubleshoot VM extensions guide](troubleshoot-vm-extensions.md).
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Azure Arc enabled servers VM extension support provides the following key benefi
- Collect log data for analysis with [Logs in Azure Monitor](../../azure-monitor/logs/data-platform-logs.md) by enabling the Log Analytics agent VM extension. This is useful for doing complex analysis across data from different kinds of sources. -- With [Azure Monitor for VMs](../../azure-monitor/vm/vminsights-overview.md), analyzes the performance of your Windows and Linux VMs, and monitor their processes and dependencies on other resources and external processes. This is achieved through enabling both the Log Analytics agent and Dependency agent VM extensions.
+- With [VM insights](../../azure-monitor/vm/vminsights-overview.md), analyze the performance of your Windows and Linux VMs, and monitor their processes and dependencies on other resources and external processes. This is achieved through enabling both the Log Analytics agent and Dependency agent VM extensions.
- Download and execute scripts on hybrid connected machines using the Custom Script Extension. This extension is useful for post deployment configuration, software installation, or any other configuration or management tasks.
azure-arc Onboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-powershell.md
Title: Connect hybrid machines to Azure by using PowerShell description: In this article, you learn how to install the agent and connect a machine to Azure by using Azure Arc enabled servers. You can do this with PowerShell. Last updated 10/28/2020-++ # Connect hybrid machines to Azure by using PowerShell
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc enabled servers using a service principal. Last updated 03/04/2021-++ # Connect hybrid machines to Azure at scale
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
Title: Azure Arc enabled servers Overview description: Learn how to use Azure Arc enabled servers to manage servers hosted outside of Azure like an Azure resource. keywords: azure automation, DSC, powershell, desired state configuration, update management, change tracking, inventory, runbooks, python, graphical, hybrid Previously updated : 02/18/2021 Last updated : 04/21/2021
When you connect your machine to Azure Arc enabled servers, it enables the abili
- Report on configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers using Azure Automation [Change Tracking and Inventory](../../automation/change-tracking/overview.md) and [Azure Security Center File Integrity Monitoring](../../security-center/security-center-file-integrity-monitoring.md), for servers enabled with [Azure Defender for servers](../../security-center/defender-for-servers-introduction.md). -- Monitor your connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources the application communicates using [Azure Monitor for VMs](../../azure-monitor/vm/vminsights-overview.md).
+- Monitor your connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources the application communicates using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
-- Simplify deployment with other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and Azure Monitor Log Analytics workspace using the supported [Azure VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. This includes performing post-deployment configuration or software installation using the Custom Script Extension.
+- Simplify deployment using other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and Azure Monitor Log Analytics workspace, using the supported [Azure VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. This includes performing post-deployment configuration or software installation using the Custom Script Extension.
- Use [Update Management](../../automation/update-management/overview.md) in Azure Automation to manage operating system updates for your Windows and Linux servers
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: How to plan and deploy Azure Arc enabled servers description: Learn how to enable a large number of machines to Azure Arc enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 03/18/2021 Last updated : 04/21/2021
Phase 3 sees administrators or system engineers enable automation of manual task
|--|-|| |Create a Resource Health alert |If a server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it is offline, the network connection has been blocked, or the agent is not running. Develop a plan for how you'll respond and investigate these incidents and use [Resource Health alerts](../..//service-health/resource-health-alert-monitor-guide.md) to get notified when they start.<br><br> Specify the following when configuring the alert:<br> **Resource type** = **Azure Arc enabled servers**<br> **Current resource status** = **Unavailable**<br> **Previous resource status** = **Available** | One hour | |Create an Azure Advisor alert | For the best experience and most recent security and bug fixes, we recommend keeping the Azure Arc enabled servers agent up to date. Out-of-date agents will be identified with an [Azure Advisor alert](../../advisor/advisor-alerts-portal.md).<br><br> Specify the following when configuring the alert:<br> **Recommendation type** = **Upgrade to the latest version of the Azure Connected Machine Agent** | One hour |
-|[Assign Azure policies](../../governance/policy/assign-policy-portal.md) to your subscription or resource group scope |Assign the **Enable Azure Monitor for VMs** [policy](../../azure-monitor/vm/vminsights-enable-policy.md) (and others that meet your needs) to the subscription or resource group scope. Azure Policy allows you to assign policy definitions that install the required agents for Azure Monitor for VMs across your environment.| Varies |
+|[Assign Azure policies](../../governance/policy/assign-policy-portal.md) to your subscription or resource group scope |Assign the **Enable Azure Monitor for VMs** [policy](../../azure-monitor/vm/vminsights-enable-policy.md) (and others that meet your needs) to the subscription or resource group scope. Azure Policy allows you to assign policy definitions that install the required agents for VM insights across your environment.| Varies |
|[Enable Update Management for your Arc enabled servers](../../automation/update-management/enable-from-automation-account.md) |Configure Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines registered with Arc enabled servers. | 15 minutes | ## Next steps
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-australia Reference Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/reference-library.md
This resource library contains additional links and references that are relevant
* [Azure Key Vault Overview](../key-vault/general/overview.md) * [About keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md) * [Configure Azure Key Vault firewalls and virtual networks](../key-vault/general/network-security.md)
-* [Secure access to a key vault](../key-vault/general/security-overview.md)
+* [Secure access to a key vault](../key-vault/general/security-features.md)
* [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md) * [How to use Azure Key Vault with Azure Windows Virtual Machines in .NET](../key-vault/general/tutorial-net-virtual-machine.md) * [Azure Key Vault managed storage account - PowerShell](../key-vault/general/tutorial-net-virtual-machine.md)
azure-australia Role Privileged https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/role-privileged.md
Last updated 07/22/2019-++ # Azure role-based access control (Azure RBAC) and Privileged Identity Management (PIM)
The progress of pending Access Reviews can be monitored at any time via a dashbo
## Next steps
-Review the article on [System Monitoring in Azure Australia](system-monitor.md).
+Review the article on [System Monitoring in Azure Australia](system-monitor.md).
azure-australia Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-australia/vpn-gateway.md
Last updated 07/22/2019-++ # Azure VPN Gateway in Azure Australia
This article covered the specific configuration of VPN Gateway to meet the requi
- [Azure virtual network gateway overview](../vpn-gateway/index.yml) - [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md) - [Create a virtual network with a site-to-site VPN connection by using PowerShell](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md) -- [Create and manage a VPN gateway](../vpn-gateway/tutorial-create-gateway-portal.md)
+- [Create and manage a VPN gateway](../vpn-gateway/tutorial-create-gateway-portal.md)
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-administration.md
Last updated 07/05/2017-++ # How to administer Azure Cache for Redis
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
Last updated 08/22/2017-++ # How to configure Azure Cache for Redis
azure-cache-for-redis Cache Dotnet How To Use Azure Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-how-to-use-azure-redis-cache.md
If you want to skip straight to the code, see the [.NET Framework quickstart](ht
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - [Visual Studio 2019](https://www.visualstudio.com/downloads/)-- [.NET Framework 4 or higher](https://www.microsoft.com/net/download/dotnet-framework-runtime), which is required by the StackExchange.Redis client.
+- [.NET Framework 4 or higher](https://dotnet.microsoft.com/download/dotnet-framework), which is required by the StackExchange.Redis client.
## Create a cache [!INCLUDE [redis-cache-create](../../includes/redis-cache-create.md)]
azure-cache-for-redis Cache Event Grid Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-powershell.md
Last updated 1/5/2021
-++ # Quickstart: Route Azure Cache for Redis events to web endpoint with PowerShell
Remove-AzResourceGroup -Name $resourceGroup
Now that you know how to create topics and event subscriptions, learn more about Azure Cache for Redis events and what Event Grid can help you do: - [Reacting to Azure Cache for Redis events](cache-event-grid.md)-- [About Event Grid](../event-grid/overview.md)
+- [About Event Grid](../event-grid/overview.md)
azure-cache-for-redis Cache How To Manage Redis Cache Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-manage-redis-cache-powershell.md
Last updated 07/13/2017-++ # Manage Azure Cache for Redis with Azure PowerShell
To learn more about using Windows PowerShell with Azure, see the following resou
* [Using Resource groups to manage your Azure resources](../azure-resource-manager/templates/deploy-portal.md): Learn how to create and manage resource groups in the Azure portal. * [Azure blog](https://azure.microsoft.com/blog/): Learn about new features in Azure. * [Windows PowerShell blog](https://devblogs.microsoft.com/powershell/): Learn about new features in Windows PowerShell.
-* ["Hey, Scripting Guy!" Blog](https://devblogs.microsoft.com/scripting/tag/hey-scripting-guy/): Get real-world tips and tricks from the Windows PowerShell community.
+* ["Hey, Scripting Guy!" Blog](https://devblogs.microsoft.com/scripting/tag/hey-scripting-guy/): Get real-world tips and tricks from the Windows PowerShell community.
azure-cache-for-redis Cache How To Redis Cli Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-redis-cli-tool.md
Previously updated : 02/08/2021 Last updated : 02/08/2021 + # Use the Redis command-line tool with Azure Cache for Redis
redis-cli.exe -h yourcachename.redis.cache.windows.net -p 6379 -a YourAccessKey
## Next steps
-Learn more about using the [Redis Console](cache-configure.md#redis-console) to issue commands.
+Learn more about using the [Redis Console](cache-configure.md#redis-console) to issue commands.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
Previously updated : 02/08/2021 Last updated : 02/08/2021 + # Scale an Azure Cache for Redis instance Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after it's been created to keep up with your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
-+ Last updated 08/18/2020
azure-cache-for-redis Cache Web App Arm With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-arm-with-redis-cache-provision.md
Last updated 01/06/2017-++ # Create a Web App plus Azure Cache for Redis using a template
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-cache-for-redis Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-baseline.md
Last updated 02/17/2021 -+ # Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-functions Create First Function Cli Csharp Ieux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-csharp-ieux.md
Title: Create a C# function from the command line - Azure Functions
description: Learn how to create a C# function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 10/03/2020 -+
There is also a [Visual Studio Code-based version](create-first-function-vs-code
+ Get an Azure <abbr title="The profile that maintains billing information for Azure usage.">account</abbr> with an active <abbr title="The basic organizational structure in which you manage resources on Azure, typically associated with an individual or department within an organization.">subscription</abbr>. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ Install [.NET Core SDK 3.1](https://www.microsoft.com/net/download)
++ Install [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) + Install [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-csharp.md
Title: Create a C# function from the command line - Azure Functions
description: Learn how to create a C# function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 10/03/2020 -+ adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Before you begin, you must have the following:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [.NET Core SDK 3.1](https://www.microsoft.com/net/download)
++ The [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) + The [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
azure-functions Create First Function Cli Java Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java-uiex.md
Title: Create a Java function from the command line - Azure Functions
description: Learn how to create a Java function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java.md
Title: Create a Java function from the command line - Azure Functions
description: Learn how to create a Java function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+ adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-node.md
Title: Create a JavaScript function from the command line - Azure Functions
description: Learn how to create a JavaScript function from the command line, then publish the local Node.js project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+ # Quickstart: Create a JavaScript function in Azure from the command line
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
## Next steps > [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-javascript)
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-javascript)
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-powershell.md
Title: Create a PowerShell function from the command line - Azure Functions
description: Learn how to create a PowerShell function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+ # Quickstart: Create a PowerShell function in Azure from the command line
Before you begin, you must have the following:
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
-+ The [.NET Core SDK 3.1](https://www.microsoft.com/net/download)
++ The [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) ### Prerequisite check
azure-functions Create First Function Cli Python Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python-uiex.md
Title: Create a Python function from the command line for Azure Functions
description: Learn how to create a Python function from the command line and publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
Title: Create a Python function from the command line - Azure Functions
description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+ adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
In a separate terminal window or in the browser, call the remote function again.
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
-[Having issues? Let us know.](https://aka.ms/python-functions-qs-survey)
+[Having issues? Let us know.](https://aka.ms/python-functions-qs-survey)
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-typescript.md
Title: Create a TypeScript function from the command line - Azure Functions
description: Learn how to create a TypeScript function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020 -+ # Quickstart: Create a TypeScript function in Azure from the command line
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-powershell.md
Before you get started, make sure you have the following requirements in place:
+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)
-+ Both [.NET Core 3.1 runtime](https://www.microsoft.com/net/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet-core/2.1)
++ Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1) + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-functions Disable Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/disable-function.md
Title: How to disable functions in Azure Functions
description: Learn how to disable and enable functions in Azure Functions. Last updated 03/15/2021 -+ # How to disable functions in Azure Functions
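The disable mechanism this article covers works through an app setting named `AzureWebJobs.<FUNCTION_NAME>.Disabled` on Functions runtime 2.x and later. A minimal sketch, assuming a function named `HttpExample` (the function name is a placeholder), expressed as an app-settings fragment:

```json
{
  "AzureWebJobs.HttpExample.Disabled": "true"
}
```

Setting the value back to `false`, or removing the setting, re-enables the function.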
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
If you don't need to support .NET 5.0 or run your functions out-of-process, you
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [.NET SDK 5.0](https://www.microsoft.com/net/download)
++ [.NET 5.0 SDK](https://dotnet.microsoft.com/download) + [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3381, or a later version.
azure-functions Functions Create First Function Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-function-resource-manager.md
description: Create and deploy to Azure a simple HTTP triggered serverless funct
Last updated 3/5/2020 -+ # Quickstart: Create and deploy Azure Functions resources from an ARM template
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs-code.md
These prerequisites are only required to [run and debug your functions locally](
+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-+ Both [.NET Core 3.1 runtime](https://www.microsoft.com/net/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet-core/2.1)
++ Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1) + The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
description: Learn how to configure function app settings in Azure Functions.
ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c Last updated 04/13/2020-+ # Manage your function app
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
description: Learn how to build an Azure Resource Manager template that deploys
ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Last updated 04/03/2019-+ # Automate resource deployment for your function app in Azure Functions
azure-functions Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
Last updated 02/17/2021 -+ # Important: This content is machine generated; do not modify this topic directly. Contact mbaldwin for more information.
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/set-runtime-version.md
Title: How to target Azure Functions runtime versions
description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure. Previously updated : 07/22/2020 Last updated : 07/22/2020 + # How to target Azure Functions runtime versions
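The runtime version targeted by a function app is controlled by the `FUNCTIONS_EXTENSION_VERSION` app setting. As a hedged sketch of the app-settings fragment (the `~3` value pins the app to the latest 3.x runtime; other valid values include `~2` and `~1`):

```json
{
  "FUNCTIONS_EXTENSION_VERSION": "~3"
}
```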
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/streaming-logs.md
Title: Stream execution logs in Azure Functions
description: 115-145 characters including spaces. This abstract displays in the search result. Last updated 9/1/2020 -+ # Customer intent: As a developer, I want to be able to configure streaming logs so that I can see what's happening in my functions in near real time.
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
Azure AD enforces tenant isolation and implements robust measures to prevent acc
- **Management plane** enables customers to manage the key vault or managed HSM itself, for example, create and delete key vaults or managed HSMs, retrieve key vault or managed HSM properties, and update access policies. For authorization, the management plane uses Azure RBAC with both key vaults and managed HSMs. - **Data plane** enables customers to work with the data stored in their key vaults and managed HSMs, including adding, deleting, and modifying their data. For vaults, stored data can include keys, secrets, and certificates. For managed HSMs, stored data is limited to cryptographic keys only. For authorization, the data plane uses [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) and [Azure RBAC for data plane operations](../key-vault/general/rbac-guide.md) with key vaults, or [managed HSM local RBAC](../key-vault/managed-hsm/access-control.md) with managed HSMs.
-When you create a key vault or managed HSM in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the [key vault](../key-vault/general/security-overview.md) or [managed HSM](../key-vault/managed-hsm/access-control.md).
+When you create a key vault or managed HSM in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the [key vault](../key-vault/general/security-features.md) or [managed HSM](../key-vault/managed-hsm/access-control.md).
Azure customers control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:
Vaults enable support for [customer-managed keys](../security/fundamentals/encry
Azure Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling customers to enroll and automatically renew certificates from supported public Certificate Authorities. Azure Key Vault certificates support provides for the management of a customer's X.509 certificates, which are built on top of keys and provide an automated renewal feature. The certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority-generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
-When customers create a key vault in a resource group, they can [manage access](../key-vault/general/security-overview.md) by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
+When customers create a key vault in a resource group, they can [manage access](../key-vault/general/security-features.md) by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate Azure roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.
> [!IMPORTANT] > Customers should control tightly who has Contributor role access to their key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy. > > *Additional resources:*
-> - How to **[secure access to a key vault](../key-vault/general/security-overview.md)**
+> - How to **[secure access to a key vault](../key-vault/general/security-features.md)**
#### Managed HSM
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
na Previously updated : 04/14/2021 Last updated : 04/20/2021 # Compare Azure Government and global Azure Microsoft Azure Government uses the same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place, as well as the same Microsoft commitment to the safeguarding of customer data. Whereas both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an additional layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening). These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations.
-### Export control implications
+## Export control implications
Customers are responsible for designing and deploying their applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, customers should not include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
-### Guidance for developers
+## Guidance for developers
Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and, once you connect, you will mostly have the same experience as in global Azure. The table below lists API endpoints in Azure vs. Azure Government for accessing and managing various services.
Azure Government services operate the same way as the corresponding services in
|||abfa0a7c-a6b6-4736-8310-5855508787cd|6a02c803-dafd-4136-b4c3-5a6f318b4714|Service Principal ID| ||Azure Cognitive Search|\*.search.windows.net|\*.search.windows.us||
-### Service availability
+## Service availability
Microsoft's goal is to enable 100% parity in service availability between Azure and Azure Government. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, customers are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia) for the latest, up-to-date information on service availability.
The following Virtual Machines **features are not currently available** in Azure
### [Azure Functions](../azure-functions/index.yml)
-When connecting your function app to Application Insights in Azure Government, make sure you use [`APPLICATIONINSIGHTS_CONNECTION_STRING`](../azure-functions/functions-app-settings.md#applicationinsights_connection_string), which lets you customize the Application Insights endpoint.
+The following Functions **features are not currently available** in Azure Government:
+
+- Running .NET 5 apps
+
+When connecting your Functions app to Application Insights in Azure Government, make sure you use [`APPLICATIONINSIGHTS_CONNECTION_STRING`](../azure-functions/functions-app-settings.md#applicationinsights_connection_string), which lets you customize the Application Insights endpoint.
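In sovereign clouds the connection string carries the endpoint information that `APPLICATIONINSIGHTS_CONNECTION_STRING` exists to customize. A hedged sketch of the app-settings fragment for Azure Government (the instrumentation key is a placeholder; `applicationinsights.us` is the documented Azure Government endpoint suffix):

```json
{
  "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=applicationinsights.us"
}
```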
## Databases
This section outlines variations and considerations when using Databases service
The following Azure Database for MySQL **features are not currently available** in Azure Government: - Advanced Threat Protection-- Private endpoint connections ### [Azure Database for PostgreSQL](../postgresql/index.yml)
The following Azure Database for PostgreSQL **features are not currently availab
- Hyperscale (Citus) and Flexible server deployment options - The following features of the Single server deployment option - Advanced Threat Protection
- - Private endpoint connections
## Developer Tools
The following App Service **features are not currently available** in Azure Gove
- Deployment options: only Local Git Repository and External Repository are available - Development tools - Resource explorer
+- Azure Government portal
+ - Private endpoints for Web Apps cannot be configured in the UI; however, private endpoints are enabled in Azure Government and you can use the Private Link Center if you need the UI.
## Next steps
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/connect-with-azure-pipelines.md
ms.devlang: na
na Previously updated : 10/25/2018 Last updated : 10/25/2018 + # Deploy an app in Azure Government with Azure Pipelines
A: Currently, Team Foundation Server cannot be used to deploy to an Azure Govern
## Next steps * Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/) * Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
-* Give us feedback or request new features via the [Azure Government feedback forum](https://feedback.azure.com/forums/558487-azure-government)
+* Give us feedback or request new features via the [Azure Government feedback forum](https://feedback.azure.com/forums/558487-azure-government)
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-cognitiveservices.md
na Last updated 10/10/2020-+ # Cognitive Services on Azure Government – Computer Vision, Face, and Translator
azure-government Documentation Government Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-extension.md
ms.devlang: na
na Previously updated : 03/11/2021 Last updated : 03/11/2021 + # Azure Government virtual machine extensions
Out-File vm-extensions.md
## Next steps * [Deploy a Windows virtual machine extension](../virtual-machines/extensions/features-windows.md#run-vm-extensions)
-* [Deploy a Linux virtual machine extension](../virtual-machines/extensions/features-linux.md#run-vm-extensions)
+* [Deploy a Linux virtual machine extension](../virtual-machines/extensions/features-linux.md#run-vm-extensions)
azure-government Documentation Government Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-image-gallery.md
ms.devlang: na
na Previously updated : 03/09/2021 Last updated : 03/09/2021 + # Azure Government Marketplace images
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-itar.md
Azure provides many options for [encrypting data in transit](../security/fundame
Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-overview.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, etc. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-features.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key rotation, permissions, deleting keys, enabling auditing/reporting on all TDE Protectors, etc. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
## Restrictions on insider access
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-wwps.md
Azure provides many options for [encrypting data in transit](../security/fundame
Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
-Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-overview.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. Database Encryption Key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It is secured via a certificate stored in the master database of the server or an asymmetric key called TDE Protector stored under customer control in [Azure Key Vault](../key-vault/general/security-features.md), which is Azure's cloud-based external key management system. Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables customers to store TDE Protector in Key Vault and control key management tasks including key permissions, rotation, deletion, enabling auditing/reporting on all TDE Protectors, and so on. The key can be generated by the Key Vault, imported, or [transferred to the Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). Customers can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing clients to encrypt data inside client applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between those users who own the data (and can view it) and those users who manage the data (but should have no access).
### Data encryption in use
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/auto-collect-dependencies.md
Below is the currently supported list of dependency calls that are automatically
| ASP.NET WebAPI | 4.5+ | | ASP.NET Core | 1.1+ | | <b> Communication libraries</b> |
-| [HttpClient](https://www.microsoft.com/net/) | 4.5+, .NET Core 1.1+ |
+| [HttpClient](https://dotnet.microsoft.com) | 4.5+, .NET Core 1.1+ |
| [SqlClient](https://www.nuget.org/packages/System.Data.SqlClient) | .NET Core 1.0+, NuGet 4.3.0 | | [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient/1.1.2)| 1.1.0 - latest stable release. (See Note below.) | [EventHubs Client SDK](https://www.nuget.org/packages/Microsoft.Azure.EventHubs) | 1.1.0 |
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
See [configuration options](./java-standalone-config.md) for full details.
* Micrometer (including Spring Boot Actuator metrics) * JMX Metrics
-### Azure SDKs
-
-* This feature is in preview, see the [configuration options](./java-standalone-config.md#auto-collected-azure-sdk-telemetry) for how to enable it.
+### Azure SDKs (preview)
+
+See the [configuration options](./java-standalone-config.md#auto-collected-azure-sdk-telemetry-preview)
+to enable this preview feature and capture the telemetry emitted by these Azure SDKs:
+
+* [App Configuration](https://docs.microsoft.com/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
+* [Cognitive Search](https://docs.microsoft.com/java/api/overview/azure/search-documents-readme) 11.3.0+
+* [Communication Chat](https://docs.microsoft.com/java/api/overview/azure/communication-chat-readme) 1.0.0+
+* [Communication Common](https://docs.microsoft.com/java/api/overview/azure/communication-common-readme) 1.0.0+
+* [Communication Identity](https://docs.microsoft.com/java/api/overview/azure/communication-identity-readme) 1.0.0+
+* [Communication Sms](https://docs.microsoft.com/java/api/overview/azure/communication-sms-readme) 1.0.0+
+* [Cosmos DB](https://docs.microsoft.com/java/api/overview/azure/cosmos-readme) 4.13.0+
+* [Event Grid](https://docs.microsoft.com/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+
+* [Event Hubs](https://docs.microsoft.com/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
+* [Event Hubs - Azure Blob Storage Checkpoint Store](https://docs.microsoft.com/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+
+* [Form Recognizer](https://docs.microsoft.com/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+
+* [Identity](https://docs.microsoft.com/java/api/overview/azure/identity-readme) 1.2.4+
+* [Key Vault - Certificates](https://docs.microsoft.com/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+
+* [Key Vault - Keys](https://docs.microsoft.com/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+
+* [Key Vault - Secrets](https://docs.microsoft.com/java/api/overview/azure/security-keyvault-secrets-readme) 4.2.6+
+* [Service Bus](https://docs.microsoft.com/java/api/overview/azure/messaging-servicebus-readme) 7.1.0+
+* [Text Analytics](https://docs.microsoft.com/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
+
+[//]: # "the above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
+[//]: # "and version sync'd manually against the oldest version in maven central built on azure-core 1.14.0"
+[//]: # ""
+[//]: # "var table = document.querySelector('#tg-sb-content > div > table')"
+[//]: # "var str = ''"
+[//]: # "for (var i = 1, row; row = table.rows[i]; i++) {"
+[//]: # " var name = row.cells[0].getElementsByTagName('div')[0].textContent.trim()"
+[//]: # " var stableRow = row.cells[1]"
+[//]: # " var versionBadge = stableRow.querySelector('.badge')"
+[//]: # " if (!versionBadge) {"
+[//]: # " continue"
+[//]: # " }"
+[//]: # " var version = versionBadge.textContent.trim()"
+[//]: # " var link = stableRow.querySelectorAll('a')[2].href"
+[//]: # "  str += '* [' + name + '](' + link + ') ' + version + '\n'"
+[//]: # "}"
+[//]: # "console.log(str)"
## Send custom telemetry from your application
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
} ```
-## Auto-collected Azure SDK telemetry
+## Auto-collected Azure SDK telemetry (preview)
-This feature is in preview.
+Many of the latest Azure SDK libraries emit telemetry (see the [full list](./java-in-process-agent.md#azure-sdks-preview)).
-Many of the latest Azure SDK libraries emit telemetry.
+Starting from Application Insights Java 3.0.3, you can enable capturing this telemetry.
-Starting from version 3.0.3, you can enable collection of this telemetry:
+If you want to enable this feature:
```json {
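The snippet above is truncated in this digest; as a concrete illustration, a minimal `applicationinsights.json` sketch that turns on the preview Azure SDK instrumentation (key names assume the 3.0.x preview schema and may change while the feature is in preview):

```json
{
  "preview": {
    "instrumentation": {
      "azureSdk": {
        "enabled": true
      }
    }
  }
}
```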
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sampling.md
By default no sampling is enabled in the Java agent and SDK. Currently it only s
```json { "sampling": {
- "percentage": 10 //this is just an example that shows you how to enable only only 10% of transaction
+      "percentage": 10 //this is just an example that shows you how to capture only 10% of transactions
} } ```
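To build intuition for what a fixed percentage means here, a toy Python sketch of rate-based sampling (illustration only — not the agent's actual algorithm, which makes a consistent decision per operation ID so related telemetry is kept or dropped together):

```python
import random

def should_sample(percentage: float, rng: random.Random) -> bool:
    # Keep roughly `percentage` percent of transactions.
    return rng.random() * 100 < percentage

rng = random.Random(42)  # seeded so the example is repeatable
kept = sum(should_sample(10, rng) for _ in range(100_000))
print(kept)  # close to 10,000 of 100,000
```

With 10% configured, alerting and analytics should expect roughly one in ten transactions to be retained.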
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-custom-overview.md
If you have 100 regions, 200 departments and 2000 customers
Again, this limit is not for an individual metric. It’s for the sum of all such metrics across a subscription and region.
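The truncated example above is the usual combinatorial trap: every distinct combination of dimension values becomes its own time series. A quick check of the arithmetic:

```python
# 100 regions x 200 departments x 2000 customers, each combination a series
regions, departments, customers = 100, 200, 2000
time_series = regions * departments * customers
print(time_series)  # 40000000 -- vastly more than the 50,000-series limit
```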
-## Design limitations
+## Design limitations and considerations
-**Do not use Application Insights for the purpose of auditing** – The Application Insights pipeline uses the custom metrics API behind the scenes. The pipeline is optimized for a high volume of telemetry with a minimum of impact on your application. As such, it throttles or samples (takes a only a percentage of your telemetry and ignores the rest) if your incoming data stream becomes too large. Because of this behavior, you cannot use it for auditing purposes as some records are likely to be dropped.
+**Do not use Application Insights for the purpose of auditing** – The Application Insights telemetry pipeline is optimized for minimizing the performance impact and limiting the network traffic from monitoring your application. As such, it throttles or samples (takes only a percentage of your telemetry and ignores the rest) if the initial dataset becomes too large. Because of this behavior, you cannot use it for auditing purposes as some records are likely to be dropped.
-**Metrics with a variable in the name** ΓÇô Do not use a variable as part of the metric name, for example, a guid or a timestamp. This quickly causes you to hit the 50,000 time series limitation.
-
-**High cardinality metric dimensions** - Metrics with too many valid values in a dimension (a “high cardinality”) are much more likely to hit the 50k limit. In general, you should never use a constantly changing value in a dimension or metric name. Timestamp, for example, should NEVER be a dimension. Server, customer or productid could be used, but only if you have a smaller number of each of those types. As a test, ask yourself if you would every chart such data on a graph. If you have 10 or maybe even 100 servers, it might be useful to see them all on a graph for comparison. But if you have 1000, the resulting graph would likely be difficult if not impossible to read. Best practice is to keep it to fewer to 100 valid values. Up to 300 is a grey area. If you need to go over this amount, use Azure Monitor custom logs instead.
+**Metrics with a variable in the name** – Do not use a variable as part of the metric name; use a constant instead. Each time the variable changes its value, Azure Monitor will generate a new metric, quickly hitting the limits on the number of metrics. Generally, when developers want to include a variable in the metric name, they really want to track multiple timeseries within one metric and should use dimensions instead of variable metric names.
-If you have a variable in the name or a high cardinality dimension, the following can occur.
+**High cardinality metric dimensions** - Metrics with too many valid values in a dimension (a “high cardinality”) are much more likely to hit the 50k limit. In general, you should never use a constantly changing value in a dimension or metric name. Timestamp, for example, should NEVER be a dimension. Server, customer or productid could be used, but only if you have a smaller number of each of those types. As a test, ask yourself if you would ever chart such data on a graph. If you have 10 or maybe even 100 servers, it might be useful to see them all on a graph for comparison. But if you have 1000, the resulting graph would likely be difficult if not impossible to read. Best practice is to keep it to fewer than 100 valid values. Up to 300 is a grey area. If you need to go over this amount, use Azure Monitor custom logs instead.
+
+If you have a variable in the name or a high cardinality dimension, the following can occur:
- Metrics become unreliable due to throttling - Metrics Explorer doesn’t work - Alerting and notifications become unpredictable
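The naming guidance above can be made concrete with a small, hypothetical sketch: baking a server name into the metric name produces a flood of one-series metrics, whereas a single metric with a `server` dimension keeps the metric namespace flat (dimension cardinality still matters either way):

```python
servers = [f"srv{i:04d}" for i in range(1000)]  # hypothetical server fleet

# Anti-pattern: variable in the metric name -> 1000 separate metrics.
metric_names = {f"requests_{server}" for server in servers}

# Preferred: one metric, with the server as a dimension value;
# each dimension value is one time series under that single metric.
series = {("requests", server) for server in servers}

print(len(metric_names), len({name for name, _ in series}))  # 1000 vs 1
```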
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/monitor-azure-resource.md
Azure Monitor Logs consolidates logs and metrics from multiple services and othe
You can access monitoring data collected from your resource from a command line or include in a script using [Azure PowerShell](/powershell/azure/) or [Azure Command Line Interface](/cli/azure/). - See [CLI metrics reference](/cli/azure/monitor/metrics) for accessing metric data from CLI.-- See [CLI Log Analytics reference](/cli/azure/ext/log-analytics/monitor/log-analytics) for accessing Azure Monitor Logs data using a log query from CLI.
+- See [CLI Log Analytics reference](/cli/azure/monitor/log-analytics) for accessing Azure Monitor Logs data using a log query from CLI.
- See [Azure PowerShell metrics reference](/powershell/module/azurerm.insights/get-azurermmetric) for accessing metric data from Azure PowerShell. - See [Azure PowerShell log query reference](/powershell/module/az.operationalinsights/Invoke-AzOperationalInsightsQuery) for accessing Azure Monitor Logs data using a log query from Azure PowerShell.
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/solutions.md
Click on the name of a solution to open its summary page. This page displays any
### [Azure CLI](#tab/azure-cli)
-Use the [az monitor log-analytics solution list](/cli/azure/ext/log-analytics-solution/monitor/log-analytics/solution#ext-log-analytics-solution-az-monitor-log-analytics-solution-list) command to list the monitoring solutions installed in your subscription. Before running the `list` command, follow the prerequisites found in [Install a monitoring solution](#install-a-monitoring-solution).
+Use the [az monitor log-analytics solution list](/cli/azure/monitor/log-analytics/solution#az_monitor_log_analytics_solution_list) command to list the monitoring solutions installed in your subscription. Before running the `list` command, follow the prerequisites found in [Install a monitoring solution](#install-a-monitoring-solution).
```azurecli # List all log-analytics solutions in the current subscription.
Members of the community can submit management solutions to Azure Quickstart Tem
When you install a solution, you must select a [Log Analytics workspace](../logs/manage-access.md) where the solution will be installed and where its data will be collected. With the Azure CLI, you manage workspaces by using the [az monitor log-analytics workspace](/cli/azure/monitor/log-analytics/workspace) reference commands. Follow the process described in [Log Analytics workspace and Automation account](#log-analytics-workspace-and-automation-account) to link a workspace and account.
-Use the [az monitor log-analytics solution create](/cli/azure/ext/log-analytics-solution/monitor/log-analytics/solution) to install a monitoring solution. Parameters in square brackets are optional.
+Use the [az monitor log-analytics solution create](/cli/azure/monitor/log-analytics/solution) to install a monitoring solution. Parameters in square brackets are optional.
```azurecli az monitor log-analytics solution create --name
To remove an installed solution using the portal, locate it in the [list of inst
### [Azure CLI](#tab/azure-cli)
-To remove an installed solution using the Azure CLI, use the [az monitor log-analytics solution delete](/cli/azure/ext/log-analytics-solution/monitor/log-analytics/solution#ext-log-analytics-solution-az-monitor-log-analytics-solution-delete) command.
+To remove an installed solution using the Azure CLI, use the [az monitor log-analytics solution delete](/cli/azure/monitor/log-analytics/solution#az_monitor_log_analytics_solution_delete) command.
```azurecli az monitor log-analytics solution delete --name
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the different ways that you can use Logs i
| **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. | | **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](../visualize/powerbi.md) to use different visualizations and share with users outside of Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to leverage its dashboarding and combine with other data sources.| | **Insights** | Support [insights](../monitor-reference.md#insights-and-core-solutions) that provide a customized monitoring experience for particular applications and services. |
-| **Retrieve** | Access log query results from a command line using [Azure CLI](/cli/azure/ext/log-analytics/monitor/log-analytics).<br>Access log query results from a command line using [PowerShell cmdlets](/powershell/module/az.operationalinsights).<br>Access log query results from a custom application using [REST API](https://dev.loganalytics.io/). |
+| **Retrieve** | Access log query results from a command line using [Azure CLI](/cli/azure/monitor/log-analytics).<br>Access log query results from a command line using [PowerShell cmdlets](/powershell/module/az.operationalinsights).<br>Access log query results from a custom application using [REST API](https://dev.loganalytics.io/). |
| **Export** | Configure [automated export of log data](./logs-data-export.md) to Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location using [Logic Apps](./logicapp-flow-connector.md). | ![Logs overview](media/data-platform-logs/logs-overview.png)
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
You can automate the process described earlier using Azure Resource Manager temp
To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
-To manage network access, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+To manage network access, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/monitor/app-insights/component).
## Collect custom logs and IIS log over Private Link
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 04/19/2021 Last updated : 04/20/2021 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
> [!IMPORTANT] > The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature. >
- > You should enable Continuous Availability only for SQL workloads. Using SMB Continuous Availability shares for workloads other than SQL Server is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ > You should enable Continuous Availability only for SQL Server and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than SQL Server and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
<!-- [1/13/21] Commenting out command-based steps below, because the plan is to use form-based (URL) registration, similar to CRR feature registration --> <!--
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/19/2021 Last updated : 04/20/2021 # FAQs About Azure NetApp Files
Management of `SMB Shares`, `Sessions`, and `Open Files` through Computer Manage
Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** -> **mountTargets**.
+### Can an Azure NetApp Files SMB share act as a DFS Namespace (DFS-N) root?
+
+No. However, Azure NetApp Files SMB shares can serve as a DFS Namespace (DFS-N) folder target.
+To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Universal Naming Convention (UNC) mount path of the Azure NetApp Files SMB share by using the [DFS Add Folder Target](/windows-server/storage/dfs-namespaces/add-folder-targets#to-add-a-folder-target) procedure.
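For illustration, the folder target is simply the share's UNC mount path; a hypothetical example (the server FQDN and volume name below are placeholders, not real values):

```
\\anf-1234.contoso.com\myvolume
```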
+ ### SMB encryption FAQs This section answers commonly asked questions about SMB encryption (SMB 3.0 and SMB 3.1.1).
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/dynamic-change-volume-service-level.md
The feature to move a volume to another capacity pool is currently in preview. I
2. Check the status of the feature registration: > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is **Registered** before continuing.
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
```azurepowershell-interactive Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFTierChange
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 04/19/2021 Last updated : 04/21/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
## April 2021
+* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
+
+ [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. FSLogix solutions can also be used to create more portable computing sessions when you use physical devices. FSLogix can be used to provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To further enhance FSLogix resiliency to storage service maintenance events, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) for user profile containers. See Azure NetApp Files [Windows Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop) for additional information.
+ * [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview) You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might impact the client (CPU overhead for encrypting and decrypting messages). It might also impact storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
azure-portal Azure Portal Dashboards Create Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards-create-programmatically.md
Prepare your environment for the Azure CLI.
- These examples use the following dashboard: [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json). Replace content in angled brackets with your values.
-Run the [az portal dashboard create](/cli/azure/ext/portal/portal/dashboard#ext_portal_az_portal_dashboard_create) command to create a dashboard:
+Run the [az portal dashboard create](/cli/azure/portal/dashboard#az_portal_dashboard_create) command to create a dashboard:
```azurecli az portal dashboard create --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
-You can update a dashboard by using the [az portal dashboard update](/cli/azure/ext/portal/portal/dashboard#ext_portal_az_portal_dashboard_update) command:
+You can update a dashboard by using the [az portal dashboard update](/cli/azure/portal/dashboard#az_portal_dashboard_update) command:
```azurecli az portal dashboard update --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
-See the details of a dashboard by running the [az portal dashboard show](/cli/azure/ext/portal/portal/dashboard#ext_portal_az_portal_dashboard_show) command:
+See the details of a dashboard by running the [az portal dashboard show](/cli/azure/portal/dashboard#az_portal_dashboard_show) command:
```azurecli az portal dashboard show --resource-group myResourceGroup --name 'Simple VM Dashboard' ```
-To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/ext/portal/portal/dashboard#ext_portal_az_portal_dashboard_list):
+To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az_portal_dashboard_list):
```azurecli az portal dashboard list
az portal dashboard list --resource-group myResourceGroup
For more information about desktops, see [Manage Azure portal settings and preferences](set-preferences.md).
-For more information about Azure CLI support for dashboards, see [az portal dashboard](/cli/azure/ext/portal/portal/dashboard).
+For more information about Azure CLI support for dashboards, see [az portal dashboard](/cli/azure/portal/dashboard).
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md Binary files differ
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/create-custom-provider.md
You receive the response:
## Custom resource provider commands
-Use the [custom-providers](/cli/azure/ext/custom-providers/custom-providers/resource-provider) commands to work with your custom resource provider.
+Use the [custom-providers](/cli/azure/custom-providers/resource-provider) commands to work with your custom resource provider.
### List custom resource providers
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-concept-authenticate-oauth.md
To complete this tutorial, you must have the following prerequisites:
- An account created on [GitHub](https://github.com/) - [Git](https://git-scm.com/)-- [.NET Core SDK](https://www.microsoft.com/net/download/windows)
+- [.NET Core SDK](https://dotnet.microsoft.com/download)
- [Azure Cloud Shell](../cloud-shell/quickstart.md) configured for the bash environment. - Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository.
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Azure SignalR Service lets you easily add real-time functionality to your applic
> The required SignalR Service bindings in Java are only supported in Azure Function Core Tools version 2.4.419 (host version 2.0.12332) or above. > [!NOTE]
- > To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://www.microsoft.com/net/download) installed. However, no knowledge of .NET is required to build JavaScript Azure Function apps.
+ > To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://dotnet.microsoft.com/download) to be installed. However, no knowledge of .NET is required to build Java Azure Function apps.
- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 8 - [Apache Maven](https://maven.apache.org), version 3.0 or above
azure-signalr Signalr Quickstart Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-dotnet-core.md
The code for this tutorial is available for download in the [AzureSignalR-sample
## Prerequisites
-* Install the [.NET Core SDK](https://www.microsoft.com/net/download/windows).
+* Install the [.NET Core SDK](https://dotnet.microsoft.com/download).
* Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
azure-signalr Signalr Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-dotnet.md
In this quickstart, you will learn how to get started with the ASP.NET and Azure
## Prerequisites * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/)
-* [.NET 4.6.1](https://www.microsoft.com/net/download/windows)
+* [.NET Framework 4.6.1](https://dotnet.microsoft.com/download/dotnet-framework/net461)
* [ASP.NET SignalR 2.4.1](https://www.nuget.org/packages/Microsoft.AspNet.SignalR/) Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnet).
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-rest-api.md
In this quickstart, you will learn how to send messages from a command-line app
This quickstart can be run on macOS, Windows, or Linux.
-* [.NET Core SDK](https://www.microsoft.com/net/download/core)
+* [.NET Core SDK](https://dotnet.microsoft.com/download)
* A text editor or code editor of your choice. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-content-reference-guide.md
The following table lists connectivity libraries or *drivers* that client applic
| Language | Platform | Additional resources | Download | Get started | | :-- | :-- | :-- | :-- | :-- |
-| C# | Windows, Linux, macOS | [Microsoft ADO.NET for SQL Server](/sql/connect/ado-net/microsoft-ado-net-sql-server) | [Download](https://www.microsoft.com/net/download/) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/csharp/ubuntu)
+| C# | Windows, Linux, macOS | [Microsoft ADO.NET for SQL Server](/sql/connect/ado-net/microsoft-ado-net-sql-server) | [Download](https://dotnet.microsoft.com/download) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/csharp/ubuntu)
| Java | Windows, Linux, macOS | [Microsoft JDBC driver for SQL Server](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server/) | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/java/ubuntu) |
| PHP | Windows, Linux, macOS | [PHP SQL driver for SQL Server](/sql/connect/php/microsoft-php-driver-for-sql-server) | [Download](/sql/connect/php/download-drivers-php-sql-server) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/php/ubuntu/) |
| Node.js | Windows, Linux, macOS | [Node.js driver for SQL Server](/sql/connect/node-js/node-js-driver-for-sql-server/) | [Install](/sql/connect/node-js/step-1-configure-development-environment-for-node-js-development/) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/node/ubuntu) |
azure-sql Connect Query Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-dotnet-core.md
Last updated 05/29/2020
# Quickstart: Use .NET Core (C#) to query a database
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi-asa.md)]
-In this quickstart, you'll use [.NET Core](https://www.microsoft.com/net/) and C# code to connect to a database. You'll then run a Transact-SQL statement to query data.
+In this quickstart, you'll use [.NET Core](https://dotnet.microsoft.com) and C# code to connect to a database. You'll then run a Transact-SQL statement to query data.
> [!TIP]
> The following Microsoft Learn module helps you learn for free how to [Develop and configure an ASP.NET application that queries a database in Azure SQL Database](/learn/modules/develop-app-that-queries-azure-sql/)
In this quickstart, you'll use [.NET Core](https://www.microsoft.com/net/) and C
To complete this quickstart, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- [.NET Core for your operating system](https://www.microsoft.com/net/core) installed.
+- [.NET Core SDK for your operating system](https://dotnet.microsoft.com/download) installed.
- A database where you can run your query. [!INCLUDE[create-configure-database](../includes/create-configure-database.md)]
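Before touching the query code, the quickstart's project setup can be sketched from the shell (the project name is illustrative; `Microsoft.Data.SqlClient` is the current ADO.NET driver package, an assumption about which driver this quickstart uses):

```bash
# Create a console project, add the SQL client driver package, and run it.
# "sqltest" is a placeholder project name.
dotnet new console -o sqltest
cd sqltest
dotnet add package Microsoft.Data.SqlClient
dotnet run
```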
azure-sql Connect Query Dotnet Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-dotnet-visual-studio.md
Last updated 08/10/2020
# Quickstart: Use .NET and C# in Visual Studio to connect to and query a database
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi-asa.md)]
-This quickstart shows how to use the [.NET Framework](https://www.microsoft.com/net/) and C# code in Visual Studio to query a database in Azure SQL or Synapse SQL with Transact-SQL statements.
+This quickstart shows how to use the [.NET Framework](https://dotnet.microsoft.com) and C# code in Visual Studio to query a database in Azure SQL or Synapse SQL with Transact-SQL statements.
## Prerequisites
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
To manage database copy using the Azure portal, you will also need the following
Microsoft.Resources/deployments/write
Microsoft.Resources/deployments/operationstatuses/read
-If you want to see the operations under deployments in the resource group on the portal, operations across multiple resource providers including SQL operations, you will need these additional Azure roles:
+If you want to see the operations under deployments in the resource group on the portal, including operations across multiple resource providers such as SQL operations, you will need these additional permissions:
Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read
Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read
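The four actions above can be granted together through a custom role; a minimal Azure CLI sketch (the role name and subscription ID are illustrative placeholders, not from the article):

```azurecli
# Create a custom role carrying only the deployment-read permissions listed above.
# Role name and subscription ID are placeholders.
az role definition create --role-definition '{
  "Name": "Deployment Operations Reader (example)",
  "Description": "See deployment operations for database copy in the portal.",
  "Actions": [
    "Microsoft.Resources/deployments/write",
    "Microsoft.Resources/deployments/operationstatuses/read",
    "Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read",
    "Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read"
  ],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```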
azure-sql Maintenance Window Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window-configure.md
When setting the maintenance window, each region has its own maintenance window
### Discover SQL Database and elastic pool maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [az maintenance public-configuration list
-](/cli/azure/ext/maintenance/maintenance/public-configuration#ext_maintenance_az_maintenance_public_configuration_list) command. For databases and elastic pools, set `maintenanceScope` to `SQLDB`.
+](/cli/azure/maintenance/public-configuration#az_maintenance_public_configuration_list) command. For databases and elastic pools, set `maintenanceScope` to `SQLDB`.
```azurecli
location="eastus2"
```
The following example returns the available maintenance windows for the *eastus2
### Discover SQL Managed Instance maintenance windows
The following example returns the available maintenance windows for the *eastus2* region using the [az maintenance public-configuration list
-](/cli/azure/ext/maintenance/maintenance/public-configuration#ext_maintenance_az_maintenance_public_configuration_list) command. For managed instances, set `maintenanceScope` to `SQLManagedInstance`.
+](/cli/azure/maintenance/public-configuration#az_maintenance_public_configuration_list) command. For managed instances, set `maintenanceScope` to `SQLManagedInstance`.
```azurecli
az maintenance public-configuration list --query "[?location=='eastus2'&&contains(maintenanceScope,'SQLManagedInstance')]"
```
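Once a suitable window has been discovered, it can be assigned to a database; a sketch assuming illustrative resource names and the `SQL_EastUS2_DB_1` configuration name pattern returned by the discovery query:

```azurecli
# Assign a discovered maintenance window to an existing database.
# Resource names and the configuration name are illustrative.
az sql db update \
    --resource-group myResourceGroup \
    --server myserver \
    --name mydb \
    --maint-config-id "SQL_EastUS2_DB_1"
```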
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database
description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 04/14/2021
Last updated : 04/21/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database
description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 04/14/2021
Last updated : 04/21/2021
azure-sql Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-overview.md
For example when using the ADO.NET driver this is accomplished via **Encrypt=Tr
[Transparent data encryption (TDE) for SQL Database, SQL Managed Instance, and Azure Synapse Analytics](transparent-data-encryption-tde-overview.md) adds a layer of security to help protect data at rest from unauthorized or offline access to raw files or backups. Common scenarios include data center theft or unsecured disposal of hardware or media such as disk drives and backup tapes. TDE encrypts the entire database using an AES encryption algorithm, which doesn't require application developers to make any changes to existing applications.
-In Azure, all newly created databases are encrypted by default and the database encryption key is protected by a built-in server certificate. Certificate maintenance and rotation are managed by the service and require no input from the user. Customers who prefer to take control of the encryption keys can manage the keys in [Azure Key Vault](../../key-vault/general/security-overview.md).
+In Azure, all newly created databases are encrypted by default and the database encryption key is protected by a built-in server certificate. Certificate maintenance and rotation are managed by the service and require no input from the user. Customers who prefer to take control of the encryption keys can manage the keys in [Azure Key Vault](../../key-vault/general/security-features.md).
### Key management with Azure Key Vault
-[Bring Your Own Key](transparent-data-encryption-byok-overview.md) (BYOK) support for [Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) (TDE) allows customers to take ownership of key management and rotation using [Azure Key Vault](../../key-vault/general/security-overview.md), Azure's cloud-based external key management system. If the database's access to the key vault is revoked, a database cannot be decrypted and read into memory. Azure Key Vault provides a central key management platform, leverages tightly monitored hardware security modules (HSMs), and enables separation of duties between management of keys and data to help meet security compliance requirements.
+[Bring Your Own Key](transparent-data-encryption-byok-overview.md) (BYOK) support for [Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) (TDE) allows customers to take ownership of key management and rotation using [Azure Key Vault](../../key-vault/general/security-features.md), Azure's cloud-based external key management system. If the database's access to the key vault is revoked, a database cannot be decrypted and read into memory. Azure Key Vault provides a central key management platform, leverages tightly monitored hardware security modules (HSMs), and enables separation of duties between management of keys and data to help meet security compliance requirements.
### Always Encrypted (Encryption-in-use)
azure-sql Sql Data Sync Sql Server Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-sql-server-configure.md
For PowerShell examples on how to configure SQL Data Sync, see [How to sync betw
:::image type="content" source="./media/sql-data-sync-sql-server-configure/sync-to-other-databases.png" alt-text = "Sync to other databases, Microsoft Azure portal":::
-1. On the **Sync to other databases** page, select **New Sync Group**. The **New sync group** page opens with **Create sync group (step 1)**.
+1. On the **Sync to other databases** page, select **New Sync Group**. The **New sync group** page opens with **Create sync group**.
- :::image type="content" source="./media/sql-data-sync-sql-server-configure/new-sync-group-private-link.png" alt-text = "Set up new sync group with private link":::
+ :::image type="content" source="./media/sql-data-sync-sql-server-configure/create-sync-group.png" alt-text = "Set up new sync group with private link":::
On the **Create Data Sync Group** page, change the following settings:
For PowerShell examples on how to configure SQL Data Sync, see [How to sync betw
1. On the **New Sync Group** page, if you selected **Use private link**, you will need to approve the private endpoint connection. The link in the info message will take you to the private endpoint connections experience where you can approve the connection.
- :::image type="content" source="./media/sql-data-sync-sql-server-configure/approve-private-link.png" alt-text = "Approve private link":::
+ :::image type="content" source="./media/sql-data-sync-sql-server-configure/approve-private-link-update.png" alt-text = "Approve private link":::
+
+ > [!NOTE]
+ > The private links for the sync group and the sync members need to be created, approved, and disabled separately.
## Add sync members
-After the new sync group is created and deployed, **Add sync members (step 2)** is highlighted on the **New sync group** page.
-
-In the **Hub Database** section, enter existing credentials for the server on which the hub database is located. Don't enter *new* credentials in this section.
+After the new sync group is created and deployed, open the sync group and access the **Databases** page, where you will select sync members.
- :::image type="content" source="./media/sql-data-sync-sql-server-configure/steptwo.png" alt-text = "Enter existing credentials for the hub database server":::
+ :::image type="content" source="./media/sql-data-sync-sql-server-configure/add-sync-members.png" alt-text = "Select sync members":::
+
+ > [!NOTE]
+ > To update or enter the username and password for your hub database, go to the **Hub Database** section on the **Select sync members** page.
### To add a database in Azure SQL Database
-In the **Member Database** section, optionally add a database in Azure SQL Database to the sync group by selecting **Add an Azure SQL Database**. The **Configure Azure SQL Database** page opens.
+In the **Select sync members** section, optionally add a database in Azure SQL Database to the sync group by selecting **Add an Azure Database**. The **Configure Azure Database** page opens.
:::image type="content" source="./media/sql-data-sync-sql-server-configure/step-two-configure.png" alt-text = "Add a database to the sync group":::
In the **Member Database** section, optionally add a SQL Server database to the
## Configure sync group
-After the new sync group members are created and deployed, **Configure sync group (step 3)** is highlighted in the **New sync group** page.
+After the new sync group members are created and deployed, go to the **Tables** section in the **Database Sync Group** page.
-![Step 3 settings](./media/sql-data-sync-sql-server-configure/stepthree.png)
+![Step 3 settings](./media/sql-data-sync-sql-server-configure/configure-sync-group.png)
-1. On the **Tables** page, select a database from the list of sync group members and select **Refresh schema**.
+1. On the **Tables** page, select a database from the list of sync group members and select **Refresh schema**. Expect a delay of a few minutes for the schema refresh; the delay can be a few minutes longer when a private link is used.
1. From the list, select the tables you want to sync. By default, all columns are selected, so disable the checkbox for the columns you don't want to sync. Be sure to leave the primary key column selected.
After you export a database as a *.bacpac* file and import the file to create a
For frequently asked questions about the client agent, see [Agent FAQ](sql-data-sync-agent-overview.md#agent-faq).
-**Is it necessary to manually approve the private link before I can start using it?**
+**Is it necessary to manually approve the link before I can start using it?**
Yes, you must manually approve the service-managed private endpoint, either on the **Private endpoint connections** page of the Azure portal during sync group deployment or by using PowerShell.
azure-sql Transparent Data Encryption Byok Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
Last updated 02/01/2021
Azure SQL [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) with customer-managed key enables a Bring Your Own Key (BYOK) scenario for data protection at rest, and allows organizations to implement separation of duties in the management of keys and data. With customer-managed transparent data encryption, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys.
-In this scenario, the key used for encryption of the Database Encryption Key (DEK), called TDE protector, is a customer-managed asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault (AKV)](../../key-vault/general/security-overview.md), a cloud-based external key management system. Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but provides services of encryption/decryption using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-prem HSM device](../../key-vault/keys/hsm-protected-keys.md).
+In this scenario, the key used for encryption of the Database Encryption Key (DEK), called TDE protector, is a customer-managed asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault (AKV)](../../key-vault/general/security-features.md), a cloud-based external key management system. Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but provides services of encryption/decryption using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-prem HSM device](../../key-vault/keys/hsm-protected-keys.md).
For Azure SQL Database and Azure Synapse Analytics, the TDE protector is set at the server level and is inherited by all encrypted databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance level and is inherited by all encrypted databases on that instance. The term *server* refers both to a server in SQL Database and Azure Synapse and to a managed instance in SQL Managed Instance throughout this document, unless stated differently.
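At the CLI level, assigning a Key Vault key as the server-level TDE protector can be sketched as follows (the server, vault, key name, and key version are illustrative placeholders):

```azurecli
# Register the Key Vault key with the server, then make it the TDE protector.
# All names and the key version below are placeholders.
kid="https://mykeyvault.vault.azure.net/keys/mytdekey/0123456789abcdef0123456789abcdef"
az sql server key create --resource-group myResourceGroup --server myserver --kid "$kid"
az sql server tde-key set --resource-group myResourceGroup --server myserver \
    --server-key-type AzureKeyVault --kid "$kid"
```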
azure-sql Transparent Data Encryption Tde Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-tde-overview.md
Last updated 10/12/2020
[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application. By default, TDE is enabled for all newly deployed SQL Databases and must be manually enabled for older databases of Azure SQL Database and Azure SQL Managed Instance. TDE must be manually enabled for Azure Synapse Analytics.
-TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK). On database startup, the encrypted DEK is decrypted and then used for decryption and re-encryption of the database files in the SQL Server database engine process. DEK is protected by the TDE protector. TDE protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in [Azure Key Vault](../../key-vault/general/security-overview.md) (customer-managed transparent data encryption).
+TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK). On database startup, the encrypted DEK is decrypted and then used for decryption and re-encryption of the database files in the SQL Server database engine process. DEK is protected by the TDE protector. TDE protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in [Azure Key Vault](../../key-vault/general/security-features.md) (customer-managed transparent data encryption).
For Azure SQL Database and Azure Synapse, the TDE protector is set at the [server](logical-servers.md) level and is inherited by all databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance level and it is inherited by all encrypted databases on that instance. The term *server* refers both to server and instance throughout this document, unless stated differently.
Use the following set of commands for Azure SQL Database and Azure Synapse:
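For example, checking and enabling TDE from the Azure CLI can be sketched as (resource names are illustrative placeholders):

```azurecli
# Check whether TDE is enabled on a database (resource names are placeholders).
az sql db tde show --resource-group myResourceGroup --server myserver --database mydb

# Enable TDE on an older database where it is not on by default.
az sql db tde set --resource-group myResourceGroup --server myserver --database mydb --status Enabled
```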
- For a general description of TDE, see [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption).
- To learn more about TDE with BYOK support for Azure SQL Database, Azure SQL Managed Instance and Azure Synapse, see [Transparent data encryption with Bring Your Own Key support](transparent-data-encryption-byok-overview.md).
- To start using TDE with Bring Your Own Key support, see the how-to guide, [Turn on transparent data encryption by using your own key from Key Vault](transparent-data-encryption-byok-configure.md).
-- For more information about Key Vault, see [Secure access to a key vault](../../key-vault/general/security-overview.md).
+- For more information about Key Vault, see [Secure access to a key vault](../../key-vault/general/security-features.md).
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
Undocumented DBCC statements that are enabled in SQL Server aren't supported in
### Distributed transactions
-Partial support for [distributed transactions](../database/elastic-transactions-overview.md) is currently in public preview. Supported scenarios are:
-* Transactions where participants are only Azure SQL Managed Instances that are part of [Server trust group](./server-trust-group-overview.md).
-* Transactions initiated from .NET (TransactionScope class) and Transact-SQL.
+Partial support for [distributed transactions](../database/elastic-transactions-overview.md) is currently in public preview. Distributed transactions are supported when all of the following conditions are met:
+* All transaction participants are Azure SQL Managed Instances that are part of a [Server trust group](./server-trust-group-overview.md).
+* Transactions are initiated either from .NET (the TransactionScope class) or from Transact-SQL.
Azure SQL Managed Instance currently does not support other scenarios that are regularly supported by MSDTC on-premises or in Azure Virtual Machines.
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
You can create queries or use the available pre-defined query in Azure Sentinel
Now that you've covered how to protect your Azure VMware Solution VMs, you may want to learn about:
-- Using the [Azure Defender dashboard](../security-center/azure-defender-dashboard.md).
-- [Advanced multistage attack detection in Azure Sentinel](../azure-monitor/logs/quick-create-workspace.md).
-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- Using the [Azure Defender dashboard](../security-center/azure-defender-dashboard.md)
+- [Advanced multistage attack detection in Azure Sentinel](../azure-monitor/logs/quick-create-workspace.md)
+- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
You can restore individual files from a protected VM recovery point. This featur
Now that you've covered backing up your Azure VMware Solution VMs with Azure Backup Server, you may want to learn about:
-- [Troubleshooting when setting up backups in Azure Backup Server](../backup/backup-azure-mabs-troubleshoot.md).
-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Troubleshooting when setting up backups in Azure Backup Server](../backup/backup-azure-mabs-troubleshoot.md)
+- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
azure-vmware Concepts Monitor Repair Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-repair-private-cloud.md
Host remediation starts by adding a new healthy node in the cluster. Then, when
Now that you've covered how Azure VMware Solution monitors and repairs private clouds, you may want to learn about:
-- [Azure VMware Solution private cloud upgrades](concepts-upgrades.md).
-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Azure VMware Solution private cloud upgrades](concepts-upgrades.md)
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
For full interconnectivity to your private cloud, you need to enable ExpressRout
Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about:
-- [Azure VMware Solution storage concepts](concepts-storage.md).
-- [Azure VMware Solution identity concepts](concepts-identity.md).
-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Azure VMware Solution storage concepts](concepts-storage.md)
+- [Azure VMware Solution identity concepts](concepts-identity.md)
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
<!-- LINKS - external --> [enable Global Reach]: ../expressroute/expressroute-howto-set-global-reach.md
azure-vmware Concepts Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
At times of failure, Azure VMware Solution can restore these components from the
Now that you've covered the key upgrade processes and features in Azure VMware Solution, you may want to learn about:
-- [How to create a private cloud](tutorial-create-private-cloud.md).
-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [How to create a private cloud](tutorial-create-private-cloud.md)
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
<!-- LINKS - external -->
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
The following steps verify the configuration of the NSX-T segment in the Azure V
Now that you've covered integrating Azure Traffic Manager with Azure VMware Solution, you may want to learn about:
-- [Using Azure Application Gateway on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md).
-- [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md).
-- [Combining load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md).
-- [Measuring Traffic Manager performance](../traffic-manager/traffic-manager-performance-considerations.md).
+- [Using Azure Application Gateway on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md)
+- [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md)
+- [Combining load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md)
+- [Measuring Traffic Manager performance](../traffic-manager/traffic-manager-performance-considerations.md)
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-vm-content-library.md
Now that the content library has been created, you can add an ISO image to deplo
Now that you've covered creating a content library to deploy VMs in Azure VMware Solution, you may want to learn about:
-- [Deploying and configuring VMware HCX](tutorial-deploy-vmware-hcx.md) to migrate VM workloads to your private cloud.
-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [How to migrate VM workloads to your private cloud](tutorial-deploy-vmware-hcx.md)
+- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
<!-- LINKS - external-->
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-back-up-vms.md
Title: Backup solutions for Azure VMware Solution virtual machines
description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines.
Previously updated : 03/17/2021
Last updated : 04/21/2021
# Backup solutions for Azure VMware Solution virtual machines (VMs)
Our backup partners have industry-leading backup and restore solutions in VMware
Backup network traffic between Azure VMware Solution VMs and the backup repository in Azure travels over a high-bandwidth, low-latency link. Replication traffic across regions travels over the internal Azure backplane network, which lowers bandwidth costs for users. You can find more information on these backup solutions here:
-- [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html)
-- [Veritas](https://vrt.as/nb4avs)
-- [Veeam](https://www.veeam.com/kb4012)
- [Cohesity](https://www.cohesity.com/blogs/expanding-cohesitys-support-for-microsofts-ecosystem-azure-stack-and-azure-vmware-solution/)
+- [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html)
- [Dell Technologies](https://www.delltechnologies.com/resources/en-us/asset/briefs-handouts/solutions/dell-emc-data-protection-for-avs.pdf)
+- [Rubrik](https://www.rubrik.com/en/products/cloud-data-management)
+- [Veeam](https://www.veeam.com/kb4012)
+- [Veritas](https://vrt.as/nb4avs)
azure-vmware Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/github-enterprise-server.md
In this article, we set up a new instance of GitHub Enterprise Server, the self-
Now that you've covered setting up GitHub Enterprise Server on your Azure VMware Solution private cloud, you may want to learn about:
-- [Getting started with GitHub Actions](https://docs.github.com/en/actions).
-- [Joining the beta program](https://resources.github.com/beta-signup/).
-- [Administration of GitHub Enterprise Server](https://githubtraining.github.io/admin-training/#/00_getting_started).
+- [How to get started with GitHub Actions](https://docs.github.com/en/actions)
+- [How to join the beta program](https://resources.github.com/beta-signup/)
+- [Administration of GitHub Enterprise Server](https://githubtraining.github.io/admin-training/#/00_getting_started)
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
The following are just a few compelling Azure NetApp Files use cases.
Now that you've covered integrating Azure NetApp Files with your Azure VMware Solution workloads, you may want to learn about:
-- [Resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits).
-- [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md).
-- [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md).
-- [FAQs about Azure NetApp Files](../azure-netapp-files/azure-netapp-files-faqs.md).
+- [Resource limitations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits)
+- [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md)
+- [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md)
+- [FAQs about Azure NetApp Files](../azure-netapp-files/azure-netapp-files-faqs.md)
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
You can create an Azure VMware Solution private cloud by using the [Azure portal
### Azure CLI
-Instead of the Azure portal to create an Azure VMware Solution private cloud, you can use the Azure CLI using the Azure Cloud Shell. For a list of commands you can use with Azure VMware Solution, see [Azure VMware commands](/cli/azure/ext/vmware/vmware).
+Instead of using the Azure portal, you can create an Azure VMware Solution private cloud with the Azure CLI in Azure Cloud Shell. For a list of commands you can use with Azure VMware Solution, see [Azure VMware commands](/cli/azure/vmware).
#### Open Azure Cloud Shell
az vmware private-cloud create -g myResourceGroup -n myPrivateCloudName --locati
## Azure VMware commands
-For a list of commands you can use with Azure VMware Solution, see [Azure VMware commands](/cli/azure/ext/vmware/vmware).
+For a list of commands you can use with Azure VMware Solution, see [Azure VMware commands](/cli/azure/vmware).
## Next steps
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
During the deployment of your BareMetal instances, a new [Azure resource group](
### [Azure CLI](#tab/azure-cli)
-To see all your BareMetal instances, run the [az baremetalinstance list](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_list) command for your resource group:
+To see all your BareMetal instances, run the [az baremetalinstance list](/cli/azure/baremetalinstance#az_baremetalinstance_list) command for your resource group:
```azurecli az baremetalinstance list --resource-group DSM05A-T550 --output table
Also, on the right side, you'll find the [Azure proximity placement group's](../
### [Azure CLI](#tab/azure-cli)
-To see details of a BareMetal instance, run the [az baremetalinstance show](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_show) command:
+To see details of a BareMetal instance, run the [az baremetalinstance show](/cli/azure/baremetalinstance#az_baremetalinstance_show) command:
```azurecli az baremetalinstance show --resource-group DSM05A-T550 --instance-name orcllabdsm01
Deleting tags also works the same way as for VMs. Applying and deleting a tag is
Assigning tags to BareMetal instances works the same as assigning tags for virtual machines. As with VMs, the tags exist in the Azure metadata. Tags have the same restrictions for BareMetal instances as for VMs.
-To add tags to a BareMetal instance, run the [az baremetalinstance update](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_update) command:
+To add tags to a BareMetal instance, run the [az baremetalinstance update](/cli/azure/baremetalinstance#az_baremetalinstance_update) command:
```azurecli az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllabdsm01 --set tags.Dept=Finance tags.Status=Normal
When you restart a BareMetal instance, you'll experience a delay. During this de
### [Azure CLI](#tab/azure-cli)
-To restart a BareMetal instance, use the [az baremetalinstance restart](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_restart) command:
+To restart a BareMetal instance, use the [az baremetalinstance restart](/cli/azure/baremetalinstance#az_baremetalinstance_restart) command:
```azurecli az baremetalinstance restart --resource-group DSM05a-T550 --instance-name orcllabdsm01
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
batch Quick Run Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/quick-run-dotnet.md
After completing this quickstart, you will understand the key concepts of the Ba
- A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md). -- [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1](https://www.microsoft.com/net/download/dotnet-core/2.1) for Linux, macOS, or Windows.
+- [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1) for Linux, macOS, or Windows.
## Sign in to Azure
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-parallel-dotnet.md
In this tutorial, you convert MP4 media files in parallel to MP3 format using th
## Prerequisites
-* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1](https://www.microsoft.com/net/download/dotnet-core/2.1) for Linux, macOS, or Windows.
+* [Visual Studio 2017 or later](https://www.visualstudio.com/vs), or [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1) for Linux, macOS, or Windows.
* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
blockchain Create Member Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/create-member-cli.md
If you prefer to install and use the CLI locally, this quickstart requires Azure
When working with extension references for the Azure CLI, you must first install the extension. Azure CLI extensions give you access to experimental and pre-release commands that have not yet shipped as part of the core CLI. To learn more about extensions including updating and uninstalling, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
- Install the [extension for Azure Blockchain Service](/cli/azure/ext/blockchain/blockchain) by running the following command:
+ Install the [extension for Azure Blockchain Service](/cli/azure/blockchain) by running the following command:
```azurecli-interactive az extension add --name blockchain
blockchain Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/manage-cli.md
If you prefer to install and use the CLI locally, see [install Azure CLI](/cli/a
When working with extension references for the Azure CLI, you must first install the extension. Azure CLI extensions give you access to experimental and pre-release commands that have not yet shipped as part of the core CLI. To learn more about extensions including updating and uninstalling, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
- Install the [extension for Azure Blockchain Service](/cli/azure/ext/blockchain/blockchain) by running the following command:
+ Install the [extension for Azure Blockchain Service](/cli/azure/blockchain) by running the following command:
```azurecli-interactive az extension add --name blockchain
If you prefer to install and use the CLI locally, see [install Azure CLI](/cli/a
## Create blockchain member
-Example [creates a blockchain member](/cli/azure/ext/blockchain/blockchain/member#ext-blockchain-az-blockchain-member-create) in Azure Blockchain Service that runs the Quorum ledger protocol in a new consortium.
+Example [creates a blockchain member](/cli/azure/blockchain/member#az_blockchain_member_create) in Azure Blockchain Service that runs the Quorum ledger protocol in a new consortium.
```azurecli az blockchain member create \
az blockchain member create \
## Change blockchain member passwords or firewall rules
-Example [updates a blockchain member](/cli/azure/ext/blockchain/blockchain/member#ext-blockchain-az-blockchain-member-update)'s password, consortium management password, and firewall rule.
+Example [updates a blockchain member](/cli/azure/blockchain/member#az_blockchain_member_update)'s password, consortium management password, and firewall rule.
```azurecli az blockchain member update \
az blockchain member update \
## Create transaction node
-[Create a transaction node](/cli/azure/ext/blockchain/blockchain/transaction-node#ext-blockchain-az-blockchain-transaction-node-create) inside an existing blockchain member. By adding transaction nodes, you can increase security isolation and distribute load. For example, you could have a transaction node endpoint for different client applications.
+[Create a transaction node](/cli/azure/blockchain/transaction-node#az_blockchain_transaction_node_create) inside an existing blockchain member. By adding transaction nodes, you can increase security isolation and distribute load. For example, you could have a transaction node endpoint for different client applications.
```azurecli az blockchain transaction-node create \
az blockchain transaction-node create \
## Change transaction node password
-Example [updates a transaction node](/cli/azure/ext/blockchain/blockchain/transaction-node#ext-blockchain-az-blockchain-transaction-node-update) password.
+Example [updates a transaction node](/cli/azure/blockchain/transaction-node#az_blockchain_transaction_node_update) password.
```azurecli az blockchain transaction-node update \
az blockchain transaction-node update \
## List API keys
-API keys can be used for node access similar to user name and password. There are two API keys to support key rotation. Use the following command to [list your API keys](/cli/azure/ext/blockchain/blockchain/member#ext-blockchain-az-blockchain-transaction-node-list-api-key).
+API keys can be used for node access, similar to a user name and password. There are two API keys to support key rotation. Use the following command to [list your API keys](/cli/azure/blockchain/member#az_blockchain_member_list_api_key).
```azurecli az blockchain member list-api-key \
az blockchain member list-api-key \
## Regenerate API keys
-Use the following command to [regenerate your API keys](/cli/azure/ext/blockchain/blockchain/member#ext-blockchain-az-blockchain-transaction-node-regenerate-api-key).
+Use the following command to [regenerate your API keys](/cli/azure/blockchain/member#az_blockchain_member_regenerate_api_key).
```azurecli az blockchain member regenerate-api-key \
az blockchain member regenerate-api-key \
## Delete a transaction node
-Example [deletes a blockchain member transaction node](/cli/azure/ext/blockchain/blockchain/transaction-node#ext-blockchain-az-blockchain-transaction-node-delete).
+Example [deletes a blockchain member transaction node](/cli/azure/blockchain/transaction-node#az_blockchain_transaction_node_delete).
```azurecli az blockchain transaction-node delete \
az blockchain transaction-node delete \
## Delete a blockchain member
-Example [deletes a blockchain member](/cli/azure/ext/blockchain/blockchain/member#ext-blockchain-az-blockchain-member-delete).
+Example [deletes a blockchain member](/cli/azure/blockchain/member#az_blockchain_member_delete).
```azurecli az blockchain member delete \
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
Title: Troubleshoot ConstrainedAllocationFailed when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a ConstrainedAllocationFailed exception when deploying a Cloud service (classic) to Azure. --++ Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
Title: Troubleshoot FabricInternalServerError or ServiceAllocationFailure when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a FabricInternalServerError or ServiceAllocationFailure exception when deploying a Cloud service (classic) to Azure. --++ Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
Title: Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a LocationNotFoundForRoleSize exception when deploying a Cloud service (classic) to Azure. --++ Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Title: Troubleshoot OverconstrainedAllocationRequest when deploying a Cloud serv
description: This article shows how to resolve an OverconstrainedAllocationRequest exception when deploying a Cloud service (classic) to Azure. documentationcenter: ''--++ Last updated 02/22/2021
cognitive-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
zone_pivot_groups: anomaly-detector-quickstart-multivariate
Previously updated : 04/01/2020 Last updated : 04/21/2021 keywords: anomaly detection, algorithms
cognitive-services Call Endpoint Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Custom-Search/call-endpoint-csharp.md
Use this quickstart to learn how to request search results from your Bing Custom
## Prerequisites - A Bing Custom Search instance. For more information, see [Quickstart: Create your first Bing Custom Search instance](quick-start.md).-- [Microsoft .NET Core](https://www.microsoft.com/net/download/core).
+- [Microsoft .NET Core](https://dotnet.microsoft.com/download).
- Any edition of [Visual Studio 2019 or later](https://www.visualstudio.com/downloads/). - If you're using Linux/MacOS, this application can be run using [Mono](https://www.mono-project.com/). - The [Bing Custom Search](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Search.CustomSearch/2.0.0) NuGet package.
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
Title: How to mitigate latency when using the Face service
description: Learn how to mitigate latency when using the Face service. - Last updated 1/5/2021- # How to: mitigate latency when using the Face service
cognitive-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/limits.md
These represent the limits for each update action; that is, clicking *Save and t
* Maximum number of URLs that can be refreshed: 5 * Maximum number of QnAs permitted per call: 1000
+## Add unstructured file limits
+
+> [!NOTE]
+> * If you need to use larger files than the limit allows, you can break the file into smaller files before sending them to the API.
+
+These represent the limits when unstructured files are used to *Create KB* or call the CreateKnowledgeBase API:
+* File length: only the first 32,000 characters are extracted.
+* Maximum of 3 responses per file.
+
+## Prebuilt question answering limits
+
+> [!NOTE]
+> * If you need to use larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+> * A document is a single string of text characters.
+
+These represent the limits when Prebuilt API is used to *Generate response* or call the GenerateAnswer API:
+* Number of documents: 5
+* Maximum size of a single document: 5,120 characters
+* Maximum of 3 responses per document.
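Breaking a long text into chunks that fit these limits can be sketched as follows. This is a minimal sketch: `chunk_text` and its constants are our own illustration based on the limits above, not part of the QnA Maker API.

```python
# Limits from the table above: at most 5 documents per call,
# at most 5,120 characters per document.
MAX_DOCS = 5
MAX_CHARS = 5120

def chunk_text(text, max_chars=MAX_CHARS, max_docs=MAX_DOCS):
    """Split `text` into chunks of at most `max_chars` characters.

    Raises ValueError if the text would need more than `max_docs`
    documents, since a single GenerateAnswer call can't accept them.
    """
    chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    if len(chunks) > max_docs:
        raise ValueError(
            f"Text needs {len(chunks)} documents; the API accepts at most {max_docs} per call."
        )
    return chunks
```

Texts longer than `MAX_DOCS * MAX_CHARS` characters would need to be spread across multiple API calls.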
+ ## Next steps Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
cognitive-services Create Luis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md
- Title: "Quickstart: Create a LUIS key"-
-description: In this quickstart, you learn how to create a LUIS application and get a key.
------ Previously updated : 06/25/2020--
-# Customer intent: As a C# programmer, I want to learn how to derive speaker intent from their utterances so that I can create a conversational UI for my application.
--
-# Quickstart: Getting a LUIS endpoint key
-
-## Prerequisites
-
-Be sure you have the following items before you begin this tutorial:
-
-* A LUIS account. You can get one for free through the [LUIS portal](https://www.luis.ai/home).
-
-## LUIS and speech
-
-LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS.
-
-LUIS uses three kinds of keys:
-
-|Key type|Purpose|
-|--|-|
-|Authoring|Lets you create and modify LUIS apps programmatically|
-|Starter|Lets you test your LUIS application using text only|
-|Endpoint |Authorizes access to a particular LUIS app|
-
-For this tutorial, you need the endpoint key type. The tutorial uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../../luis/luis-get-started-create-app.md) quickstart. If you've created a LUIS app of your own, you can use it instead.
-
-When you create a LUIS app, LUIS automatically generates a starter key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this tutorial. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this tutorial.
-
-After you create the LUIS resource in the Azure dashboard, log into the [LUIS portal](https://www.luis.ai/home), choose your application on the **My Apps** page, then switch to the app's **Manage** page. Finally, select **Keys and Endpoints** in the sidebar.
-
-![LUIS portal keys and endpoint settings](~/articles/cognitive-services/Speech-Service/media/sdk/luis-keys-endpoints-page.png)
-
-On the **Keys and Endpoint settings** page:
-
-1. Scroll down to the **Resources and Keys** section and select **Assign resource**.
-1. In the **Assign a key to your app** dialog box, make the following changes:
-
- * Under **Tenant**, choose **Microsoft**.
- * Under **Subscription Name**, choose the Azure subscription that contains the LUIS resource you want to use.
- * Under **Key**, choose the LUIS resource that you want to use with the app.
-
- In a moment, the new subscription appears in the table at the bottom of the page.
-
-1. Select the icon next to a key to copy it to the clipboard. (You may use either key.)
-
-![LUIS app subscription keys](~/articles/cognitive-services/Speech-Service/media/sdk/luis-keys-assigned.png)
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Recognize Intents](~/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md)
cognitive-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/cancel-translation.md
+
+ Title: Cancel translation method
+
+description: The cancel translation method cancels a currently processing or queued operation.
+++++++ Last updated : 04/21/2021+++
+# Cancel translation
+
+Cancel a currently processing or queued operation. An operation can't be canceled if it has already completed, failed, or is itself canceling; such a request returns a bad request error. Documents that have already completed translation aren't canceled and are still charged. All pending documents are canceled if possible.
+
+## Request URL
+
+Send a `DELETE` request to:
+
+```HTTP
+DELETE https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+|--|--|--|
+|id|True|The operation-id.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
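As a minimal sketch, the `DELETE` request described above can be built with the Python standard library. The resource name, operation ID, and key below are placeholders, and `build_cancel_request` is our own helper, not part of any SDK.

```python
import urllib.request

def build_cancel_request(resource_name, operation_id, subscription_key):
    """Construct (but do not send) the cancel-translation DELETE request."""
    url = (
        f"https://{resource_name}.cognitiveservices.azure.com"
        f"/translator/text/batch/v1.0-preview.1/batches/{operation_id}"
    )
    return urllib.request.Request(
        url,
        method="DELETE",
        headers={"Ocp-Apim-Subscription-Key": subscription_key},
    )

# To actually send it against your own resource:
# response = urllib.request.urlopen(build_cancel_request(...))
```

A 200 response means the cancel request was submitted; the operation's final status still arrives through the status endpoints.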
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+| Status Code| Description|
+|--|--|
+|200|OK. Cancel request has been submitted|
+|401|Unauthorized. Check your credentials.|
+|404|Not found. Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Cancel translation response
+
+### Successful response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to the Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and the optional properties target, details (key-value pairs), and innerError (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-document-status.md
Title: Document Translation get document status method
+ Title: Get document status method
description: The get document status method returns the status for a specific document.
Previously updated : 03/25/2021 Last updated : 04/21/2021
-# Document Translation: get document status
+# Get document status
The Get document status method returns the translation status for a specific document, based on the request ID and document ID.
cognitive-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-documents-status.md
+
+ Title: Get documents status
+
+description: The get documents status method returns the status for all documents in a batch document translation request.
+++++++ Last updated : 04/21/2021+++
+# Get documents status
+
+The Get documents status method returns the status for all documents in a batch document translation request.
+
+The documents included in the response are sorted by document ID in descending order. If the number of documents in the response exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+
+The $top and $skip query parameters can be used to specify the number of results to return and an offset into the collection. The server honors the values specified by the client; however, clients must be prepared to handle responses that contain a different page size or a continuation token.
+
+When both $top and $skip are included, the server applies $skip first and then $top to the collection.
+
+> [!NOTE]
+> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
+
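The paging behavior can be sketched as a small client loop that follows the continuation token (`@nextLink`) until it is absent or null. This is our own illustration, not an SDK method: `fetch` stands in for an authenticated HTTP GET that returns the parsed JSON body.

```python
def list_all_documents(first_url, fetch):
    """Collect every document status entry across all pages.

    `fetch(url)` must return the parsed JSON response body, shaped like
    {"value": [...], "@nextLink": <url-or-null>}.
    """
    documents, url = [], first_url
    while url:
        page = fetch(url)
        documents.extend(page.get("value", []))
        # A null or missing @nextLink means no more pages are available.
        url = page.get("@nextLink")
    return documents
```

Because the server may return a different page size than requested, the loop relies only on `@nextLink` rather than counting results against $top.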
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}/documents
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+| | | |
+|id|True|The operation ID.|
+|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request; returns the status of the documents. Headers: Retry-After: integer, ETag: string|
+|400|Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|404|Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
++
+## Get documents status response
+
+### Successful get documents status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|@nextLink|string|URL for the next page. Null if no more pages are available.|
+|value|DocumentStatusDetail []|The detailed status of the individual documents listed below.|
+|value.path|string|Location of the document or folder.|
+|value.createdDateTimeUtc|string|Operation created date time.|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.to|string|To language.|
+|value.progress|string|Progress of the translation if available.|
+|value.id|string|Document ID.|
+|value.characterCharged|integer|Characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to the Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and the optional properties target, details (key-value pairs), and innerError (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Running",
+ "to": "fr",
+ "progress": 0.1,
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "characterCharged": 0
+ }
+ ],
+ "@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.0.preview.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55/documents?$top=5&$skip=15"
+}
+```
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-document-formats.md
+
+ Title: Get supported document formats method
+
+description: The get supported document formats method returns a list of supported document formats.
+Last updated: 04/21/2021
+# Get supported document formats
+
+The Get supported document formats method returns a list of document formats supported by the Document Translation service. The list includes the common file extension and the content-type if using the upload API.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/documents/formats
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Returns the list of supported document file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## File format response
+
+### Successful fileFormatListResult response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+|--|--|--|
+|value|FileFormat []|FileFormat[] contains the details listed below.|
+|value.format|string|Name of the format.|
+|value.fileExtensions|string[]|Supported file extensions for this format.|
+|value.contentTypes|string[]|Supported content-types for this format.|
+|value.versions|string[]|Supported versions.|
+
+### Error response
+
+|Name|Type|Description|
+|--|--|--|
+|code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+The following is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "value": [
+ {
+ "format": "PlainText",
+ "fileExtensions": [
+ ".txt"
+ ],
+ "contentTypes": [
+ "text/plain"
+ ],
+ "versions": []
+ },
+ {
+ "format": "PortableDocumentFormat",
+ "fileExtensions": [
+ ".pdf"
+ ],
+ "contentTypes": [
+ "application/pdf"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlPresentation",
+ "fileExtensions": [
+ ".pptx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.presentationml.presentation"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlSpreadsheet",
+ "fileExtensions": [
+ ".xlsx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OutlookMailMessage",
+ "fileExtensions": [
+ ".msg"
+ ],
+ "contentTypes": [
+ "application/vnd.ms-outlook"
+ ],
+ "versions": []
+ },
+ {
+ "format": "HtmlFile",
+ "fileExtensions": [
+ ".html"
+ ],
+ "contentTypes": [
+ "text/html"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlWord",
+ "fileExtensions": [
+ ".docx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
+ ],
+ "versions": []
+ }
+ ]
+}
+```
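
One common use of this response is picking the right content-type for an upload based on a file's extension. A minimal sketch, using a trimmed copy of the documented response; `content_type_for` is a hypothetical helper, not part of the API.

```python
import json

# Trimmed copy of the documented formats response.
formats_response = json.loads("""
{
  "value": [
    {"format": "PlainText", "fileExtensions": [".txt"],
     "contentTypes": ["text/plain"], "versions": []},
    {"format": "OpenXmlWord", "fileExtensions": [".docx"],
     "contentTypes": ["application/vnd.openxmlformats-officedocument.wordprocessingml.document"],
     "versions": []}
  ]
}
""")

def content_type_for(extension, response):
    """Return the first content-type registered for a file extension, or None."""
    for fmt in response["value"]:
        if extension.lower() in fmt["fileExtensions"]:
            return fmt["contentTypes"][0]
    return None

print(content_type_for(".txt", formats_response))
```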
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-glossary-formats.md
+
+ Title: Get supported glossary formats method
+
+description: The get supported glossary formats method returns the list of supported glossary formats.
+Last updated: 04/21/2021
+# Get supported glossary formats
+
+The Get supported glossary formats method returns a list of glossary formats supported by the Document Translation service. The list includes the common file extensions used.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/glossaries/formats
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Returns the list of supported glossary file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get supported glossary formats response
+
+### Successful get supported glossary formats response
+
+The following information is returned in a successful response (see the example response below).
+
+|Name|Type|Description|
+|--|--|--|
+|value|FileFormat []|FileFormat[] contains the details listed below.|
+|value.format|string|Name of the format.|
+|value.fileExtensions|string[]|Supported file extensions for this format.|
+|value.contentTypes|string[]|Supported content-types for this format.|
+|value.versions|string[]|Supported versions.|
+
+### Error response
+
+|Name|Type|Description|
+|--|--|--|
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TSV",
+ "fileExtensions": [
+ ".tsv",
+ ".tab"
+ ],
+ "contentTypes": [
+ "text/tab-separated-values"
+ ],
+ "versions": []
+ }
+ ]
+}
+```
+
+### Example error response
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Storage Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-storage-sources.md
+
+ Title: Get supported storage sources method
+
+description: The get supported storage sources method returns a list of supported storage sources.
+Last updated: 04/21/2021
+# Get supported storage sources
+
+The Get supported storage sources method returns a list of storage sources/options supported by the Document Translation service.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/storagesources
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Successful request and returns the list of storage sources.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get supported storage sources response
+
+### Successful get supported storage sources response
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+|--|--|--|
+|value|string []|List of supported storage sources.|
+
+### Error response
+
+|Name|Type|Description|
+|--|--|--|
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ "AzureBlob"
+ ]
+}
+```
+
+### Example error response
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Translation Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-translation-status.md
+
+ Title: Get translation status
+
+description: The get translation status method returns the status for a document translation request.
+Last updated: 04/21/2021
+# Get translation status
+
+The Get translation status method returns the status for a document translation request. The status includes the overall request status and the status for documents that are being translated as part of that request.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+|--|--|--|
+|id|True|The operation ID.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Successful request. Returns the status of the batch translation operation. Headers: Retry-After: integer, ETag: string.|
+|401|Unauthorized. Check your credentials.|
+|404|Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get translation status response
+
+### Successful get translation status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+|--|--|--|
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
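
A client polling this endpoint often wants a single progress figure from the summary counts above. A minimal sketch; `fraction_complete` is an illustrative helper, not part of the API surface.

```python
def fraction_complete(summary):
    """Fraction of documents that have reached a final state.

    `summary` follows the shape documented above; documents count as done
    once they have succeeded, failed, or been canceled.
    """
    done = summary["success"] + summary["failed"] + summary["cancelled"]
    return done / summary["total"] if summary["total"] else 1.0

snapshot = {"total": 10, "failed": 1, "success": 6, "inProgress": 2,
            "notYetStarted": 1, "cancelled": 0, "totalCharacterCharged": 0}
print(fraction_complete(snapshot))  # 0.7
```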
+
+### Error response
+
+|Name|Type|Description|
+|--|--|--|
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 401
+
+```JSON
+{
+ "error": {
+ "code": "Unauthorized",
+ "message": "User is not authorized",
+ "target": "Document",
+ "innerError": {
+ "code": "Unauthorized",
+ "message": "Operation is not authorized"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-translations-status.md
+
+ Title: Get translations status
+
+description: The get translations status method returns a list of batch requests submitted and the status for each request.
+Last updated: 04/21/2021
+# Get translations status
+
+The Get translations status method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the subscription). The status for each request is sorted by id.
+
+If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+
+The `$top` and `$skip` query parameters can be used to specify the number of results to return and an offset into the collection.
+
+The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
+
+When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
+
+> [!NOTE]
+> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
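
The paging behavior above can be sketched as a small URL builder using Python's standard library. The endpoint value is a placeholder; the route matches the request URL documented below.

```python
from urllib.parse import urlencode

def status_list_url(endpoint, top=None, skip=None):
    """Build the batches listing URL with optional $top/$skip paging.

    When both are supplied, the server applies $skip first, then $top.
    """
    base = f"{endpoint}/translator/text/batch/v1.0-preview.1/batches"
    params = {}
    if top is not None:
        params["$top"] = top
    if skip is not None:
        params["$skip"] = skip
    # Keep the literal "$" in parameter names rather than percent-encoding it.
    return f"{base}?{urlencode(params, safe='$')}" if params else base

print(status_list_url("https://my-resource.cognitiveservices.azure.com",
                      top=5, skip=15))
```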
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+|--|--|--|
+|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Successful request. Returns the status of all the operations. Headers: Retry-After: integer, ETag: string.|
+|400|Bad Request. Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get translations status response
+
+### Successful get translations status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+|--|--|--|
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total count of characters charged.|
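
A common task with this listing is finding the jobs that still need polling. A minimal sketch based on the status values documented above; `active_jobs` is a hypothetical helper, and the second job ID below is a made-up placeholder.

```python
# Statuses from the table above that mean a job is still in flight.
ACTIVE_STATUSES = {"NotStarted", "Running", "Cancelling"}

def active_jobs(listing):
    """Return the IDs of jobs that haven't reached a terminal status."""
    return [job["id"] for job in listing["value"]
            if job["status"] in ACTIVE_STATUSES]

# Illustrative listing shaped like the documented response.
listing = {"value": [
    {"id": "727bf148-f327-47a0-9481-abae6362f11e", "status": "Succeeded"},
    {"id": "00000000-0000-0000-0000-000000000001", "status": "Running"},
]}
print(active_jobs(listing))
```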
+
+### Error response
+
+|Name|Type|Description|
+|--|--|--|
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ }
+ ]
+}
+```
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
+
+ Title: Start translation
+
+description: Start a document translation request with the Document Translation service.
+Last updated: 04/21/2021
+# Start translation
+
+Use this API to start a bulk (batch) translation request with the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
+
+The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to the subpath after the container name.
+
+Glossaries / Translation memory can be included in the request and are applied by the service when the document is translated.
+
+If the glossary is invalid or unreachable during translation, an error is indicated in the document status. If a file with the same name already exists at the destination, it will be overwritten. The targetUrl for each target language must be unique.
+
+## Request URL
+
+Send a `POST` request to:
+```HTTP
+POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Request Body: Batch Submission Request
+
+|Name|Type|Description|
+|--|--|--|
+|inputs|BatchRequest[]|BatchRequest listed below. The input list of documents or folders containing documents. Media Types: "application/json", "text/json", "application/*+json".|
+
+### Inputs
+
+Definition for the input batch translation request.
+
+|Name|Type|Required|Description|
+|--|--|--|--|
+|source|SourceInput[]|True|inputs.source listed below. Source of the input documents.|
+|storageType|StorageInputType[]|True|inputs.storageType listed below. Storage type of the input documents source string.|
+|targets|TargetInput[]|True|inputs.target listed below. Location of the destination for the output.|
+
+**inputs.source**
+
+Source of the input documents.
+
+|Name|Type|Required|Description|
+|--|--|--|--|
+|filter|DocumentFilter[]|False|DocumentFilter[] listed below.|
+|filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob URI, use the prefix to restrict subfolders for translation.|
+|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. This is most often used for file extensions.|
+|language|string|False|Language code. If none is specified, the language is auto-detected from the document.|
+|sourceUrl|string|True|Location of the folder / container or single file with your documents.|
+|storageSource|StorageSource|False|StorageSource listed below.|
+|storageSource.AzureBlob|string|False||
+
+**inputs.storageType**
+
+Storage type of the input documents source string.
+
+|Name|Type|
+|--|--|
+|file|string|
+|folder|string|
+
+**inputs.target**
+
+Destination for the finished translated documents.
+
+|Name|Type|Required|Description|
+|--|--|--|--|
+|category|string|False|Category / custom system for translation request.|
+|glossaries|Glossary[]|False|Glossary listed below. List of Glossary.|
+|glossaries.format|string|False|Format.|
+|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We will use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
+|glossaries.storageSource|StorageSource|False|StorageSource listed above.|
+|targetUrl|string|True|Location of the folder / container with your documents.|
+|language|string|True|Two-letter target language code. See the [list of language codes](../../language-support.md).|
+|storageSource|StorageSource []|False|StorageSource [] listed above.|
+|version|string|False|Version.|
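
The required fields above can be assembled programmatically. A minimal sketch; `batch_request` is a hypothetical helper, the SAS-token placeholders are illustrative, and only the required fields are included.

```python
import json

def batch_request(source_url, targets):
    """Assemble a minimal batch submission body.

    `targets` is a list of (target container URL, language code) pairs.
    This is a sketch, not an exhaustive builder; optional fields such as
    filters, glossaries, and storageType are omitted.
    """
    return {
        "inputs": [
            {
                "source": {"sourceUrl": source_url},
                "targets": [
                    {"targetUrl": url, "language": lang}
                    for url, lang in targets
                ],
            }
        ]
    }

body = batch_request(
    "https://my.blob.core.windows.net/source-en?<SAS-TOKEN>",
    [("https://my.blob.core.windows.net/target-fr?<SAS-TOKEN>", "fr")],
)
print(json.dumps(body, indent=2))
```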
+
+## Example request
+
+The following are examples of batch requests.
+
+**Translating all documents in a container**
+
+```json
+{
+  "inputs": [
+    {
+      "source": {
+        "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+      },
+      "targets": [
+        {
+          "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+          "language": "fr"
+        }
+      ]
+    }
+  ]
+}
+```
+
+**Translating all documents in a container applying glossaries**
+
+Ensure that you've created a glossary URL and SAS token for the specific blob/document (not for the container).
+
+```json
+{
+  "inputs": [
+    {
+      "source": {
+        "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+      },
+      "targets": [
+        {
+          "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+          "language": "fr",
+          "glossaries": [
+            {
+              "glossaryUrl": "https://my.blob.core.windows.net/glossaries/en-fr.xlf?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=BsciG3NWoOoRjOYesTaUmxlXzyjsX4AgVkt2AsxJ9to%3D",
+              "format": "xliff",
+              "version": "1.2"
+            }
+          ]
+        }
+      ]
+    }
+  ]
+}
+```
+
+**Translating specific folder in a container**
+
+Ensure that you've specified the folder name (case-sensitive) as the prefix in the filter, though the SAS token is still for the container.
+
+```json
+{
+  "inputs": [
+    {
+      "source": {
+        "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D",
+        "filter": {
+          "prefix": "MyFolder/"
+        }
+      },
+      "targets": [
+        {
+          "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+          "language": "fr"
+        }
+      ]
+    }
+  ]
+}
+```
+
+**Translating specific document in a container**
+
+* Ensure you have specified "storageType": "File"
+* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
+* Ensure you have specified the target filename as part of the target URL ΓÇô though the SAS token is still for the container.
+* Sample request below shows a single document getting translated into two target languages
+
+```json
+{
+  "inputs": [
+    {
+      "storageType": "File",
+      "source": {
+        "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+      },
+      "targets": [
+        {
+          "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+          "language": "es"
+        },
+        {
+          "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+          "language": "de"
+        }
+      ]
+    }
+  ]
+}
+```
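As a sketch, request bodies like the ones above can also be assembled in code. The following Python helper is illustrative only (`build_batch_input` is not part of any Azure SDK), and the `<SAS>` placeholders stand in for real SAS tokens:

```python
# Sketch: assemble a Document Translation batch request body in code.
# build_batch_input is a hypothetical helper, not part of any Azure SDK;
# "<SAS>" placeholders stand in for real SAS tokens.

def build_batch_input(source_url, targets, storage_type=None):
    """Build the JSON-serializable body for one batch input."""
    entry = {"source": {"sourceUrl": source_url}}
    if storage_type:
        entry["storageType"] = storage_type  # e.g. "File" for a single document
    entry["targets"] = [
        {"targetUrl": url, "language": lang} for url, lang in targets
    ]
    return {"inputs": [entry]}

payload = build_batch_input(
    "https://my.blob.core.windows.net/source-en/source-english.docx?<SAS>",
    [
        ("https://my.blob.core.windows.net/target/try/Target-Spanish.docx?<SAS>", "es"),
        ("https://my.blob.core.windows.net/target/try/Target-German.docx?<SAS>", "de"),
    ],
    storage_type="File",
)
```

Serializing `payload` with `json.dumps` yields a body shaped like the single-document, two-target sample above.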
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|---|---|
+|202|Accepted. The request was successful, and the batch request was created by the service. The `Operation-Location` response header indicates a status URL containing the operation ID.|
+|400|Bad Request. Invalid request. Check the input parameters.|
+|401|Unauthorized. Check your credentials.|
+|429|Request rate is too high.|
+|500|Internal Server Error.|
+|503|Service is currently unavailable. Try again later.|
+|Other status codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
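Because 429 and 503 indicate transient conditions, a client will typically retry those with backoff. The policy below (which codes to retry, how long to wait) is an assumption for illustration, not official service guidance:

```python
# Sketch: a simple retry policy for the status codes above.
# Treating 429/500/503 as retryable is an assumption, not service guidance.
RETRYABLE = {429, 500, 503}

def should_retry(status_code, attempt, max_attempts=3):
    """Retry transient failures up to max_attempts times."""
    return status_code in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt, base=1.0):
    """Exponential backoff: 1s, 2s, 4s, ..."""
    return base * (2 ** attempt)
```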
+
+## Error response
+
+|Name|Type|Description|
+|---|---|---|
+|code|string|Enum containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets the high-level error message.|
+|innerError|InnerErrorV2|New inner-error format, which conforms to the Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key-value pairs), and innerError (which can be nested).|
+|innerError.code|string|Gets the error code string.|
+|innerError.message|string|Gets the detailed error message.|
+
+## Examples
+
+### Example successful response
+
+The following information is returned in a successful response.
+
+You can find the job ID in the POST method's `Operation-Location` response header URL value. The last parameter of the URL is the operation's job ID (the string following "/operation/").
+
+```HTTP
+Operation-Location: https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0.preview.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55
+```
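A minimal sketch of extracting that job ID from the header value (the resource name here is a placeholder):

```python
# Sketch: parse the job ID out of an Operation-Location header value.
# "my-resource" is a placeholder resource name.

def job_id_from_operation_location(url):
    """Return the final path segment (the operation's job ID)."""
    return url.rstrip("/").rsplit("/", 1)[-1]

header = ("https://my-resource.cognitiveservices.azure.com/translator/text/"
          "batch/v1.0.preview.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55")
print(job_id_from_operation_location(header))  # 0FA2822F-4C2A-4317-9C20-658C801E0E55
```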
+
+### Example error response
+
+```JSON
+{
+ "error": {
+ "code": "ServiceUnavailable",
+ "message": "Service is temporary unavailable",
+ "innerError": {
+ "code": "ServiceTemporaryUnavailable",
+ "message": "Service is currently unavailable. Please try again later"
+ }
+ }
+}
+```
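A client can read both the high-level and inner error codes out of such a body. Sketched in Python against the sample above:

```python
import json

# Sketch: extract the error codes from the sample error response above.
body = '''
{
  "error": {
    "code": "ServiceUnavailable",
    "message": "Service is temporary unavailable",
    "innerError": {
      "code": "ServiceTemporaryUnavailable",
      "message": "Service is currently unavailable. Please try again later"
    }
  }
}
'''

error = json.loads(body)["error"]
print(error["code"])                # high-level error code
print(error["innerError"]["code"])  # more specific inner error code
```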
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/diagnostic-logging.md
-+ Last updated 06/14/2019
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md Binary files differ
confidential-computing Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/faq.md
Here are some ways you can deploy a DCsv2 VM:
**Will all OS images work with Azure confidential computing?**
-No. The virtual machines can only be deployed on Generation 2 operating machines with Ubuntu Server 18.04, Ubuntu Server 16.04, Windows Server 2019 Datacenter, and Windows Server 2016 Datacenter. Read more about Gen 2 VMs on [Linux](../virtual-machines/generation-2.md) and [Windows](../virtual-machines/generation-2.md)
+No. The virtual machines can only be deployed on Generation 2 images running Ubuntu Server 18.04, Ubuntu Server 20.04, Windows Server 2019 Datacenter, or Windows Server 2016 Datacenter. Read more about Gen 2 VMs on [Linux](../virtual-machines/generation-2.md) and [Windows](../virtual-machines/generation-2.md).
**DCsv2 virtual machines are grayed out in the portal and I can't select one**
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-marketplace.md
wget -qO - https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add
``` #### 2. Install the Intel SGX DCAP Driver
+Some versions of Ubuntu may already have the Intel SGX driver installed. Check using the following command:
+
+```bash
+dmesg | grep -i sgx
+[ 106.775199] sgx: intel_sgx: Intel SGX DCAP Driver {version}
+```
+If the output is blank, install the driver:
```bash sudo apt update sudo apt -y install dkms
-wget https://download.01.org/intel-sgx/sgx-dcap/1.4/linux/distro/ubuntuServer18.04/sgx_linux_x64_driver_1.21.bin -O sgx_linux_x64_driver.bin
+wget https://download.01.org/intel-sgx/sgx-dcap/1.7/linux/distro/ubuntu18.04-server/sgx_linux_x64_driver_1.35.bin -O sgx_linux_x64_driver.bin
chmod +x sgx_linux_x64_driver.bin sudo ./sgx_linux_x64_driver.bin ```
sudo ./sgx_linux_x64_driver.bin
#### 3. Install the Intel and Open Enclave packages and dependencies ```bash
-sudo apt -y install clang-7 libssl-dev gdb libsgx-enclave-common libsgx-enclave-common-dev libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
+sudo apt -y install clang-8 libssl-dev gdb libsgx-enclave-common libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
``` > [!NOTE]
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-portal.md
If you don't have an Azure subscription, [create an account](https://azure.micro
1. Configure the operating system image that you would like to use for your virtual machine.
- * **Choose Image**: For this tutorial, select Ubuntu 18.04 LTS. You may also select Windows Server 2019, Windows Server 2016, or and Ubuntu 16.04 LTS. If you choose to do so, you'll be redirected in this tutorial accordingly.
+ * **Choose Image**: For this tutorial, select Ubuntu 18.04 LTS. You may also select Windows Server 2019, Windows Server 2016, or Ubuntu 20.04 LTS. If you choose a different image, you'll be redirected accordingly in this tutorial.
* **Toggle the image for Gen 2**: Confidential compute virtual machines only run on [Generation 2](../virtual-machines/generation-2.md) images. Ensure the image you select is a Gen 2 image. Click the **Advanced** tab above where you're configuring the virtual machine. Scroll down until you find the section labeled "VM Generation". Select Gen 2 and then go back to the **Basics** tab.
If you don't have an Azure subscription, [create an account](https://azure.micro
![DCsv2-Series VMs](media/quick-create-portal/dcsv2-virtual-machines.png) > [!TIP]
- > You should see sizes **DC1s_v2**, **DC2s_v2**, **DC4s_V2**, and **DC8_v2**. These are the only virtual machine sizes that currently support confidential computing. [Learn more](virtual-machine-solutions.md).
+ > You should see sizes **DC1s_v2**, **DC2s_v2**, **DC4s_V2**, and **DC8_v2**. These are the only virtual machine sizes that currently support Intel SGX confidential computing. [Learn more](virtual-machine-solutions.md).
1. Fill in the following information:
wget -qO - https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add
``` #### 2. Install the Intel SGX DCAP Driver
+Some versions of Ubuntu may already have the Intel SGX driver installed. Check using the following command:
+
+```bash
+dmesg | grep -i sgx
+[ 106.775199] sgx: intel_sgx: Intel SGX DCAP Driver {version}
+```
+If the output is blank, install the driver:
```bash sudo apt update sudo apt -y install dkms
-wget https://download.01.org/intel-sgx/sgx-dcap/1.9/linux/distro/ubuntu18.04-server/sgx_linux_x64_driver_1.36.2.bin -O sgx_linux_x64_driver.bin
+wget https://download.01.org/intel-sgx/sgx-dcap/1.7/linux/distro/ubuntu18.04-server/sgx_linux_x64_driver_1.35.bin -O sgx_linux_x64_driver.bin
chmod +x sgx_linux_x64_driver.bin sudo ./sgx_linux_x64_driver.bin ```
sudo ./sgx_linux_x64_driver.bin
#### 3. Install the Intel and Open Enclave packages and dependencies + ```bash
-sudo apt -y install clang-7 libssl-dev gdb libsgx-enclave-common libsgx-enclave-common-dev libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
+sudo apt -y install clang-8 libssl-dev gdb libsgx-enclave-common libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
``` > [!NOTE]
confidential-computing Virtual Machine Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/virtual-machine-solutions.md
Follow a quickstart tutorial to deploy a DCsv2-Series virtual machine in less th
- **Resizing** – Because of their specialized hardware, you can only resize confidential computing instances within the same size family. For example, you can only resize a DCsv2-series VM from one DCsv2-series size to another. Resizing from a non-confidential computing size to a confidential computing size isn't supported. -- **Image** – To provide Intel Software Guard Extension (Intel SGX) support on confidential compute instances, all deployments need to be run on Generation 2 images. Azure confidential computing supports workloads running on Ubuntu 18.04 Gen 2, Ubuntu 16.04 Gen 2, Windows Server 2019 Gen 2, and Windows Server 2016 Gen 2. Read about [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md) to learn more about supported and unsupported scenarios.
+- **Image** – To provide Intel Software Guard Extension (Intel SGX) support on confidential compute instances, all deployments need to be run on Generation 2 images. Azure confidential computing supports workloads running on Ubuntu 18.04 Gen 2, Ubuntu 20.04 Gen 2, Windows Server 2019 Gen 2, and Windows Server 2016 Gen 2. Read about [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md) to learn more about supported and unsupported scenarios.
- **Storage** – Azure confidential computing virtual machine data disks and our ephemeral OS disks are on NVMe disks. Instances support only Premium SSD and Standard SSD disks, not Ultra SSD, or Standard HDD. Virtual machine size **DC8_v2** doesn't support Premium storage.
Under **properties**, you will also have to reference an image under **storagePr
"sku": "18_04-lts-gen2", "version": "latest" },
- "16_04-lts-gen2": {
+ "20_04-lts-gen2": {
"offer": "UbuntuServer", "publisher": "Canonical",
- "sku": "16_04-lts-gen2",
+ "sku": "20_04-lts-gen2",
"version": "latest" } ```
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-egress-ip-address.md
For more information about managing traffic and protecting Azure resources, see
[az-container-create]: /cli/azure/container#az_container_create [az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create [az-extension-add]: /cli/azure/extension#az_extension_add
-[az-network-firewall-update]: /cli/azure/ext/azure-firewall/network/firewall#ext-azure-firewall-az-network-firewall-update
+[az-network-firewall-update]: /cli/azure/network/firewall#az_network_firewall_update
[az-network-public-ip-show]: /cli/azure/network/public-ip/#az_network_public_ip_show [az-network-route-table-create]:/cli/azure/network/route-table/#az_network_route_table_create [az-network-route-table-route-create]: /cli/azure/network/route-table/route#az_network_route_table_route_create
-[az-network-firewall-ip-config-list]: /cli/azure/ext/azure-firewall/network/firewall/ip-config#ext-azure-firewall-az-network-firewall-ip-config-list
+[az-network-firewall-ip-config-list]: /cli/azure/network/firewall/ip-config#az_network_firewall_ip_config_list
[az-network-vnet-subnet-update]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_update [az-container-exec]: /cli/azure/container#az_container_exec [az-vm-create]: /cli/azure/vm#az_vm_create [az-vm-open-port]: /cli/azure/vm#az_vm_open_port [az-vm-list-ip-addresses]: /cli/azure/vm#az_vm_list_ip_addresses
-[az-network-firewall-application-rule-create]: /cli/azure/ext/azure-firewall/network/firewall/application-rule#ext-azure-firewall-az-network-firewall-application-rule-create
-[az-network-firewall-nat-rule-create]: /cli/azure/ext/azure-firewall/network/firewall/nat-rule#ext-azure-firewall-az-network-firewall-nat-rule-create
+[az-network-firewall-application-rule-create]: /cli/azure/network/firewall/application-rule#az_network_firewall_application_rule_create
+[az-network-firewall-nat-rule-create]: /cli/azure/network/firewall/nat-rule#az_network_firewall_nat_rule_create
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-github-action.md
Browse the [GitHub Marketplace](https://github.com/marketplace?type=actions) for
[az-container-show]: /cli/azure/container#az_container_show [az-container-delete]: /cli/azure/container#az_container_delete [az-extension-add]: /cli/azure/extension#az_extension_add
-[az-container-app-up]: /cli/azure/ext/deploy-to-azure/container/app#ext-deploy-to-azure-az-container-app-up
+[az-container-app-up]: /cli/azure/container/app#az_container_app_up
container-instances Container Instances Image Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-image-security.md
Security monitoring and image scanning solutions such as [Twistlock](https://azu
### Protect credentials
-Containers can spread across several clusters and Azure regions. So, you must secure credentials required for logins or API access, such as passwords or tokens. Ensure that only privileged users can access those containers in transit and at rest. Inventory all credential secrets, and then require developers to use emerging secrets-management tools that are designed for container platforms. Make sure that your solution includes encrypted databases, TLS encryption for secrets data in transit, and least-privilege [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). [Azure Key Vault](../key-vault/general/security-overview.md) is a cloud service that safeguards encryption keys and secrets (such as certificates, connection strings, and passwords) for containerized applications. Because this data is sensitive and business critical, secure access to your key vaults so that only authorized applications and users can access them.
+Containers can spread across several clusters and Azure regions. So, you must secure credentials required for logins or API access, such as passwords or tokens. Ensure that only privileged users can access those containers in transit and at rest. Inventory all credential secrets, and then require developers to use emerging secrets-management tools that are designed for container platforms. Make sure that your solution includes encrypted databases, TLS encryption for secrets data in transit, and least-privilege [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). [Azure Key Vault](../key-vault/general/security-features.md) is a cloud service that safeguards encryption keys and secrets (such as certificates, connection strings, and passwords) for containerized applications. Because this data is sensitive and business critical, secure access to your key vaults so that only authorized applications and users can access them.
## Considerations for the container ecosystem
container-instances Container Instances Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-quickstart-template.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-aci-linuxcontainer-public-ip/). The following resource is defined in the template:
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-authentication.md
Output displays the access token, abbreviated here:
"loginServer": "myregistry.azurecr.io" } ```
-For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage [docker login](https://docs.docker.com/engine/reference/commandline/login/)) credentials. For example, store the token value in an environment variable:
+For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage [docker login](https://docs.docker.com/engine/reference/commandline/login/) credentials. For example, store the token value in an environment variable:
```bash TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query accessToken)
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
If this issue occurs with a system-assigned identity, please [create an Azure su
## Next steps * Learn more about [encryption at rest in Azure](../security/fundamentals/encryption-atrest.md).
-* Learn more about access policies and how to [secure access to a key vault](../key-vault/general/security-overview.md).
+* Learn more about access policies and how to [secure access to a key vault](../key-vault/general/security-features.md).
<!-- LINKS - external -->
container-registry Container Registry Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-faq.md
You may disable anonymous pull access at any time by setting `--anonymous-pull-e
> * Before attempting an anonymous pull operation, run `docker logout` to ensure that you clear any existing Docker credentials. > * Only data-plane operations are available to unauthenticated clients. > * The registry may throttle a high rate of unauthenticated requests.
+> * Currently, anonymous pull access isn't supported in [geo-replicated](container-registry-geo-replication.md) registry regions.
> [!WARNING] > Anonymous pull access currently applies to all repositories in the registry. If you manage repository access using [repository-scoped tokens](container-registry-repository-scoped-permissions.md), be aware that all users may pull from those repositories in a registry enabled for anonymous pull. We recommend deleting tokens when anonymous pull access is enabled.
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet-v4.md Binary files differ
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
To use the Azure Cosmos DB RBAC in your application, you have to update the way
The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of AAD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class: -- [in .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)-- [in Java](/java/api/overview/azure/identity-readme#credential-classes)-- [in JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)
+- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)
+- [In Java](/java/api/overview/azure/identity-readme#credential-classes)
+- [In JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)
+- In REST API
The examples below use a service principal with a `ClientSecretCredential` instance.
const client = new CosmosClient({
}); ```
+### In REST API
+
+The Azure Cosmos DB RBAC is currently supported with the 2021-03-15 version of the REST API. When constructing the [authorization header](/rest/api/cosmos-db/access-control-on-cosmosdb-resources), set the **type** parameter to **aad** and the hash signature (**sig**) to the **OAuth token**, as shown in the following example:
+
+`type=aad&ver=1.0&sig=<token-from-oauth>`
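A minimal sketch of constructing that header value in Python; URL-encoding the full value follows the general Cosmos DB authorization-header convention, so verify the exact encoding against the linked access-control article:

```python
from urllib.parse import quote

# Sketch: build the Cosmos DB authorization header value for AAD (RBAC) auth.
# URL-encoding the whole value mirrors the usual Cosmos DB auth-header
# convention; confirm against the access-control documentation.
def aad_auth_header(oauth_token):
    return quote(f"type=aad&ver=1.0&sig={oauth_token}", safe="")
```

Pass the resulting string as the `Authorization` header, where `oauth_token` is the AAD token acquired for the Cosmos DB resource.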
+ ## Auditing data requests When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the AAD identity used for every data request sent to your Azure Cosmos DB account.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-changefeed.md
|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.ChangeFeedProcessor/)| |**API documentation**|[Change Feed Processor library API reference documentation](/dotnet/api/microsoft.azure.documents.changefeedprocessor)| |**Get started**|[Get started with the Change Feed Processor .NET SDK](change-feed.md)|
-|**Current supported framework**| [Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)</br> [Microsoft .NET Core](https://www.microsoft.com/net/download/core) |
+|**Current supported framework**| [Microsoft .NET Framework 4.5](https://www.microsoft.com/download/details.aspx?id=30653)</br> [Microsoft .NET Core](https://dotnet.microsoft.com/download) |
> [!NOTE] > If you are using change feed processor, please see the latest version 3.x of the [.NET SDK](change-feed-processor.md), which has change feed built into the SDK.
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-metrics.md
Previously updated : 03/22/2021 Last updated : 04/09/2021 # Monitor and debug with metrics in Azure Cosmos DB
This article walks through common use cases and how Azure Cosmos DB metrics can
:::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Cosmos DB performance metrics in Azure portal":::
-The following metrics are available from the **Metrics** pane:
+The following metrics are available from the **Metrics** pane:
* **Throughput metrics** - This metric shows the number of requests consumed or failed (429 response code) because the throughput or storage capacity provisioned for the container has exceeded.
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md Binary files differ
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-export-acm-data.md Binary files differ
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
You get the subscriptionId as part of the response from the command.
First, install the extension by running `az extension add --name account` and `az extension add --name alias`.
-Run the following [az account alias create](/cli/azure/ext/account/account/alias#ext_account_az_account_alias_create) command and provide `billing-scope` and `id` from one of your `enrollmentAccounts`.
+Run the following [az account alias create](/cli/azure/account/alias#az_account_alias_create) command and provide `billing-scope` and `id` from one of your `enrollmentAccounts`.
```azurecli-interactive az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/1234567/enrollmentAccounts/654321" --display-name "Dev Team Subscription" --workload "Production"
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
You get the subscriptionId as part of the response from the command.
First, install the extension by running `az extension add --name account` and `az extension add --name alias`.
-Run the [az account alias create](/cli/azure/ext/account/account/alias#ext_account_az_account_alias_create) following command.
+Run the [az account alias create](/cli/azure/account/alias#az_account_alias_create) following command.
```azurecli az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" --display-name "Dev Team Subscription" --workload "Production"
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Pass the optional *resellerId* copied from the second step in the `New-AzSubscri
First, install the extension by running `az extension add --name account` and `az extension add --name alias`.
-Run the following [az account alias create](/cli/azure/ext/account/account/alias#ext_account_az_account_alias_create) command.
+Run the following [az account alias create](/cli/azure/account/alias#az_account_alias_create) command.
```azurecli az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --display-name "Dev Team Subscription" --workload "Production"
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
To see a full list of all parameters, see [New-AzSubscription](/powershell/modul
First, install the preview extension by running `az extension add --name subscription`.
-Run the [az account create](/cli/azure/ext/subscription/account#-ext-subscription-az-account-create) command below, replacing `<enrollmentAccountObjectId>` with the `name` you copied in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
+Run the [az account create](/cli/azure/account#-ext-subscription-az-account-create) command below, replacing `<enrollmentAccountObjectId>` with the `name` you copied in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
```azurecli-interactive az account create --offer-type "MS-AZR-0017P" --display-name "Dev Team Subscription" --enrollment-account-object-id "<enrollmentAccountObjectId>" --owner-object-id "<userObjectId>","<servicePrincipalObjectId>"
az account create --offer-type "MS-AZR-0017P" --display-name "Dev Team Subscript
| `owner-upn` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`.| | `owner-spn` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole).|
-To see a full list of all parameters, see [az account create](/cli/azure/ext/subscription/account#-ext-subscription-az-account-create).
+To see a full list of all parameters, see [az account create](/cli/azure/account#-ext-subscription-az-account-create).
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
Previously updated : 03/29/2021 Last updated : 04/21/2021
# Manage Reservations for Azure resources
If you have questions or need help, [create a support request](https://go.micro
## Next steps

To learn more about Azure Reservations, see the following articles:
-- [What are reservations for Azure?](save-compute-costs-reservations.md)
-Buy a service plan:
-- [Prepay for Virtual Machines with Azure Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)
-- [Prepay for SQL Database compute resources with Azure SQL Database reserved capacity](../../azure-sql/database/reserved-capacity-overview.md)
-- [Prepay for Azure Cosmos DB resources with Azure Cosmos DB reserved capacity](../../cosmos-db/cosmos-db-reserved-capacity.md)
-Buy a software plan:
-- [Prepay for Red Hat software plans from Azure Reservations](../../virtual-machines/linux/prepay-suse-software-charges.md)
-- [Prepay for SUSE software plans from Azure Reservations](../../virtual-machines/linux/prepay-suse-software-charges.md)
-Understand discount and usage:
-- [Understand how the VM reservation discount is applied](../manage/understand-vm-reservation-charges.md)
-- [Understand how the Red Hat Enterprise Linux software plan discount is applied](understand-rhel-reservation-charges.md)
-- [Understand how the SUSE Linux Enterprise software plan discount is applied](understand-suse-reservation-charges.md)
-- [Understand how other reservation discounts are applied](understand-reservation-charges.md)
-- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
-- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
-- [Windows software costs not included with Reservations](reserved-instance-windows-software-costs.md)
+ - [View reservation utilization](reservation-utilization.md)
+ - [Exchange and refund](exchange-and-refund-azure-reservations.md)
+ - [Renew reservations](reservation-renew.md)
+ - [Transfers between tenants](troubleshoot-reservation-transfers-between-tenants.md)
+ - [Find a reservation purchaser from Azure logs](find-reservation-purchaser-from-logs.md)
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 04/14/2021 Last updated : 04/21/2021
# Azure Policy built-in definitions for Data Factory (Preview)
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-azure-cli.md
This quickstart uses an Azure Storage account, which includes a container with a
## Create a data factory
-To create an Azure data factory, run the [az datafactory factory create](/cli/azure/ext/datafactory/datafactory/factory#ext_datafactory_az_datafactory_factory_create) command:
+To create an Azure data factory, run the [az datafactory factory create](/cli/azure/datafactory/factory#az_datafactory_factory_create) command:
```azurecli
az datafactory factory create --resource-group ADFQuickStartRG \
> [!IMPORTANT]
> Replace `ADFTutorialFactory` with a globally unique data factory name, for example, ADFTutorialFactorySP1127.
-You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/ext/datafactory/datafactory/factory#ext_datafactory_az_datafactory_factory_show) command:
+You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/datafactory/factory#az_datafactory_factory_show) command:
```azurecli
az datafactory factory show --resource-group ADFQuickStartRG \
Next, create a linked service and two datasets.
-1. Get the connection string for your storage account by using the [az storage account show-connection-string](/cli/azure/ext/datafactory/datafactory/factory#ext_datafactory_az_datafactory_factory_show) command:
+1. Get the connection string for your storage account by using the [az storage account show-connection-string](/cli/azure/storage/account#az_storage_account_show_connection_string) command:
```azurecli
az storage account show-connection-string --resource-group ADFQuickStartRG \
Next, create a linked service and two datasets.
}
```
-1. Create a linked service, named `AzureStorageLinkedService`, by using the [az datafactory linked-service create](/cli/azure/ext/datafactory/datafactory/linked-service#ext_datafactory_az_datafactory_linked_service_create) command:
+1. Create a linked service, named `AzureStorageLinkedService`, by using the [az datafactory linked-service create](/cli/azure/datafactory/linked-service#az_datafactory_linked_service_create) command:
```azurecli
az datafactory linked-service create --resource-group ADFQuickStartRG \
Next, create a linked service and two datasets.
}
```
-1. Create an input dataset named `InputDataset` by using the [az datafactory dataset create](/cli/azure/ext/datafactory/datafactory/dataset#ext_datafactory_az_datafactory_dataset_create) command:
+1. Create an input dataset named `InputDataset` by using the [az datafactory dataset create](/cli/azure/datafactory/dataset#az_datafactory_dataset_create) command:
```azurecli
az datafactory dataset create --resource-group ADFQuickStartRG \
Next, create a linked service and two datasets.
}
```
-1. Create an output dataset named `OutputDataset` by using the [az datafactory dataset create](/cli/azure/ext/datafactory/datafactory/dataset#ext_datafactory_az_datafactory_dataset_create) command:
+1. Create an output dataset named `OutputDataset` by using the [az datafactory dataset create](/cli/azure/datafactory/dataset#az_datafactory_dataset_create) command:
```azurecli
az datafactory dataset create --resource-group ADFQuickStartRG \
Finally, create and run the pipeline.
}
```
-1. Create a pipeline named `Adfv2QuickStartPipeline` by using the [az datafactory pipeline create](/cli/azure/ext/datafactory/datafactory/pipeline#ext_datafactory_az_datafactory_pipeline_create) command:
+1. Create a pipeline named `Adfv2QuickStartPipeline` by using the [az datafactory pipeline create](/cli/azure/datafactory/pipeline#az_datafactory_pipeline_create) command:
```azurecli
az datafactory pipeline create --resource-group ADFQuickStartRG \
Finally, create and run the pipeline.
--pipeline @Adfv2QuickStartPipeline.json
```
-1. Run the pipeline by using the [az datafactory pipeline create-run](/cli/azure/ext/datafactory/datafactory/pipeline#ext_datafactory_az_datafactory_pipeline_create_run) command:
+1. Run the pipeline by using the [az datafactory pipeline create-run](/cli/azure/datafactory/pipeline#az_datafactory_pipeline_create_run) command:
```azurecli
az datafactory pipeline create-run --resource-group ADFQuickStartRG \
Finally, create and run the pipeline.
This command returns a run ID. Copy it for use in the next command.
-1. Verify that the pipeline run succeeded by using the [az datafactory pipeline-run show](/cli/azure/ext/datafactory/datafactory/pipeline-run#ext_datafactory_az_datafactory_pipeline_run_show) command:
+1. Verify that the pipeline run succeeded by using the [az datafactory pipeline-run show](/cli/azure/datafactory/pipeline-run#az_datafactory_pipeline_run_show) command:
```azurecli
az datafactory pipeline-run show --resource-group ADFQuickStartRG \
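The run ID handoff between `create-run` and `pipeline-run show` can be scripted in one step. A Bash sketch, assuming `create-run` returns JSON containing a `runId` field and accepts the pipeline via `--name`, using the resource names from this quickstart:

```azurecli
# Capture the run ID from create-run and pass it straight to pipeline-run show.
runId=$(az datafactory pipeline create-run --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --name Adfv2QuickStartPipeline \
    --query runId --output tsv)
az datafactory pipeline-run show --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --run-id "$runId"
```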
All of the resources in this quickstart are part of the same resource group. To
az group delete --name ADFQuickStartRG
```
-If you're using this resource group for anything else, instead, delete individual resources. For instance, to remove the linked service, use the [az datafactory linked-service delete](/cli/azure/ext/datafactory/datafactory/linked-service#ext_datafactory_az_datafactory_linked_service_delete) command.
+If you're using this resource group for anything else, instead, delete individual resources. For instance, to remove the linked service, use the [az datafactory linked-service delete](/cli/azure/datafactory/linked-service#az_datafactory_linked_service_delete) command.
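For instance, a sketch using the names from this quickstart (assuming the linked service is addressed with `--name`):

```azurecli
# Delete only the linked service, leaving the other resources in place.
az datafactory linked-service delete --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --name AzureStorageLinkedService
```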
In this quickstart, you created the following JSON files:
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode.md
Azure Data Lake Tools for VS Code supports Windows, Linux, and macOS. U-SQL loc
- [Visual Studio Code](https://www.visualstudio.com/products/code-vs.aspx)
-For MacOS and Linux:
+For macOS and Linux:
-- [.NET Core SDK 5.0](https://www.microsoft.com/net/download/core)
-- [Mono 6.12.x](https://www.mono-project.com/download/)
+- [.NET 5.0 SDK](https://dotnet.microsoft.com/download)
+- [Mono 5.2.x](https://www.mono-project.com/download/)
## Install Azure Data Lake Tools
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Previously updated : 02/23/2021 Last updated : 04/20/2021
# Share and receive data from Azure Blob Storage and Azure Data Lake Storage
Last updated 02/23/2021
Azure Data Share supports snapshot-based sharing from a storage account. This article explains how to share and receive data from Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
-Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. Only block blobs are currently supported. Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
+Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they are received as block blobs. Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the share data. Or they can use the incremental snapshot capability to copy only new or updated files. The incremental snapshot capability is based on the last modified time of the files.
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data.md
Use these commands to create the resource:
az group create --name testresourcegroup --location "East US 2"
```
-1. Run the [az datashare account create](/cli/azure/ext/datashare/datashare/account#ext_datashare_az_datashare_account_create) command to create a Data Share account:
+1. Run the [az datashare account create](/cli/azure/datashare/account#az_datashare_account_create) command to create a Data Share account:
```azurecli
az datashare account create --resource-group testresourcegroup --name datashareaccount --location "East US 2"
```
- Run the [az datashare account list](/cli/azure/ext/datashare/datashare/account#ext_datashare_az_datashare_account_list) command to see your Data Share accounts:
+ Run the [az datashare account list](/cli/azure/datashare/account#az_datashare_account_list) command to see your Data Share accounts:
```azurecli
az datashare account list --resource-group testresourcegroup
Use these commands to create the resource:
az storage container create --name ContosoMarketplaceContainer --account-name ContosoMarketplaceAccount
```
-1. Run the [az datashare create](/cli/azure/ext/datashare/datashare#ext_datashare_az_datashare_create) command to create your Data Share:
+1. Run the [az datashare create](/cli/azure/datashare#az_datashare_create) command to create your Data Share:
```azurecli
az datashare create --resource-group testresourcegroup \
Use these commands to create the resource:
--description "Data Share" --share-kind "CopyBased" --terms "Confidential"
```
-1. Use the [az datashare invitation create](/cli/azure/ext/datashare/datashare/invitation#ext_datashare_az_datashare_invitation_create) command to create the invitation for the specified address:
+1. Use the [az datashare invitation create](/cli/azure/datashare/invitation#az_datashare_invitation_create) command to create the invitation for the specified address:
```azurecli
az datashare invitation create --resource-group testresourcegroup \
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/subscribe-to-data-share.md
Start by preparing your environment for the Azure CLI:
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-Run the [az datashare consumer invitation list](/cli/azure/ext/datashare/datashare/consumer/invitation#ext_datashare_az_datashare_consumer_invitation_list) command to see your current invitations:
+Run the [az datashare consumer invitation list](/cli/azure/datashare/consumer/invitation#az_datashare_consumer_invitation_list) command to see your current invitations:
```azurecli
az datashare consumer invitation list --subscription 11111111-1111-1111-1111-111111111111
Copy your invitation ID for use in the next section.
### [Azure CLI](#tab/azure-cli)
-Use the [az datashare consumer share-subscription create](/cli/azure/ext/datashare/datashare/consumer/share-subscription#ext_datashare_az_datashare_consumer_share_subscription_create) command to create the Data Share.
+Use the [az datashare consumer share-subscription create](/cli/azure/datashare/consumer/share-subscription#az_datashare_consumer_share_subscription_create) command to create the Data Share.
```azurecli
az datashare consumer share-subscription create --resource-group share-rg \
Follow the steps below to configure where you want to receive data.
Use these commands to configure where you want to receive data.
-1. Run the [az datashare consumer share-subscription list-source-dataset](/cli/azure/ext/datashare/datashare/consumer/share-subscription#ext_datashare_az_datashare_consumer_share_subscription_list_source_dataset) command to get the data set ID:
+1. Run the [az datashare consumer share-subscription list-source-dataset](/cli/azure/datashare/consumer/share-subscription#az_datashare_consumer_share_subscription_list_source_dataset) command to get the data set ID:
```azurecli
az datashare consumer share-subscription list-source-dataset \
Use these commands to configure where you want to receive data.
\"storage_account_name\":\"datashareconsumersa\",\"kind\":\"BlobFolder\",\"prefix\":\"consumer\"}'
```
-1. Use the [az datashare consumer dataset-mapping create](/cli/azure/ext/datashare/datashare/consumer/dataset-mapping#ext_datashare_az_datashare_consumer_dataset_mapping_create) command to create the dataset mapping:
+1. Use the [az datashare consumer dataset-mapping create](/cli/azure/datashare/consumer/dataset-mapping#az_datashare_consumer_dataset_mapping_create) command to create the dataset mapping:
```azurecli
az datashare consumer dataset-mapping create --resource-group "share-rg" \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111
```
-1. Run the [az datashare consumer share-subscription synchronization start](/cli/azure/ext/datashare/datashare/consumer/share-subscription/synchronization#ext_datashare_az_datashare_consumer_share_subscription_synchronization_start) command to start dataset synchronization.
+1. Run the [az datashare consumer share-subscription synchronization start](/cli/azure/datashare/consumer/share-subscription/synchronization#az_datashare_consumer_share_subscription_synchronization_start) command to start dataset synchronization.
```azurecli
az datashare consumer share-subscription synchronization start \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111
```
- Run the [az datashare consumer share-subscription synchronization list](/cli/azure/ext/datashare/datashare/consumer/share-subscription/synchronization#ext_datashare_az_datashare_consumer_share_subscription_synchronization_list) command to see a list of your synchronizations:
+ Run the [az datashare consumer share-subscription synchronization list](/cli/azure/datashare/consumer/share-subscription/synchronization#az_datashare_consumer_share_subscription_synchronization_list) command to see a list of your synchronizations:
```azurecli
az datashare consumer share-subscription synchronization list \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111
```
- Use the [az datashare consumer share-subscription list-source-share-synchronization-setting](/cli/azure/ext/datashare/datashare/consumer/share-subscription#ext_datashare_az_datashare_consumer_share_subscription_list_source_share_synchronization_setting) command to see synchronization settings set on your share.
+ Use the [az datashare consumer share-subscription list-source-share-synchronization-setting](/cli/azure/datashare/consumer/share-subscription#az_datashare_consumer_share_subscription_list_source_share_synchronization_setting) command to see synchronization settings set on your share.
```azurecli
az datashare consumer share-subscription list-source-share-synchronization-setting \
These steps only apply to snapshot-based sharing.
### [Azure CLI](#tab/azure-cli)
-Run the [az datashare consumer trigger create](/cli/azure/ext/datashare/datashare/consumer/trigger#ext_datashare_az_datashare_consumer_trigger_create) command to trigger a snapshot:
+Run the [az datashare consumer trigger create](/cli/azure/datashare/consumer/trigger#az_datashare_consumer_trigger_create) command to trigger a snapshot:
```azurecli
az datashare consumer trigger create --resource-group "share-rg" \
data-share Supported Data Stores https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/supported-data-stores.md
Previously updated : 12/16/2020 Last updated : 04/20/2021
# Supported data stores in Azure Data Share
The following table explains the combinations and options that data consumers ca
| Data Explorer ||||||| ✓ |

## Share from a storage account
-Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. Only block blobs are currently supported.
+Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they are received as block blobs.
When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the shared data. Or they can use the incremental snapshot capability to copy only new files or updated files.
databox-online Azure Stack Edge Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-create-iot-edge-module.md
Before you begin, make sure you have:
- [Visual Studio Code](https://code.visualstudio.com/).
- [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
- [Azure IoT Edge extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
- - [.NET Core 2.1 SDK](https://www.microsoft.com/net/download).
+ - [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1).
- [Docker CE](https://store.docker.com/editions/community/docker-ce-desktop-windows). You may have to create an account to download and install the software.

## Create a container registry
databox-online Azure Stack Edge Gpu Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-iot-edge-module.md
Before you begin, make sure you have:
- [Visual Studio Code](https://code.visualstudio.com/).
- [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp).
- [Azure IoT Edge extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
- - [.NET Core 2.1 SDK](https://www.microsoft.com/net/download).
+ - [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1).
- [Docker CE](https://store.docker.com/editions/community/docker-ce-desktop-windows). You may have to create an account to download and install the software.

## Create a container registry
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
Do the following steps using Azure CLI to order a device:
|query| The JMESPath query string. For more information, see [JMESPath](http://jmespath.org/). | --query <string>|
|verbose| Include verbose logging. | --verbose |
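Because the commands accept the `--query` option from the table above, output can be trimmed client-side with a JMESPath expression. A sketch (the top-level `name` property is an assumption about the job list output shape):

```azurecli
# List only the names of Data Box orders in a resource group,
# filtering on the client with a JMESPath query.
az databox job list --resource-group <resource-group> --query "[].name" --output tsv
```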
-2. In your command-prompt of choice or terminal, run [az data box job create](/cli/azure/ext/databox/databox/job#ext-databox-az-databox-job-create) to create your Azure Data Box order.
+2. In your command prompt or terminal of choice, run [az databox job create](/cli/azure/databox/job#az_databox_job_create) to create your Azure Data Box order.
```azurecli
az databox job create --resource-group <resource-group> --name <order-name> --location <azure-location> --sku <databox-device-type> --contact-name <contact-name> --phone <phone-number> --email-list <email-list> --street-address1 <street-address-1> --street-address2 <street-address-2> --city "contact-city" --state-or-province <state-province> --country <country> --postal-code <postal-code> --company-name <company-name> --storage-account "storage-account"
Microsoft then prepares and dispatches your device via a regional carrier. You r
### Track a single order
-To get tracking information about a single, existing Azure Data Box order, run [`az databox job show`](/cli/azure/ext/databox/databox/job#ext-databox-az-databox-job-show). The command displays information about the order such as, but not limited to: name, resource group, tracking information, subscription ID, contact information, shipment type, and device sku.
+To get tracking information about a single, existing Azure Data Box order, run [`az databox job show`](/cli/azure/databox/job#az_databox_job_show). The command displays information about the order such as, but not limited to: name, resource group, tracking information, subscription ID, contact information, shipment type, and device sku.
```azurecli
az databox job show --resource-group <resource-group> --name <order-name>
To get tracking information about a single, existing Azure Data Box order, run [
### List all orders
-If you have ordered multiple devices, you can run [`az databox job list`](/cli/azure/ext/databox/databox/job#ext-databox-az-databox-job-list) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
+If you have ordered multiple devices, you can run [`az databox job list`](/cli/azure/databox/job#az_databox_job_list) to view all your Azure Data Box orders. The command lists all orders that belong to a specific resource group. Also displayed in the output: order name, shipping status, Azure region, delivery type, order status. Canceled orders are also included in the list.
The command also displays time stamps of each order.
```azurecli
To delete a canceled order, go to **Overview** and select **Delete** from the co
### Cancel an order
-To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/ext/databox/databox/job#ext-databox-az-databox-job-cancel). You are required to specify your reason for canceling the order.
+To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/databox/job#az_databox_job_cancel). You are required to specify your reason for canceling the order.
```azurecli
az databox job cancel --resource-group <resource-group> --name <order-name> --reason <cancel-description>
To cancel an Azure Data Box order, run [`az databox job cancel`](/cli/azure/ext/
### Delete an order
-If you have canceled an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/ext/databox/databox/job#ext-databox-az-databox-job-delete) to delete the order.
+If you have canceled an Azure Data Box order, you can run [`az databox job delete`](/cli/azure/databox/job#az_databox_job_delete) to delete the order.
```azurecli
az databox job delete --name [-n] <order-name> --resource-group <resource-group> [--yes] [--verbose]
databox Data Box Disk Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-quickstart-portal.md
Use these Azure CLI commands to create a Data Box Disk job.
az storage account create --resource-group databox-rg --name databoxtestsa
```
-1. Run the [az databox job create](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_create) command to create a Data Box job with the SKU DataBoxDisk:
+1. Run the [az databox job create](/cli/azure/databox/job#az_databox_job_create) command to create a Data Box job with the SKU DataBoxDisk:
```azurecli
az databox job create --resource-group databox-rg --name databoxdisk-job \
Use these Azure CLI commands to create a Data Box Disk job.
--storage-account databoxtestsa --expected-data-size 1
```
-1. Run the [az databox job update](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_update) to update a job, as in this example, where you change the contact name and email:
+1. Run the [az databox job update](/cli/azure/databox/job#az_databox_job_update) to update a job, as in this example, where you change the contact name and email:
```azurecli
az databox job update -g databox-rg --name databox-job --contact-name "Robert Anic" --email-list RobertAnic@contoso.com
```
- Run the [az databox job show](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_show) command to get information about the job:
+ Run the [az databox job show](/cli/azure/databox/job#az_databox_job_show) command to get information about the job:
```azurecli
az databox job show --resource-group databox-rg --name databox-job
```
- Use the [az databox job list]( /cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_list) command to see all the Data Box jobs for a resource group:
+ Use the [az databox job list]( /cli/azure/databox/job#az_databox_job_list) command to see all the Data Box jobs for a resource group:
```azurecli
az databox job list --resource-group databox-rg
```
- Run the [az databox job cancel](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_cancel) command to cancel a job:
+ Run the [az databox job cancel](/cli/azure/databox/job#az_databox_job_cancel) command to cancel a job:
```azurecli
az databox job cancel --resource-group databox-rg --name databox-job --reason "Cancel job."
```
- Run the [az databox job delete](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_delete) command to delete a job:
+ Run the [az databox job delete](/cli/azure/databox/job#az_databox_job_delete) command to delete a job:
```azurecli
az databox job delete --resource-group databox-rg --name databox-job
```
-1. Use the [az databox job list-credentials]( /cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_list_credentials) command to list credentials for a Data Box job:
+1. Use the [az databox job list-credentials]( /cli/azure/databox/job#az_databox_job_list_credentials) command to list credentials for a Data Box job:
```azurecli
az databox job list-credentials --resource-group "databox-rg" --name "databoxdisk-job"
databox Data Box Heavy Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-heavy-quickstart-portal.md
Use these Azure CLI commands to create a Data Box Heavy job.
az storage account create --resource-group databox-rg --name databoxtestsa
```
-1. Run the [az databox job create](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_create) command to create a Data Box job with the **--sku** value of `DataBoxHeavy`:
+1. Run the [az databox job create](/cli/azure/databox/job#az_databox_job_create) command to create a Data Box job with the **--sku** value of `DataBoxHeavy`:
```azurecli
az databox job create --resource-group databox-rg --name databoxheavy-job \
Use these Azure CLI commands to create a Data Box Heavy job.
> [!NOTE]
> Make sure your subscription supports Data Box Heavy.
-1. Run the [az databox job update](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_update) to update a job, as in this example, where you change the contact name and email:
+1. Run the [az databox job update](/cli/azure/databox/job#az_databox_job_update) to update a job, as in this example, where you change the contact name and email:
```azurecli
az databox job update -g databox-rg --name databox-job --contact-name "Robert Anic" --email-list RobertAnic@contoso.com
```
- Run the [az databox job show](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_show) command to get information about the job:
+ Run the [az databox job show](/cli/azure/databox/job#az_databox_job_show) command to get information about the job:
```azurecli az databox job show --resource-group databox-rg --name databox-job ```
- Use the [az databox job list]( /cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_list) command to see all the Data Box jobs for a resource group:
+ Use the [az databox job list]( /cli/azure/databox/job#az_databox_job_list) command to see all the Data Box jobs for a resource group:
```azurecli az databox job list --resource-group databox-rg ```
- Run the [az databox job cancel](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_cancel) command to cancel a job:
+ Run the [az databox job cancel](/cli/azure/databox/job#az_databox_job_cancel) command to cancel a job:
```azurecli az databox job cancel --resource-group databox-rg --name databox-job --reason "Cancel job." ```
- Run the [az databox job delete](/cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_delete) command to delete a job:
+ Run the [az databox job delete](/cli/azure/databox/job#az_databox_job_delete) command to delete a job:
```azurecli az databox job delete --resource-group databox-rg --name databox-job ```
-1. Use the [az databox job list-credentials]( /cli/azure/ext/databox/databox/job#ext_databox_az_databox_job_list_credentials) command to list credentials for a Data Box job:
+1. Use the [az databox job list-credentials]( /cli/azure/databox/job#az_databox_job_list_credentials) command to list credentials for a Data Box job:
```azurecli az databox job list-credentials --resource-group "databox-rg" --name "databoxdisk-job"
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/policy-reference.md
ms.devlang: na na Previously updated : 04/14/2021 Last updated : 04/21/2021
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
# Quickstart: Create an Azure Dedicated HSM by using the Azure CLI
-This article describes how to create and manage an Azure Dedicated HSM by using the [az dedicated-hsm](/cli/azure/ext/hardware-security-modules/dedicated-hsm) Azure CLI extension.
+This article describes how to create and manage an Azure Dedicated HSM by using the [az dedicated-hsm](/cli/azure/dedicated-hsm) Azure CLI extension.
## Prerequisites
az group create --name myRG --location westus
## Create a dedicated HSM
-To create a dedicated HSM, use the [az dedicated-hsm create](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_create) command. The following example provisions a dedicated HSM named `hsm1` in the `westus` region, `myRG` resource group, and specified subscription, virtual network, and subnet. The required parameters are `name`, `location`, and `resource group`.
+To create a dedicated HSM, use the [az dedicated-hsm create](/cli/azure/dedicated-hsm#az_dedicated_hsm_create) command. The following example provisions a dedicated HSM named `hsm1` in the `westus` region, `myRG` resource group, and specified subscription, virtual network, and subnet. The required parameters are `name`, `location`, and `resource group`.
```azurecli-interactive az dedicated-hsm create \
The deployment takes approximately 25 to 30 minutes to complete.
## Get a dedicated HSM
-To get a current dedicated HSM, run the [az dedicated-hsm show](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_show) command. The following example gets the `hsm1` dedicated HSM in the `myRG` resource group.
+To get a current dedicated HSM, run the [az dedicated-hsm show](/cli/azure/dedicated-hsm#az_dedicated_hsm_show) command. The following example gets the `hsm1` dedicated HSM in the `myRG` resource group.
```azurecli-interactive az dedicated-hsm show --resource-group myRG --name hsm1
az dedicated-hsm show --resource-group myRG --name hsm1
## Update a dedicated HSM
-Use the [az dedicated-hsm update](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_update) command to update a dedicated HSM. The following example updates the `hsm1` dedicated HSM in the `myRG` resource group, and its tags:
+Use the [az dedicated-hsm update](/cli/azure/dedicated-hsm#az_dedicated_hsm_update) command to update a dedicated HSM. The following example updates the `hsm1` dedicated HSM in the `myRG` resource group, and its tags:
```azurecli-interactive az dedicated-hsm update --resource-group myRG --name hsm1 --tags resourceType="hsm" Environment="prod" Slice="A"
az dedicated-hsm update --resource-group myRG --name hsm1 --tags resourceType=
## List dedicated HSMs
-Run the [az dedicated-hsm list](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_list) command to get information about current dedicated HSMs. The following example lists the dedicated HSMs in the `myRG` resource group:
+Run the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az_dedicated_hsm_list) command to get information about current dedicated HSMs. The following example lists the dedicated HSMs in the `myRG` resource group:
```azurecli-interactive az dedicated-hsm list --resource-group myRG
az dedicated-hsm list --resource-group myRG
## Remove a dedicated HSM
-To remove a dedicated HSM, use the [az dedicated-hsm delete](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_delete) command. The following example deletes the `hsm1` dedicated HSM from the `myRG` resource group:
+To remove a dedicated HSM, use the [az dedicated-hsm delete](/cli/azure/dedicated-hsm#az_dedicated_hsm_delete) command. The following example deletes the `hsm1` dedicated HSM from the `myRG` resource group:
```azurecli-interactive az dedicated-hsm delete --resource-group myRG --name hsm1
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
az network vnet subnet create \
After you configure your network, use these Azure CLI commands to provision your HSMs.
-1. Use the [az dedicated-hsm create](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_create) command to provision the first HSM. The HSM is named hsm1. Substitute your subscription:
+1. Use the [az dedicated-hsm create](/cli/azure/dedicated-hsm#az_dedicated_hsm_create) command to provision the first HSM. The HSM is named hsm1. Substitute your subscription:
```azurecli az dedicated-hsm create --location westus --name hsm1 --resource-group myRG --network-profile-network-interfaces \
After you configure your network, use these Azure CLI commands to provision your
This deployment should take approximately 25 to 30 minutes to complete with the bulk of that time being the HSM devices.
-1. To see a current HSM, run the [az dedicated-hsm show](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_show) command:
+1. To see a current HSM, run the [az dedicated-hsm show](/cli/azure/dedicated-hsm#az_dedicated_hsm_show) command:
```azurecli az dedicated-hsm show --resource-group myRG --name hsm1
After you configure your network, use these Azure CLI commands to provision your
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/MyHSM-vnet/subnets/MyHSM-vnet ```
-1. Run the [az dedicated-hsm list](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_list) command to view details about your current HSMs:
+1. Run the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az_dedicated_hsm_list) command to view details about your current HSMs:
```azurecli az dedicated-hsm list --resource-group myRG ```
-There are some other commands that might be useful. Use the [az dedicated-hsm update](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_update) command to update an HSM:
+There are some other commands that might be useful. Use the [az dedicated-hsm update](/cli/azure/dedicated-hsm#az_dedicated_hsm_update) command to update an HSM:
```azurecli az dedicated-hsm update --resource-group myRG --name hsm1 ```
-To delete an HSM, use the [az dedicated-hsm delete](/cli/azure/ext/hardware-security-modules/dedicated-hsm#ext_hardware_security_modules_az_dedicated_hsm_delete) command:
+To delete an HSM, use the [az dedicated-hsm delete](/cli/azure/dedicated-hsm#az_dedicated_hsm_delete) command:
```azurecli az dedicated-hsm delete --resource-group myRG --name hsm1
defender-for-iot How To Deploy Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-edge.md
Title: Deploy IoT Edge Defender-IoT-micro-agent
+ Title: Deploy IoT Edge security module
description: Learn about how to deploy a Defender for IoT security agent on IoT Edge. Previously updated : 1/30/2020 Last updated : 04/21/2021
-# Deploy a Defender-IoT-micro-agent on your IoT Edge device
+# Deploy a security module on your IoT Edge device
**Defender for IoT** module provides a comprehensive security solution for your IoT Edge devices.
-The Defender-IoT-micro-agent collects, aggregates, and analyzes raw security data from your Operating System and Container system into actionable security recommendations and alerts.
-To learn more, see [Defender-IoT-micro-agent for IoT Edge](security-edge-architecture.md).
+The security module collects, aggregates, and analyzes raw security data from your Operating System and Container system into actionable security recommendations and alerts.
+To learn more, see [Security module for IoT Edge](security-edge-architecture.md).
-In this article, you'll learn how to deploy a Defender-IoT-micro-agent on your IoT Edge device.
+In this article, you'll learn how to deploy a security module on your IoT Edge device.
-## Deploy Defender-IoT-micro-agent
+## Deploy security module
-Use the following steps to deploy a Defender for IoT Defender-IoT-micro-agent for IoT Edge.
+Use the following steps to deploy a Defender for IoT security module for IoT Edge.
### Prerequisites
Complete each step to complete your IoT Edge deployment for Defender for IoT.
## Diagnostic steps
-If you encounter an issue, container logs are the best way to learn about the state of an IoT Edge Defender-IoT-micro-agent device. Use the commands and tools in this section to gather information.
+If you encounter an issue, container logs are the best way to learn about the state of an IoT Edge security module device. Use the commands and tools in this section to gather information.
### Verify the required containers are installed and functioning as expected
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
Title: Defender for IoT installation description: Learn how to install a sensor and the on-premises management console for Azure Defender for IoT. Previously updated : 12/2/2020 Last updated : 4/20/2021
To install:
1. Select **SENSOR-RELEASE-\<version\> Enterprise**.
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows version selection.":::
+ :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select your sensor version and enterprise type.":::
-1. Define the appliance profile and network properties:
+1. Define the appliance profile, and network properties:
- :::image type="content" source="media/tutorial-install-components/appliance-profile-screen-v2.png" alt-text="Screenshot that shows the appliance profile.":::
+ :::image type="content" source="media/tutorial-install-components/appliance-profile-screen-v2.png" alt-text="Screenshot that shows the appliance profile, and network properties.":::
| Parameter | Configuration |
|--|--|
To install the software:
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot of the screen for selecting a version.":::
-1. In the Installation Wizard, define the appliance profile and network properties:
+1. In the Installation Wizard define the hardware profile and network properties:
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
To install:
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows selecting the version.":::
-1. In the Installation Wizard, define the appliance profile and network properties.
+1. In the Installation Wizard define the appliance profile and network properties.
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
To install:
:::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to the management console.":::
+## On-premises management console installation
+
+Before installing the software on the appliance, you need to adjust the appliance's BIOS configuration:
+
+### BIOS configuration
+
+To configure the BIOS for your appliance:
+
+1. [Enable remote access and update the password](#enable-remote-access-and-update-the-password).
+
+1. [Configure the BIOS](#configure-the-hpe-bios).
+
+### Software installation
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
+
+To install the software:
+
+1. Select your preferred language for the installation process.
+
+ :::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Select your preferred language for the installation process.":::
+
+1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**.
+
+ :::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Select your version.":::
+
+1. In the Installation Wizard, define the network properties:
+
+ :::image type="content" source="media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
+
+ | Parameter | Configuration |
+ |--|--|
+ | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br /> or <br />**possible value** |
+ | **configure management network IP address:** | **IP address provided by the customer** |
+ | **configure subnet mask:** | **IP address provided by the customer** |
+ | **configure DNS:** | **IP address provided by the customer** |
+ | **configure default gateway IP address:** | **IP address provided by the customer** |
+
+1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile, and network properties:
+
+ :::image type="content" source="media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
+
+ | Parameter | Configuration |
+ |--|--|
+ | **configure sensor monitoring interface (Optional):** | **eth1**, or **possible value** |
+ | **configure an IP address for the sensor monitoring interface:** | **IP address provided by the customer** |
+ | **configure a subnet mask for the sensor monitoring interface:** | **IP address provided by the customer** |
+
+1. Accept the settings and continue by typing `Y`.
+
+1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
+
+ :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they will not be presented again.":::
+
+   Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
+
+1. Select **Enter** to continue.
+
+For information on how to find the physical port on your appliance, see [Find your port](#find-your-port).
+
+### Add a secondary NIC
+
+You can enhance the security of your on-premises management console by adding a secondary NIC. With a secondary NIC, one NIC is dedicated to your users, while the other supports the configuration of a gateway for routed networks. The second NIC is dedicated to all attached sensors within an IP address range.
+
+Both NICs have the user interface (UI) enabled. When routing is not necessary, all of the features that are supported by the UI will be available on the secondary NIC. High Availability will run on the secondary NIC.
+
+If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
+
+If you have already configured your on-premises management console and would like to add a secondary NIC, use the following steps:
+
+1. Use the network reconfigure command:
+
+ ```bash
+ sudo cyberx-management-network-reconfigure
+ ```
+
+1. Enter responses to the following questions:
+
+ :::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Enter the following answers to configure your appliance.":::
+
+ | Parameters | Response to enter |
+ |--|--|
+ | **Management Network IP address** | `N` |
+ | **Subnet mask** | `N` |
+ | **DNS** | `N` |
+ | **Default gateway IP Address** | `N` |
+ | **Sensor monitoring interface (Optional. Applicable when sensors are on a different network segment. For more information, see the Installation instructions)**| `Y`, **select a possible value** |
+ | **An IP address for the sensor monitoring interface (accessible by the sensors)** | `Y`, **IP address provided by the customer**|
+ | **A subnet mask for the sensor monitoring interface (accessible by the sensors)** | `Y`, **IP address provided by the customer** |
+ | **Hostname** | **provided by the customer** |
+
+1. Review all choices, and enter `Y` to accept the changes. The system reboots.
+
+### Find your port
+
+If you are having trouble locating the physical port on your device, you can use the following command:
+
+```bash
+sudo ethtool -p <port value> <time-in-seconds>
+```
+
+This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` makes port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
+ ## Virtual appliance: On-premises management console installation
+
+The on-premises management console VM supports the following architectures:
To create a virtual machine by using Hyper-V:
### Software installation (ESXi and Hyper-V)
-Starting the virtual machine will start the installation process from the ISO image. To enhance security, you can create a second network interface on your on-premises management console. One network interface is dedicated for your users, and can support the configuration of a gateway for routed networks. The second network interface is dedicated to the all attached sensors within an IP address range.
-
-Both network interfaces have the user interface (UI) enabled, and all of the features that are supported by the UI will be available on the secondary network interface when routing in not needed. High Availability will run on the secondary network interface.
-
-If you choose not to deploy a secondary network interface, all of the features will be available through the primary network interface.
+Starting the virtual machine will start the installation process from the ISO image.
To install the software:
To install the software:
1. Define the network interface for the sensor management network: interface, IP, subnet, DNS server, and default gateway.
-1. (Optional) Add a second network interface to your on-premises management console.
+1. Sign-in credentials are automatically generated. Save the username and passwords; you'll need these credentials to access the platform the first time you use it.
- 1. `Please type sensor monitoring interface (Optional. Applicable when sensors are on a different network segment. For more information see the Installation instructions): <name of interface>`
-
- 1. `Please type an IP address for the sensor monitoring interface (accessible by the sensors): <ip address>`
-
- 1. `Please type a subnet mask for the sensor monitoring interface (accessible by the sensors): <subnet>`
-
-1. Sign-in credentials are automatically generated and presented. Keep these credentials in a safe place, because they're required for sign-in and administration.
-
- | Username | Description |
- |--|--|
- | Support | The administrative user for user management. |
- | CyberX | The equivalent of root for accessing the appliance. |
-
-1. The appliance restarts.
+ The appliance will then reboot.
1. Access the management console via the IP address previously configured: `<https://ip_address>`.
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-work-with-defender-for-iot-apis.md
Array of JSON objects that represent alerts.
| **engine** | String | No | Protocol Violation, Policy Violation, Malware, Anomaly, or Operational |
| **sourceDevice** | Numeric | Yes | Device ID |
| **destinationDevice** | Numeric | Yes | Device ID |
-| **sourceDeviceAddress** | Numeric | Yes | IP, MAC, Null |
-| **destinationDeviceAddress** | Numeric | Yes | IP, MAC, Null |
+| **sourceDeviceAddress** | Numeric | Yes | IP, MAC |
+| **destinationDeviceAddress** | Numeric | Yes | IP, MAC |
| **remediationSteps** | String | Yes | Remediation steps described in alert |
| **additionalInformation** | Additional information object | Yes | - |
Use this API to retrieve all or filtered alerts from an on-premises management c
| **engine** | String | No | Protocol Violation, Policy Violation, Malware, Anomaly, or Operational |
| **sourceDevice** | Numeric | Yes | Device ID |
| **destinationDevice** | Numeric | Yes | Device ID |
-| **sourceDeviceAddress** | Numeric | Yes | IP, MAC, Null |
-| **destinationDeviceAddress** | Numeric | Yes | IP, MAC, Null |
+| **sourceDeviceAddress** | Numeric | Yes | IP, MAC |
+| **destinationDeviceAddress** | Numeric | Yes | IP, MAC |
| **remediationSteps** | String | Yes | Remediation steps shown in alert|
-| **sensorName** | String | Yes | Name of sensor defined by user in the console|
-|**zoneName** | String | Yes | Name of zone associated with sensor in the console|
-| **siteName** | String | Yes | Name of site associated with sensor in the console |
+| **sensorName** | String | Yes | Name of sensor defined by user |
+|**zoneName** | String | Yes | Name of zone associated with sensor|
+| **siteName** | String | Yes | Name of site associated with sensor |
| **additionalInformation** | Additional information object | Yes | - |

Note that /api/v2/ is needed for the following information:
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
+
+ Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS"
+
+description: Learn to migrate an on-premises MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.
+ Last updated : 04/11/2021
+# Migrate MySQL to Azure Database for MySQL offline with PowerShell & Azure Database Migration Service
+
+In this article, you migrate a MySQL database restored to an on-premises instance to Azure Database for MySQL by using the offline migration capability of Azure Database Migration Service through Microsoft Azure PowerShell. The article documents a collection of PowerShell scripts that can be executed in sequence to perform an offline migration of a MySQL database to Azure.
+
+> [!NOTE]
+> Currently it is not possible to run a complete database migration using the Az.DataMigration module. In the meantime, the sample PowerShell script is provided "as-is"; it uses the [DMS Rest API](https://docs.microsoft.com/rest/api/datamigration/tasks/get) and allows you to automate migration. This script will be modified or deprecated once official support is added in the Az.DataMigration module and Azure CLI.
+
+> [!IMPORTANT]
+> The "MySQL to Azure Database for MySQL" online migration scenario is being replaced with a parallelized, highly performant offline migration scenario from June 1, 2021. For online migrations, you can use this new offering together with [data-in replication](https://docs.microsoft.com/azure/mysql/concepts-data-in-replication). Alternatively, use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with data-in replication for online migrations.
+
+The article helps automate a scenario where the source and target database names can be the same or different, and where, as part of migration, all or only some of the tables in the target database need to be migrated, sharing the same names and table structures. Although the article assumes the source is a MySQL database instance and the target is Azure Database for MySQL, you can use it to migrate from one Azure Database for MySQL instance to another just by changing the source server name and credentials. Migration from lower-version MySQL servers (v5.6 and above) to higher versions is also supported.
+In this article, you learn how to:
+> [!div class="checklist"]
+>
+> * Migrate database schema.
+> * Create a resource group.
+> * Create an instance of the Azure Database Migration Service.
+> * Create a migration project in an Azure Database Migration Service instance.
+> * Configure the migration project to use the offline migration capability for MySQL.
+> * Run the migration.
+
+## Prerequisites
+
+To complete these steps, you need:
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An on-premises MySQL database with version 5.6 or above. If you don't have one, download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or above.
+* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application. The Azure Database for MySQL version should be equal to or higher than the on-premises MySQL version. For example, MySQL 5.7 can migrate to Azure Database for MySQL 5.7 or be upgraded to 8.
+* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
+
+ > [!NOTE]
+    > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the *Microsoft.Sql* service [endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned. This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
+
+* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Open your Windows firewall to allow connections from Virtual Network for Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306.
+* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow connections from Virtual Network for Azure Database Migration Service to access the source database(s) for migration.
+* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) or [configure VNET service endpoints](../mysql/howto-manage-vnet-using-portal.md) for target Azure Database for MySQL to allow Virtual Network for Azure Database Migration Service access to the target databases.
+* The source MySQL server must be a supported MySQL community edition. To determine the version of your MySQL instance, in the MySQL utility or MySQL Workbench, run the following command:
+
+ ```
+ SELECT @@version;
+ ```
+
+* Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html)
+* The user must have the privileges to read data on the source database.
+* The guide uses PowerShell v7.1 with PSEdition Core, which can be installed as per the [installation guide](/powershell/scripting/install/installing-powershell?view=powershell-7.1&preserve-view=true)
+* Download and install the following modules from the PowerShell Gallery by using the [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module); be sure to open the PowerShell command window using Run as Administrator:
+ * Az.Resources
+ * Az.Network
+ * Az.DataMigration
+
+```powershell
+Install-Module Az.Resources
+Install-Module Az.Network
+Install-Module Az.DataMigration
+Import-Module Az.Resources
+Import-Module Az.Network
+Import-Module Az.DataMigration
+```
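+
+Since Azure Database for MySQL supports only InnoDB tables, it helps to check the source for MyISAM tables before migrating. The following query is a minimal sketch (the database name `migtestdb` is a placeholder; substitute your own) that lists MyISAM tables and generates the conversion statements:
+
+```sql
+-- List MyISAM tables in the source database and build ALTER statements
+-- ('migtestdb' is a placeholder database name).
+SELECT
+    TABLE_NAME,
+    CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '` ENGINE=InnoDB;') AS ConvertQuery
+FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_SCHEMA = 'migtestdb'
+  AND ENGINE = 'MyISAM';
+```
+
+Run the generated `ConvertQuery` statements during a maintenance window; converting large MyISAM tables rewrites table data and can take time.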
+
+## Migrate database schema
+
+To transfer all the database objects like table schemas, indexes, and stored procedures, you need to extract the schema from the source database and apply it to the target database. To extract the schema, you can use mysqldump with the `--no-data` parameter. For this, you need a machine that can connect to both the source MySQL database and the target Azure Database for MySQL.
+
+To export the schema using mysqldump, run the following command:
+
+```
+mysqldump -h [servername] -u [username] -p[password] --databases [db name] --no-data > [schema file path]
+```
+
+For example:
+
+```
+mysqldump -h 10.10.123.123 -u root -p --databases migtestdb --no-data > d:\migtestdb.sql
+```
+
+To import schema to target Azure Database for MySQL, run the following command:
+
+```
+mysql.exe -h [servername] -u [username] -p[password] [database] < [schema file path]
+```
+
+For example:
+
+```
+mysql.exe -h mysqlsstrgt.mysql.database.azure.com -u docadmin@mysqlsstrgt -p migtestdb < d:\migtestdb.sql
+```
+
+If your schema has foreign keys, the migration task handles the parallel data load during migration. There's no need to drop foreign keys during schema migration.
+
+If the database has triggers, they'll enforce data integrity in the target before the full data load from the source completes. We recommend that you disable triggers on all tables in the target during migration, and then re-enable them after the migration is done.
+
+Run the following script in MySQL Workbench against the target database to extract the drop-trigger and add-trigger scripts.
+
+```sql
+SELECT
+ SchemaName,
+ GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery,
+ Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery
+FROM
+(
+SELECT
+ TRIGGER_SCHEMA as SchemaName,
+ Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
+ Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
+ '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
+ ACTION_STATEMENT) as AddQuery
+FROM
+ INFORMATION_SCHEMA.TRIGGERS
+ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC
+) AS Queries
+GROUP BY SchemaName
+```
+
+Run the generated drop-trigger query (the DropQuery column) from the result to drop the triggers in the target database. Save the add-trigger query to run after data migration is complete.
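+
+For illustration, assuming a trigger named `trg_orders_bi` on an `orders` table (both placeholder names), the queries generated by the script above look roughly like this:
+
+```sql
+-- DropQuery column: run in the target before data migration
+DROP TRIGGER `trg_orders_bi`;
+
+-- AddQuery column: save and run after data migration completes
+DELIMITER $$
+CREATE TRIGGER `trg_orders_bi` BEFORE INSERT
+ON `orders`
+FOR EACH ROW
+BEGIN
+  SET NEW.created_at = NOW();
+END$$
+DELIMITER ;
+```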
+
+## Log in to your Microsoft Azure subscription
+
+Use the [Connect-AzAccount PowerShell command](/powershell/module/az.accounts/connect-azaccount) to sign in to your Azure subscription by using PowerShell, following the directions in the article [Log in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+The following script sets the default subscription for the PowerShell session after sign-in and creates a helper logging function for formatted console logs.
+
+```powershell
+[string] $SubscriptionName = "mySubscription"
+$ErrorActionPreference = "Stop";
+
+Connect-AzAccount
+Set-AzContext -Subscription $SubscriptionName
+$global:currentSubscriptionId = (Get-AzContext).Subscription.Id;
+
+function LogMessage([string] $Message, [bool] $IsProcessing = $false) {
+ if ($IsProcessing) {
+ Write-Host "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss"): $Message" -ForegroundColor Yellow
+ }
+ else {
+ Write-Host "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss"): $Message" -ForegroundColor Green
+ }
+}
+```
+
+## Register the Microsoft.DataMigration resource provider
+
+The resource provider needs to be registered on each Azure subscription only once. Without the registration, you won't be able to create an instance of **Azure Database Migration Service**.
+
+Register the resource provider by using the [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) command. The following script registers the resource provider required for **Azure Database Migration Service**.
+
+```powershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.DataMigration
+```
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed. Create a resource group before you create any DMS resources.
+
+Create a resource group by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
+
+The following example creates a resource group named *myResourceGroup* in the *West US 2* region under the default subscription *mySubscription*.
+
+```powershell
+# Get the details of resource group
+[string] $Location = "westus2"
+[string] $ResourceGroupName = "myResourceGroup"
+
+$resourceGroup = Get-AzResourceGroup -Name $ResourceGroupName
+if (-not($resourceGroup)) {
+ LogMessage -Message "Creating resource group $ResourceGroupName..." -IsProcessing $true
+ $resourceGroup = New-AzResourceGroup -Name $ResourceGroupName -Location $Location
+ LogMessage -Message "Created resource group - $($resourceGroup.ResourceId)."
+}
+else { LogMessage -Message "Resource group $ResourceGroupName exists." }
+```
+
+## Create an instance of Azure Database Migration Service
+
+You can create a new instance of Azure Database Migration Service by using the [New-AzDataMigrationService](/powershell/module/az.datamigration/new-azdatamigrationservice) command. This command expects the following required parameters:
+* *Azure Resource Group name*. You can use the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command to create an Azure resource group as previously shown and provide its name as a parameter.
+* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
+* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
+* *Sku*. This parameter corresponds to the DMS SKU name. The currently supported SKU names are *Standard_1vCore*, *Standard_2vCores*, *Standard_4vCores*, and *Premium_4vCores*.
+* *Virtual Subnet Identifier*. You can use the [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig) command to get the information of a subnet.
+
+The following script expects that the *myVirtualNetwork* virtual network exists with a subnet named *default*, and then creates a Database Migration Service instance named *myDmService* under the resource group created earlier and in the same region.
+
+```powershell
+# Get a reference to the DMS service - Create if not exists
+[string] $VirtualNetworkName = "myVirtualNetwork"
+[string] $SubnetName = "default"
+[string] $ServiceName = "myDmService"
+
+$dmsServiceResourceId = "/subscriptions/$($global:currentSubscriptionId)/resourceGroups/$ResourceGroupName/providers/Microsoft.DataMigration/services/$ServiceName"
+$dmsService = Get-AzResource -ResourceId $dmsServiceResourceId -ErrorAction SilentlyContinue
+
+# Create Azure DMS service if not existing
+# Possible values for SKU currently are Standard_1vCore,Standard_2vCores,Standard_4vCores,Premium_4vCores
+if (-not($dmsService)) {
+ $virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VirtualNetworkName
+    if (-not ($virtualNetwork)) { throw "ERROR: Virtual Network $VirtualNetworkName does not exist" }
+
+    $subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $SubnetName
+    if (-not ($subnet)) { throw "ERROR: Virtual Network $VirtualNetworkName does not contain subnet $SubnetName" }
+
+ LogMessage -Message "Creating Azure Data Migration Service $ServiceName..." -IsProcessing $true
+ $dmsService = New-AzDataMigrationService `
+ -ResourceGroupName $ResourceGroupName `
+ -Name $ServiceName `
+ -Location $resourceGroup.Location `
+ -Sku Premium_4vCores `
+        -VirtualSubnetId $subnet.Id
+
+ $dmsService = Get-AzResource -ResourceId $dmsServiceResourceId
+ LogMessage -Message "Created Azure Data Migration Service - $($dmsService.ResourceId)."
+}
+else { LogMessage -Message "Azure Data Migration Service $ServiceName exists." }
+```
+
+## Create a migration project
+
+After creating an Azure Database Migration Service instance, you will create a migration project. A migration project specifies the type of migration that needs to be done.
+
+The following script creates a migration project named *myfirstmysqlofflineproject* for offline migration from MySQL to Azure Database for MySQL under the Database Migration Service instance created earlier and in the same region.
+
+```powershell
+# Get a reference to the DMS project - Create if not exists
+[string] $ProjectName = "myfirstmysqlofflineproject"
+
+$dmsProjectResourceId = "/subscriptions/$($global:currentSubscriptionId)/resourceGroups/$($dmsService.ResourceGroupName)/providers/Microsoft.DataMigration/services/$($dmsService.Name)/projects/$projectName"
+$dmsProject = Get-AzResource -ResourceId $dmsProjectResourceId -ErrorAction SilentlyContinue
+
+# Create Azure DMS Project if not existing
+if (-not($dmsProject)) {
+ LogMessage -Message "Creating Azure DMS project $projectName for MySQL migration ..." -IsProcessing $true
+
+ $newProjectProperties = @{"sourcePlatform" = "MySQL"; "targetPlatform" = "AzureDbForMySQL" }
+ $dmsProject = New-AzResource `
+ -ApiVersion 2018-03-31-preview `
+ -Location $dmsService.Location `
+ -ResourceId $dmsProjectResourceId `
+ -Properties $newProjectProperties `
+ -Force
+
+ LogMessage -Message "Created Azure DMS project $projectName - $($dmsProject.ResourceId)."
+}
+else { LogMessage -Message "Azure DMS project $projectName exists." }
+```
+
+## Create a Database Connection Info object for the source and target connections
+
+After creating the migration project, you will create the database connection information. This connection information will be used to connect to the source and target servers during the migration process.
+
+The following script takes the server name, user name, and password for the source and target MySQL instances and creates the connection information objects. The script prompts you to enter the password for the source and target MySQL instances. For unattended scripts, the credentials can be fetched from Azure Key Vault.
+
+```powershell
+# Initialize the source and target database server connections
+[string] $SourceServerName = "13.66.136.192"
+[string] $SourceUserName = "docadmin@mysqlserver"
+[securestring] $SourcePassword = Read-Host "Enter MySQL Source Server Password" -AsSecureString
+
+[string] $TargetServerName = "migdocdevwus2mysqlsstrgt.mysql.database.azure.com"
+[string] $TargetUserName = "docadmin@migdocdevwus2mysqlsstrgt"
+[securestring] $TargetPassword = Read-Host "Enter MySQL Target Server Password" -AsSecureString
+
+function InitConnection(
+ [string] $ServerName,
+ [string] $UserName,
+ [securestring] $Password) {
+ $connectionInfo = @{
+ "dataSource" = "";
+ "serverName" = "";
+ "port" = 3306;
+ "userName" = "";
+ "password" = "";
+ "authentication" = "SqlAuthentication";
+ "encryptConnection" = $true;
+ "trustServerCertificate" = $true;
+ "additionalSettings" = "";
+ "type" = "MySqlConnectionInfo"
+ }
+
+ $connectionInfo.dataSource = $ServerName;
+ $connectionInfo.serverName = $ServerName;
+ $connectionInfo.userName = $UserName;
+ $connectionInfo.password = (ConvertFrom-SecureString -AsPlainText $password).ToString();
+ $connectionInfo;
+}
+
+# Initialize the source and target connections
+LogMessage -Message "Initializing source and target connection objects ..." -IsProcessing $true
+$sourceConnInfo = InitConnection `
+ $SourceServerName `
+ $SourceUserName `
+ $SourcePassword;
+
+$targetConnInfo = InitConnection `
+ $TargetServerName `
+ $TargetUserName `
+ $TargetPassword;
+
+LogMessage -Message "Source and target connection object initialization complete."
+```
+
+## Extract the list of table names from the target database
+
+The database table list can be extracted by using a migration task and the connection information. The table list is extracted from both the source database and the target database so that proper mapping and validation can be done.
+
+The following script takes the names of the source and target databases and then extracts the table list from the databases using the *GetUserTablesMySql* migration task.
+
+```powershell
+# Run scenario to get the tables from the target database to build
+# the migration table mapping
+[string] $TargetDatabaseName = "migtargetdb"
+[string] $SourceDatabaseName = "migsourcedb"
+
+function RunScenario([object] $MigrationService,
+ [object] $MigrationProject,
+ [string] $ScenarioTaskName,
+ [object] $TaskProperties,
+ [bool] $WaitForScenario = $true) {
+ # Check if the scenario task already exists, if so remove it
+ LogMessage -Message "Removing scenario if already exists..." -IsProcessing $true
+ Remove-AzDataMigrationTask `
+ -ResourceGroupName $MigrationService.ResourceGroupName `
+ -ServiceName $MigrationService.Name `
+ -ProjectName $MigrationProject.Name `
+ -TaskName $ScenarioTaskName `
+ -Force;
+
+ # Start the new scenario task using the provided properties
+ LogMessage -Message "Initializing scenario..." -IsProcessing $true
+ New-AzResource `
+ -ApiVersion 2018-03-31-preview `
+ -Location $MigrationService.Location `
+ -ResourceId "/subscriptions/$($global:currentSubscriptionId)/resourceGroups/$($MigrationService.ResourceGroupName)/providers/Microsoft.DataMigration/services/$($MigrationService.Name)/projects/$($MigrationProject.Name)/tasks/$($ScenarioTaskName)" `
+ -Properties $TaskProperties `
+ -Force | Out-Null;
+
+ LogMessage -Message "Waiting for $ScenarioTaskName scenario to complete..." -IsProcessing $true
+ if ($WaitForScenario) {
+ $progressCounter = 0;
+ do {
+ if ($null -ne $scenarioTask) {
+ Start-Sleep 10;
+ }
+
+ # Get calls can time out and will return a cancellation exception in that case
+ $scenarioTask = Get-AzDataMigrationTask `
+ -ResourceGroupName $MigrationService.ResourceGroupName `
+ -ServiceName $MigrationService.Name `
+ -ProjectName $MigrationProject.Name `
+ -TaskName $ScenarioTaskName `
+ -Expand `
+ -ErrorAction Ignore;
+
+ Write-Progress -Activity "Scenario Run $ScenarioTaskName (Marquee Progress Bar)" `
+ -Status $scenarioTask.ProjectTask.Properties.State `
+ -PercentComplete $progressCounter
+
+ $progressCounter += 10;
+ if ($progressCounter -gt 100) { $progressCounter = 10 }
+ }
+ while (($null -eq $scenarioTask) -or ($scenarioTask.ProjectTask.Properties.State -eq "Running") -or ($scenarioTask.ProjectTask.Properties.State -eq "Queued"))
+ }
+ Write-Progress -Activity "Scenario Run $ScenarioTaskName" `
+ -Status $scenarioTask.ProjectTask.Properties.State `
+ -Completed
+
+ # Now get it using REST APIs so we can expand the output
+ LogMessage -Message "Getting expanded task results ..." -IsProcessing $true
+ $psToken = (Get-AzAccessToken -ResourceUrl https://management.azure.com).Token;
+ $token = ConvertTo-SecureString -String $psToken -AsPlainText -Force;
+ $taskResource = Invoke-RestMethod `
+ -Method GET `
+ -Uri "https://management.azure.com$($scenarioTask.ProjectTask.Id)?api-version=2018-03-31-preview&`$expand=output" `
+ -ContentType "application/json" `
+ -Authentication Bearer `
+ -Token $token;
+
+ $taskResource.properties;
+}
+
+# create the get table task properties by initializing the connection and
+# database name
+$getTablesTaskProperties = @{
+ "input" = @{
+ "connectionInfo" = $null;
+ "selectedDatabases" = $null;
+ };
+ "taskType" = "GetUserTablesMySql";
+};
+
+LogMessage -Message "Running scenario to get the list of tables from the target database..." -IsProcessing $true
+$getTablesTaskProperties.input.connectionInfo = $targetConnInfo;
+$getTablesTaskProperties.input.selectedDatabases = @($TargetDatabaseName);
+# Create a name for the task
+$getTableTaskName = "$($TargetDatabaseName)GetUserTables"
+# Get the list of tables from the source
+$getTargetTablesTask = RunScenario -MigrationService $dmsService `
+ -MigrationProject $dmsProject `
+ -ScenarioTaskName $getTableTaskName `
+ -TaskProperties $getTablesTaskProperties;
+
+if (-not ($getTargetTablesTask)) { throw "ERROR: Could not get target database $TargetDatabaseName table information." }
+LogMessage -Message "List of tables from the target database acquired."
+
+LogMessage -Message "Running scenario to get the list of tables from the source database..." -IsProcessing $true
+$getTablesTaskProperties.input.connectionInfo = $sourceConnInfo;
+$getTablesTaskProperties.input.selectedDatabases = @($SourceDatabaseName);
+# Create a name for the task
+$getTableTaskName = "$($SourceDatabaseName)GetUserTables"
+# Get the list of tables from the source
+$getSourceTablesTask = RunScenario -MigrationService $dmsService `
+ -MigrationProject $dmsProject `
+ -ScenarioTaskName $getTableTaskName `
+ -TaskProperties $getTablesTaskProperties;
+
+if (-not ($getSourceTablesTask)) { throw "ERROR: Could not get source database $SourceDatabaseName table information." }
+LogMessage -Message "List of tables from the source database acquired."
+
+```
+
+## Build table mapping based on user configuration
+
+As part of configuring the migration task, you will create a mapping between the source and target tables. The mapping is at the table-name level, but it assumes that the table structure (column count, column names, data types, and so on) of the mapped tables is exactly the same.
+
+The following script creates a mapping based on the target and source table lists extracted in the previous step. For a partial data load, you can provide a list of tables to filter them. If no input is provided, all target tables are mapped. The script also checks whether a table with the same name exists in the source. If a table name doesn't exist in the source, the target table is ignored for migration.
+
+```powershell
+# Create the source to target table map
+# Optional table settings
+# DEFAULT: $IncludeTables = $null => include all tables for migration
+# DEFAULT: $ExcludeTables = $null => exclude no tables from migration
+# Exclude list has higher priority than include list
+# Array of qualified source table names which should be migrated
+[string[]] $IncludeTables = @("migsourcedb.coupons", "migsourcedb.daily_cash_sheets");
+[string[]] $ExcludeTables = $null;
+
+LogMessage -Message "Creating the table map based on the user input and database table information ..." `
+ -IsProcessing $true
+
+$targetTables = $getTargetTablesTask.Output.DatabasesToTables."$TargetDatabaseName";
+$sourceTables = $getSourceTablesTask.Output.DatabasesToTables."$SourceDatabaseName";
+$tableMap = New-Object 'system.collections.generic.dictionary[string,string]';
+
+$schemaPrefixLength = $($SourceDatabaseName + ".").Length;
+$tableMappingError = $false
+foreach ($srcTable in $sourceTables) {
+ # Removing the database name prefix from the table name so that comparison
+ # can be done in cases where database name given are different
+ $tableName = $srcTable.Name.Substring($schemaPrefixLength, `
+ $srcTable.Name.Length - $schemaPrefixLength)
+
+ # In case the table is part of exclusion list then ignore the table
+ if ($null -ne $ExcludeTables -and $ExcludeTables -contains $srcTable.Name) {
+ continue;
+ }
+
+ # Either the include list is null or the table is part of the include list then add it in the mapping
+ if ($null -eq $IncludeTables -or $IncludeTables -contains $srcTable.Name) {
+ # Check if the table exists in the target. If not then log TABLE MAPPING ERROR
+ if (-not ($targetTables | Where-Object { $_.name -ieq "$($TargetDatabaseName).$tableName" })) {
+ $tableMappingError = $true
+            Write-Host "TABLE MAPPING ERROR: $($TargetDatabaseName).$tableName does not exist in target." -ForegroundColor Red
+ continue;
+ }
+
+ $tableMap.Add("$($SourceDatabaseName).$tableName", "$($TargetDatabaseName).$tableName");
+ }
+}
+
+# In case of any table mapping errors identified, throw an error and stop the process
+if ($tableMappingError) { throw "ERROR: One or more table mapping errors were identified. Please see previous messages." }
+# In case no tables are in the mapping then throw error
+if ($tableMap.Count -le 0) { throw "ERROR: Could not create table mapping." }
+LogMessage -Message "Migration table mapping created for $($tableMap.Count) tables."
+```
+
+## Create and configure the migration task inputs
+
+After building the table mapping, you will create the inputs for a migration task of type *Migrate.MySql.AzureDbForMySql* and configure its properties.
+
+The following script creates the migration task and sets the connections, database names and table mapping.
+
+```powershell
+# Create and configure the migration scenario based on the connections
+# and the table mapping
+$offlineMigTaskProperties = @{
+ "input" = @{
+ "sourceConnectionInfo" = $null;
+ "targetConnectionInfo" = $null;
+ "selectedDatabases" = $null;
+ "optionalAgentSettings" = @{
+ "EnableCacheBatchesInMemory" = $true;
+ "DisableIncrementalRowStatusUpdates" = $true;
+ };
+ "startedOn" = $null;
+ };
+ "taskType" = "Migrate.MySql.AzureDbForMySql";
+};
+$offlineSelectedDatabase = @{
+ "name" = $null;
+ "targetDatabaseName" = $null;
+ "tableMap" = $null;
+};
+
+LogMessage -Message "Preparing migration scenario configuration ..." -IsProcessing $true
+
+# Select the database to be migrated
+$offlineSelectedDatabase.name = $SourceDatabaseName;
+$offlineSelectedDatabase.tableMap = New-Object PSObject -Property $tableMap;
+$offlineSelectedDatabase.targetDatabaseName = $TargetDatabaseName;
+
+# Set connection info and the database mapping
+$offlineMigTaskProperties.input.sourceConnectionInfo = $sourceConnInfo;
+$offlineMigTaskProperties.input.targetConnectionInfo = $targetConnInfo;
+$offlineMigTaskProperties.input.selectedDatabases = @($offlineSelectedDatabase);
+$offlineMigTaskProperties.input.startedOn = [System.DateTimeOffset]::UtcNow.ToString("O");
+```
+
+## Configure performance tuning parameters
+
+As part of the PowerShell module, a few optional parameters are available that can be tuned based on the environment. These parameters can be used to improve the performance of the migration task. All of these parameters are optional, and their default value is NULL.
+
+> [!NOTE]
+> The following performance configurations have shown increased throughput during migration on Premium SKU.
+> * WriteDataRangeBatchTaskCount = 12
+> * DelayProgressUpdatesInStorageInterval = 30 seconds
+> * ThrottleQueryTableDataRangeTaskAtBatchCount = 36
+
+The following script takes the user values of the parameters and sets the parameters in the migration task properties.
+
+```powershell
+# Setting optional parameters for fine-tuning the data transfer rate during migration
+# DEFAULT values for all the configurations is $null
+LogMessage -Message "Adding optional migration performance tuning configuration ..." -IsProcessing $true
+# Partitioning settings
+# Optional setting that configures the maximum number of parallel reads on tables located on the source database.
+[object] $DesiredRangesCount = 4
+# Optional setting that configures the size of the largest batch that will be committed to the target server.
+[object] $MaxBatchSizeKb = 4096
+# Optional setting that configures the minimum number of rows in each batch written to the target.
+[object] $MinBatchRows = $null
+# Task count settings
+# Optional setting that configures the number of databases that will be prepared for migration in parallel.
+[object] $PrepareDatabaseForBulkImportTaskCount = $null
+# Optional setting that configures the number of tables that will be prepared for migration in parallel.
+[object] $PrepareTableForBulkImportTaskCount = $null
+# Optional setting that configures the number of threads available to read ranges on the source.
+[object] $QueryTableDataRangeTaskCount = 8
+# Optional setting that configures the number of threads available to write batches to the target.
+[object] $WriteDataRangeBatchTaskCount = 12
+# Batch cache settings
+# Optional setting that configures how much memory will be used to cache batches in memory before reads on the source are throttled.
+[object] $MaxBatchCacheSizeMb = $null
+# Optional setting that configures the amount of available memory at which point reads on the source will be throttled.
+[object] $ThrottleQueryTableDataRangeTaskAtAvailableMemoryMb = $null
+# Optional setting that configures the number of batches cached in memory that will trigger read throttling on the source.
+[object] $ThrottleQueryTableDataRangeTaskAtBatchCount = 36
+# Performance settings
+# Optional setting that configures the delay between updates of result objects in Azure Table Storage.
+[object] $DelayProgressUpdatesInStorageInterval = "00:00:30"
+
+function AddOptionalSetting($optionalAgentSettings, $settingName, $settingValue) {
+ # If no value specified for the setting, don't bother adding it to the input
+ if ($null -eq $settingValue) {
+ return;
+ }
+
+ # Add a new property to the JSON object to capture the setting which will be customized
+ $optionalAgentSettings | add-member -MemberType NoteProperty -Name $settingName -Value $settingValue
+}
+
+# Set any optional settings in the input based on parameters to this cmdlet
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "DesiredRangesCount" $DesiredRangesCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "MaxBatchSizeKb" $MaxBatchSizeKb;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "MinBatchRows" $MinBatchRows;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "PrepareDatabaseForBulkImportTaskCount" $PrepareDatabaseForBulkImportTaskCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "PrepareTableForBulkImportTaskCount" $PrepareTableForBulkImportTaskCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "QueryTableDataRangeTaskCount" $QueryTableDataRangeTaskCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "WriteDataRangeBatchTaskCount" $WriteDataRangeBatchTaskCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "MaxBatchCacheSizeMb" $MaxBatchCacheSizeMb;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "ThrottleQueryTableDataRangeTaskAtAvailableMemoryMb" $ThrottleQueryTableDataRangeTaskAtAvailableMemoryMb;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "ThrottleQueryTableDataRangeTaskAtBatchCount" $ThrottleQueryTableDataRangeTaskAtBatchCount;
+AddOptionalSetting $offlineMigTaskProperties.input.optionalAgentSettings "DelayProgressUpdatesInStorageInterval" $DelayProgressUpdatesInStorageInterval;
+```
+
+## Create and run the migration task
+
+After you configure the inputs for the task, the task is created and executed on the agent.
+
+The following script invokes the configured migration task and waits for it to complete.
+
+```powershell
+# Running the migration scenario
+[string] $TaskName = "mysqlofflinemigrate"
+
+LogMessage -Message "Running data migration scenario ..." -IsProcessing $true
+$summary = @{
+ "SourceServer" = $SourceServerName;
+ "SourceDatabase" = $SourceDatabaseName;
+ "TargetServer" = $TargetServerName;
+ "TargetDatabase" = $TargetDatabaseName;
+ "TableCount" = $tableMap.Count;
+ "StartedOn" = $offlineMigTaskProperties.input.startedOn;
+}
+
+Write-Host "Job Summary:" -ForegroundColor Yellow
+Write-Host $(ConvertTo-Json $summary) -ForegroundColor Yellow
+
+$migrationResult = RunScenario -MigrationService $dmsService `
+ -MigrationProject $dmsProject `
+ -ScenarioTaskName $TaskName `
+ -TaskProperties $offlineMigTaskProperties
+
+LogMessage -Message "Migration completed with status - $($migrationResult.state)"
+#Checking for any errors or warnings captured by the task during migration
+$dbLevelResult = $migrationResult.output | Where-Object { $_.resultType -eq "DatabaseLevelOutput" }
+$migrationLevelResult = $migrationResult.output | Where-Object { $_.resultType -eq "MigrationLevelOutput" }
+if ($dbLevelResult.exceptionsAndWarnings) {
+ Write-Host "Following database errors were captured: $($dbLevelResult.exceptionsAndWarnings)" -ForegroundColor Red
+}
+if ($migrationLevelResult.exceptionsAndWarnings) {
+ Write-Host "Following migration errors were captured: $($migrationLevelResult.exceptionsAndWarnings)" -ForegroundColor Red
+}
+if ($migrationResult.errors.details) {
+ Write-Host "Following task level migration errors were captured: $($migrationResult.errors.details)" -ForegroundColor Red
+}
+```
+
+## Delete the Database Migration Service
+
+The same Database Migration Service instance can be used for multiple migrations, so after it's created it can be reused. If you're not going to continue using the Database Migration Service, delete the service by using the [Remove-AzDataMigrationService](/powershell/module/az.datamigration/remove-azdatamigrationservice) command.
+
+The following script deletes the Azure Database Migration Service instance and its associated projects.
+
+```powershell
+Remove-AzDataMigrationService -ResourceId $($dmsService.ResourceId)
+```
+
+## Next steps
+
+* For information about known issues and limitations when performing migrations using DMS, see the article [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
+* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
+* For a tutorial about using DMS via the portal, see the article [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](./tutorial-mysql-azure-mysql-offline-portal.md).
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
Azure Database Migration Service is designed to support different migration scen
With Azure Database Migration Service, you can do an offline or an online migration. With *offline* migrations, application downtime begins at the same time that the migration starts. To limit downtime to the time required to cut over to the new environment when the migration completes, use an *online* migration. It's recommended to test an offline migration to determine whether the downtime is acceptable; if not, do an online migration.
+## Migration scenario status
+
+The status of migration scenarios supported by Azure Database Migration Service varies with time. Generally, scenarios are first released in **private preview**. After private preview, the scenario status changes to **public preview**. Azure Database Migration Service users can try out migration scenarios in public preview directly from the user interface. No sign-up is required. However, migration scenarios in public preview may not be available in all regions and may undergo additional changes before final release. After public preview, the scenario status changes to **general availability**. General availability (GA) is the final release status, and the functionality is complete and accessible to all users.
## Migration scenario support

The following tables show which migration scenarios are supported when using Azure Database Migration Service.
The following table shows Azure Database Migration Service support for offline m
| **Azure SQL VM** | SQL Server | ✔ | GA |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL** | MySQL | X | |
+| **Azure DB for MySQL** | MySQL | ✔ | |
| | RDS MySQL | X | |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X | |
| | RDS PostgreSQL | X | |
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
+
+ Title: "Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS"
+
+description: "Learn to perform an offline migration from MySQL on-premises to Azure Database for MySQL by using Azure Database Migration Service."
+Last updated: 04/11/2021
+# Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS
+
+You can use Azure Database Migration Service to perform a one-time, full-database migration from an on-premises MySQL instance to [Azure Database for MySQL](../mysql/index.yml) with high-speed data migration capability. In this tutorial, we migrate a sample database from an on-premises instance of MySQL 5.7 to Azure Database for MySQL (v5.7) by using an offline migration activity in Azure Database Migration Service. Although this article assumes the source to be a MySQL database instance and the target to be Azure Database for MySQL, you can also use it to migrate from one Azure Database for MySQL instance to another just by changing the source server name and credentials. Migration from lower-version MySQL servers (v5.6 and above) to higher versions is also supported.
+
+> [!IMPORTANT]
+> For online migrations, you can use this new offering together with [data-in replication](https://docs.microsoft.com/azure/mysql/concepts-data-in-replication). Alternatively, use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with data-in replication for online migrations.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Migrate database schema using mysqldump utility.
+> * Create an instance of Azure Database Migration Service.
+> * Create a migration project by using Azure Database Migration Service.
+> * Run the migration.
+> * Monitor the migration.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Have an on-premises MySQL database with version 5.7. If not, then download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.7.
+* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application. The Azure Database for MySQL version should be equal to or higher than the on-premises MySQL version. For example, MySQL 5.7 can migrate to Azure Database for MySQL 5.7 or be upgraded to MySQL 8.
+* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
+
+ > [!NOTE]
+ > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
+ >
+ > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Storage endpoint
+ > * Service bus endpoint
+ >
+ > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
+
+* Ensure that your virtual network Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more details on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Open your Windows firewall to allow connections from Virtual Network for Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306.
+* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow connections from Virtual Network for Azure Database Migration Service to access the source database(s) for migration.
+* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) or [configure VNET service endpoints](../mysql/howto-manage-vnet-using-portal.md) for target Azure Database for MySQL to allow Virtual Network for Azure Database Migration Service access to the target databases.
+* The source MySQL must run a supported MySQL community edition. To determine the version of your MySQL instance, run the following command in the mysql utility or MySQL Workbench:
+
+    ```sql
+    SELECT @@version;
+    ```
+
+* Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html).
+* The user must have the privileges to read data on the source database.
+
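+Because Azure Database for MySQL supports only InnoDB tables, you may want to find and convert any MyISAM tables in the source before migrating. The following is a minimal sketch; the database name `migtestdb` and table name `mytable` are illustrative placeholders, not values prescribed by this tutorial.
+
+```sql
+-- List tables in the source database that still use the MyISAM engine
+-- (the database name 'migtestdb' is a placeholder)
+SELECT TABLE_NAME
+FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_SCHEMA = 'migtestdb'
+  AND ENGINE = 'MyISAM';
+
+-- Convert one such table to InnoDB (the table name is a placeholder)
+ALTER TABLE migtestdb.mytable ENGINE = InnoDB;
+```
+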
+## Migrate database schema
+
+To transfer all the database objects like table schemas, indexes, and stored procedures, we need to extract the schema from the source database and apply it to the target database. To extract the schema, you can use mysqldump with the `--no-data` parameter. For this, you need a machine that can connect to both the source MySQL database and the target Azure Database for MySQL.
+
+To export the schema using mysqldump, run the following command:
+
+```
+mysqldump -h [servername] -u [username] -p[password] --databases [db name] --no-data > [schema file path]
+```
+
+For example:
+
+```
+mysqldump -h 10.10.123.123 -u root -p --databases migtestdb --no-data > d:\migtestdb.sql
+```
+
+To import schema to target Azure Database for MySQL, run the following command:
+
+```
+mysql.exe -h [servername] -u [username] -p[password] [database] < [schema file path]
+```
+
+For example:
+
+```
+mysql.exe -h mysqlsstrgt.mysql.database.azure.com -u docadmin@mysqlsstrgt -p migtestdb < d:\migtestdb.sql
+```
+
+If you have foreign keys in your schema, the parallel data load during migration will be handled by the migration task. There is no need to drop foreign keys during schema migration.
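+
+To review which foreign keys exist in the source before migrating, you can query `INFORMATION_SCHEMA`. This is a hedged sketch; the schema name `migtestdb` is a placeholder.
+
+```sql
+-- List foreign key constraints defined in the source database
+SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME,
+       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
+FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
+WHERE TABLE_SCHEMA = 'migtestdb'
+  AND REFERENCED_TABLE_NAME IS NOT NULL;
+```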
+
+If you have triggers in the database, they will enforce data integrity in the target ahead of the full data migration from the source. The recommendation is to disable triggers on all tables in the target during migration, and then enable them after migration is done.
+
+Execute the following script in MySQL Workbench on the target database to extract the drop trigger script and add trigger script.
+
+```sql
+SELECT
+ SchemaName,
+ GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery,
+ Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery
+FROM
+(
+SELECT
+ TRIGGER_SCHEMA as SchemaName,
+ Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
+ Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
+ '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
+ ACTION_STATEMENT) as AddQuery
+FROM
+ INFORMATION_SCHEMA.TRIGGERS
+ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC
+) AS Queries
+GROUP BY SchemaName
+```
+
+Run the generated drop trigger query (DropQuery column) from the result to drop triggers in the target database. Save the add trigger query (AddQuery column) to run after data migration is complete.
+
+## Register the Microsoft.DataMigration resource provider
+
+The resource provider needs to be registered on each Azure subscription only once. Without this registration, you can't create an instance of **Azure Database Migration Service**.
+
+1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
+
+ ![Show portal subscriptions](media/tutorial-mysql-to-azure-mysql-offline-portal/01-dms-portal-select-subscription.png)
+
+2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
+
+3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
+
+ ![Register resource provider](media/tutorial-mysql-to-azure-mysql-offline-portal/02-dms-portal-register-rp.png)
+
+## Create a Database Migration Service instance
+
+1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
+
+ ![Azure Marketplace](media/tutorial-mysql-to-azure-mysql-offline-portal/03-dms-portal-marketplace.png)
+
+2. On the **Azure Database Migration Service** screen, select **Create**.
+
+ ![Create Azure Database Migration Service instance](media/tutorial-mysql-to-azure-mysql-offline-portal/04-dms-portal-marketplace-create.png)
+
+3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
+
+4. Select a pricing tier and move to the networking screen. Offline migration capability is available in both the Standard and Premium pricing tiers.
+
+ For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+
+ ![Configure Azure Database Migration Service basic settings](media/tutorial-mysql-to-azure-mysql-offline-portal/05-dms-portal-create-basic.png)
+
+5. Select an existing virtual network from the list or provide the name of a new virtual network to be created. Move to the review + create screen. Optionally, you can add tags to the service using the tags screen.
+
+ The virtual network provides Azure Database Migration Service with access to the source MySQL server and the target Azure Database for MySQL instance.
+
+ ![Configure Azure Database Migration Service network settings](media/tutorial-mysql-to-azure-mysql-offline-portal/06-dms-portal-create-networking.png)
+
+ For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+6. Review the configurations and select **Create** to create the service.
+
+ ![Azure Database Migration Service create](media/tutorial-mysql-to-azure-mysql-offline-portal/07-dms-portal-create-submit.png)
+
+## Create a migration project
+
+After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ ![Locate all instances of Azure Database Migration Service](media/tutorial-mysql-to-azure-mysql-offline-portal/08-01-dms-portal-search-service.png)
+
+2. Select your migration service instance from the search results and select + **New Migration Project**.
+
+ ![Create a new migration project](media/tutorial-mysql-to-azure-mysql-offline-portal/08-02-dms-portal-new-project.png)
+
+3. On the **New migration project** screen, specify a name for the project. In the **Source server type** selection box, select **MySQL**. In the **Target server type** selection box, select **Azure Database For MySQL**, and in the **Migration activity type** selection box, select **Data migration \[preview\]**. Select **Create and run activity**.
+
+ ![Create Database Migration Service Project](media/tutorial-mysql-to-azure-mysql-offline-portal/09-dms-portal-project-mysql-create.png)
+
+ > [!NOTE]
+ > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
+
+## Configure migration project
+
+1. On the **Select source** screen, specify the connection details for the source MySQL instance, and select **Next : Select target>>**
+
+ ![Add source details screen](media/tutorial-mysql-to-azure-mysql-offline-portal/10-dms-portal-project-mysql-source.png)
+
+2. On the **Select target** screen, specify the connection details for the target Azure Database for MySQL instance, and select **Next : Select databases>>**
+
+ ![Add target details screen](media/tutorial-mysql-to-azure-mysql-offline-portal/11-dms-portal-project-mysql-target.png)
+
+3. On the **Select databases** screen, map the source and the target databases for migration, and select **Next : Configure migration settings>>**. You can select the **Make Source Server Readonly** option to make the source read-only, but be aware that this is a server-level setting. If selected, it sets the entire server to read-only, not just the selected databases.
+
+ If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
+ ![Select database details screen](media/tutorial-mysql-to-azure-mysql-offline-portal/12-dms-portal-project-mysql-select-db.png)
+
+ > [!NOTE]
+ > Though you can select multiple databases in this step, there are limits to how many databases can be migrated this way, and how fast, because each database shares compute. With the default configuration of the Premium SKU, each migration task attempts to migrate two tables in parallel. These tables could be from any of the selected databases. If this isn't fast enough, you can split database migration activities into separate migration tasks and scale across multiple services. Also, there is a limit of 10 instances of Azure Database Migration Service per subscription per region.
+ > For more granular control over migration throughput and parallelization, refer to the article [PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS](./migrate-mysql-to-azure-mysql-powershell.md).
+
+4. On the **Configure migration settings** screen, select the tables to include in the migration, and select **Next : Summary>>**. If the target tables contain any data, they aren't selected by default, but you can explicitly select them; they will be truncated before the migration starts.
+
+ ![Select tables screen](media/tutorial-mysql-to-azure-mysql-offline-portal/13-dms-portal-project-mysql-select-tbl.png)
+
+5. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity and review the summary to ensure that the source and target details match what you previously specified.
+
+ ![Migration project summary](media/tutorial-mysql-to-azure-mysql-offline-portal/14-dms-portal-project-mysql-activity-summary.png)
+
+6. Select **Start migration**. The migration activity window appears, and the **Status** of the activity is **Initializing**. The **Status** changes to **Running** when the table migrations start.
+
+ ![Running migration](media/tutorial-mysql-to-azure-mysql-offline-portal/15-dms-portal-project-mysql-running.png)
+
+## Monitor the migration
+
+1. On the migration activity screen, select **Refresh** to update the display and see the number of completed tables.
+
+2. You can select the database name on the activity screen to see the status of each table as it's migrated. Select **Refresh** to update the display.
+
+ ![Monitoring migration](media/tutorial-mysql-to-azure-mysql-offline-portal/16-dms-portal-project-mysql-monitor.png)
+
+## Complete the migration
+
+1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Complete**.
+
+ ![Complete migration](media/tutorial-mysql-to-azure-mysql-offline-portal/17-dms-portal-project-mysql-complete.png)
+
+## Post migration activities
+
+Migration cutover in an offline migration is an application-dependent process that is out of scope for this document, but the following post-migration activities are recommended:
+
+1. Create logins, roles and permissions as per the application requirements.
+2. Recreate all the triggers on the target database as extracted during the pre-migration step.
+3. Perform sanity testing of the application against the target database to certify the migration.
+
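+For step 1, a minimal sketch of recreating an application login and its grants is shown below. The user name `appuser`, host, and privilege list are illustrative assumptions, not values prescribed by this tutorial.
+
+```sql
+-- Hypothetical application login; replace the name, host, and password as required
+CREATE USER 'appuser'@'%' IDENTIFIED BY '<strong-password>';
+
+-- Grant only the privileges the application needs on the migrated database
+GRANT SELECT, INSERT, UPDATE, DELETE ON migtestdb.* TO 'appuser'@'%';
+FLUSH PRIVILEGES;
+```
+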
+## Clean up resources
+
+If you're not going to continue to use the Database Migration Service, then you can delete the service with the following steps:
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ ![Locate all instances of DMS](media/tutorial-mysql-to-azure-mysql-offline-portal/08-01-dms-portal-search-service.png)
+
+2. Select your migration service instance from the search results and select **Delete Service**.
+
+ ![Delete the migration service](media/tutorial-mysql-to-azure-mysql-offline-portal/18-dms-portal-delete.png)
+
+3. On the confirmation dialog, type the name of the service in the **TYPE THE DATABASE MIGRATION SERVICE NAME** text box and select **Delete**.
+
+ ![Confirm migration service delete](media/tutorial-mysql-to-azure-mysql-offline-portal/19-dms-portal-deleteconfirm.png)
+
+## Next steps
+
+* For information about known issues and limitations when performing migrations using DMS, see the article [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
+* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
+* For guidance about using DMS via PowerShell, see the article [PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS](./migrate-mysql-to-azure-mysql-powershell.md).
dms Tutorial Mysql Azure Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-online.md
In this tutorial, you learn how to:
To complete this tutorial, you need to: * Download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or 5.7. The on-premises MySQL version must match the Azure Database for MySQL version. For example, MySQL 5.6 can only migrate to Azure Database for MySQL 5.6 and not be upgraded to 5.7. Migrations to or from MySQL 8.0 are not supported.
-* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Azure portal.
+* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
Run the drop foreign key (which is the second column) in the query result to dro
> [!IMPORTANT] > If importing data using a backup, remove the CREATE DEFINER commands manually or by using the --skip-definer command when performing a mysqldump. DEFINER requires super privileges to create and is restricted in Azure Database for MySQL.
-If you have a trigger in the data (insert or update trigger), it will enforce data integrity in the target ahead of the replicated data from the source. The recommendation is to disable triggers in all the tables at the target during migration, and then enable the triggers after migration is done.
+If you have triggers in the database, they will enforce data integrity in the target ahead of the full data migration from the source. The recommendation is to disable triggers on all tables in the target during migration, and then enable them after migration is done.
-To disable triggers in the target database, use the following command:
+Execute the following script in MySQL Workbench on the target database to extract the drop trigger script and add trigger script.
-```
-SELECT Concat('DROP TRIGGER ', Trigger_Name, ';') FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA = 'your_schema';
+```sql
+SELECT
+ SchemaName,
+ GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery,
+ Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery
+FROM
+(
+SELECT
+ TRIGGER_SCHEMA as SchemaName,
+ Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
+ Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
+ '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
+ ACTION_STATEMENT) as AddQuery
+FROM
+ INFORMATION_SCHEMA.TRIGGERS
+ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC
+) AS Queries
+GROUP BY SchemaName
```
+Run the generated drop trigger query (DropQuery column) from the result to drop triggers in the target database. Save the add trigger query (AddQuery column) to run after data migration is complete.
+ ## Register the Microsoft.DataMigration resource provider

1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
- ![Show portal subscriptions](media/tutorial-mysql-to-azure-mysql-online/portal-select-subscriptions.png)
+ ![Show portal subscriptions](media/tutorial-mysql-to-azure-mysql-online/01-portal-select-subscriptions.png)
2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
- ![Show resource providers](media/tutorial-mysql-to-azure-mysql-online/portal-select-resource-provider.png)
+ ![Show resource providers](media/tutorial-mysql-to-azure-mysql-online/02-01-portal-select-resource-provider.png)
3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
- ![Register resource provider](media/tutorial-mysql-to-azure-mysql-online/portal-register-resource-provider.png)
+ ![Register resource provider](media/tutorial-mysql-to-azure-mysql-online/02-02-portal-register-resource-provider.png)
-## Create a DMS instance
+## Create a Database Migration Service instance
1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
- ![Azure Marketplace](media/tutorial-mysql-to-azure-mysql-online/portal-marketplace.png)
+ ![Azure Marketplace](media/tutorial-mysql-to-azure-mysql-online/03-dms-portal-marketplace.png)
2. On the **Azure Database Migration Service** screen, select **Create**.
- ![Create Azure Database Migration Service instance](media/tutorial-mysql-to-azure-mysql-online/dms-create1.png)
+ ![Create Azure Database Migration Service instance](media/tutorial-mysql-to-azure-mysql-online/04-dms-portal-marketplace-create.png)
3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
-4. Select an existing virtual network or create a new one.
+4. Select a pricing tier and move to the networking screen. Offline migration capability is available in both the Standard and Premium pricing tiers.
- The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Database instance.
+ For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+ ![Configure Azure Database Migration Service basic settings](media/tutorial-mysql-to-azure-mysql-online/05-dms-portal-create-basic.png)
-5. Select a pricing tier.
+5. Select an existing virtual network from the list or provide the name of a new virtual network to be created. Move to the review + create screen. Optionally, you can add tags to the service using the tags screen.
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ The virtual network provides Azure Database Migration Service with access to the source MySQL server and the target Azure Database for MySQL instance.
- ![Configure Azure Database Migration Service instance settings](media/tutorial-mysql-to-azure-mysql-online/dms-settings3.png)
+ ![Configure Azure Database Migration Service network settings](media/tutorial-mysql-to-azure-mysql-online/06-dms-portal-create-networking.png)
-6. Select **Create** to create the service.
+ For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+6. Review the configurations and select **Create** to create the service.
+
+ ![Azure Database Migration Service create](media/tutorial-mysql-to-azure-mysql-online/07-dms-portal-create-submit.png)
## Create a migration project
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
+After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
- ![Locate all instances of Azure Database Migration Service](media/tutorial-mysql-to-azure-mysql-online/dms-search.png)
+ ![Locate all instances of Azure Database Migration Service](media/tutorial-mysql-to-azure-mysql-online/08-01-dms-portal-search-service.png)
-2. On the **Azure Database Migration Services** screen, search for the name of Azure Database Migration Service instance that you created, and then select the instance.
+2. Select your migration service instance from the search results and select + **New Migration Project**.
+
+ ![Create a new migration project](media/tutorial-mysql-to-azure-mysql-online/08-02-dms-portal-new-project.png)
- ![Locate your instance of Azure Database Migration Service](media/tutorial-mysql-to-azure-mysql-online/dms-instance-search.png)
+3. On the **New migration project** screen, specify a name for the project. In the **Source server type** selection box, select **MySQL**. In the **Target server type** selection box, select **Azure Database For MySQL**, and in the **Migration activity type** selection box, select **Online data migration**. Select **Create and run activity**.
-3. Select + **New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **MySQL**, in the **Target server type** text box, select **AzureDbForMySQL**.
-5. In the **Choose type of activity** section, select **Online data migration**
-
- ![Create Database Migration Service Project](media/tutorial-mysql-to-azure-mysql-online/dms-create-project4.png)
+ ![Create Database Migration Service Project](media/tutorial-mysql-to-azure-mysql-online/09-dms-portal-project-mysql-create.png)
> [!NOTE] > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
-6. Select **Save**, note the requirements to successfully use DMS to migrate data, and then select **Create and run activity**.
-
-## Specify source details
+## Configure migration project
-1. On the **Add Source Details** screen, specify the connection details for the source MySQL instance.
+1. On the **Select source** screen, specify the connection details for the source MySQL instance, and select **Next : Select target>>**
- ![Add Source Details screen](media/tutorial-mysql-to-azure-mysql-online/dms-add-source-details.png)
+ ![Add source details screen](media/tutorial-mysql-to-azure-mysql-online/10-dms-portal-project-mysql-source.png)
-## Specify target details
+2. On the **Select target** screen, specify the connection details for the target Azure Database for MySQL instance, and select **Next : Select databases>>**
-1. Select **Save**, and then on the **Target details** screen, specify the connection details for the target Azure Database for MySQL server, which is the pre-provisioned instance of Azure Database for MySQL to which the **Employees** schema was deployed by using mysqldump.
-
- ![Target details screen](media/tutorial-mysql-to-azure-mysql-online/dms-add-target-details.png)
-
-2. Select **Save**, and then on the **Map to target databases** screen, map the source and the target database for migration.
+ ![Add target details screen](media/tutorial-mysql-to-azure-mysql-online/11-dms-portal-project-mysql-target.png)
+3. On the **Select databases** screen, map the source and the target databases for migration, and select **Next : Configure migration settings>>**. You can select the **Make Source Server Readonly** option to make the source read-only, but be aware that this is a server-level setting. If selected, it sets the entire server to read-only, not just the selected databases.
+
   If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
- ![Map to target databases](media/tutorial-mysql-to-azure-mysql-online/dms-map-target-details.png)
- > [!NOTE]
+ ![Select database details screen](media/tutorial-mysql-to-azure-mysql-online/12-dms-portal-project-mysql-select-db.png)
+
+ > [!NOTE]
> Though you can select multiple databases in this step, each instance of Azure Database Migration Service supports up to 4 databases for concurrent migration. Also, there is a limit of 10 instances of Azure Database Migration Service per subscription per region. For example, if you have 80 databases to migrate, you can migrate 40 of them to the same region concurrently, but only if you have created 10 instances of the Azure Database Migration Service.
-3. Select **Save**, on the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+4. On the **Configure migration settings** screen, select the tables to include in the migration, and select **Next : Summary>>**. If the target tables contain any data, they aren't selected by default, but you can explicitly select them; they will be truncated before the migration starts.
- ![Migration Summary](media/tutorial-mysql-to-azure-mysql-online/dms-migration-summary.png)
+ ![Select tables screen](media/tutorial-mysql-to-azure-mysql-online/13-dms-portal-project-mysql-select-tbl.png)
-## Run the migration
+5. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity and review the summary to ensure that the source and target details match what you previously specified.
-* Select **Run migration**.
+ ![Migration project summary](media/tutorial-mysql-to-azure-mysql-online/14-dms-portal-project-mysql-activity-summary.png)
- The migration activity window appears, and the **Status** of the activity is **initializing**.
+6. Select **Start migration**. The migration activity window appears, and the **Status** of the activity is **Initializing**. The **Status** changes to **Running** when the table migrations start.
## Monitor the migration

1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Complete**.
- ![Activity Status - complete](media/tutorial-mysql-to-azure-mysql-online/dms-activity-completed.png)
+ ![Activity Status - complete](media/tutorial-mysql-to-azure-mysql-online/15-dms-activity-completed.png)
2. Under **Database Name**, select specific database to get to the migration status for **Full data load** and **Incremental data sync** operations. Full data load will show the initial load migration status while Incremental data sync will show change data capture (CDC) status.
- ![Activity Status - Full load completed](media/tutorial-mysql-to-azure-mysql-online/dms-activity-full-load-completed.png)
+ ![Activity Status - Full load completed](media/tutorial-mysql-to-azure-mysql-online/16-dms-activity-full-load-completed.png)
- ![Activity Status - Incremental data sync](media/tutorial-mysql-to-azure-mysql-online/dms-activity-incremental-data-sync.png)
+ ![Activity Status - Incremental data sync](media/tutorial-mysql-to-azure-mysql-online/17-dms-activity-incremental-data-sync.png)
## Perform migration cutover
After the initial Full load is completed, the databases are marked **Ready to cutover**.
1. When you're ready to complete the database migration, select **Start Cutover**.
- ![Start cutover](media/tutorial-mysql-to-azure-mysql-online/dms-start-cutover.png)
+ ![Start cutover](media/tutorial-mysql-to-azure-mysql-online/18-dms-start-cutover.png)
2. Make sure to stop all the incoming transactions to the source database; wait until the **Pending changes** counter shows **0**.
3. Select **Confirm**, and then select **Apply**.
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-overview.md
description: Overview of DNS hosting service on Microsoft Azure. Host your domai
Previously updated : 3/25/2021 Last updated : 4/20/2021 #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-zones-records.md
Title: DNS Zones and Records overview - Azure DNS description: Overview of support for hosting DNS zones and records in Microsoft Azure DNS. - ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 Previously updated : 12/18/2017 Last updated : 04/20/2021 # Overview of DNS zones and records
-This page explains the key concepts of domains, DNS zones, and DNS records and record sets, and how they are supported in Azure DNS.
+This article explains the key concepts of domains, DNS zones, DNS records, and record sets, and how they're supported in Azure DNS.
## Domain names
-The Domain Name System is a hierarchy of domains. The hierarchy starts from the 'root' domain, whose name is simply '**.**'. Below this come top-level domains, such as 'com', 'net', 'org', 'uk' or 'jp'. Below these are second-level domains, such as 'org.uk' or 'co.jp'. The domains in the DNS hierarchy are globally distributed, hosted by DNS name servers around the world.
+The Domain Name System is a hierarchy of domains. The hierarchy starts from the 'root' domain, whose name is simply '**.**'. Below this come top-level domains, such as 'com', 'net', 'org', 'uk' or 'jp'. Below the top-level domains are second-level domains, such as 'org.uk' or 'co.jp'. The domains in the DNS hierarchy are globally distributed, hosted by DNS name servers around the world.
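The hierarchy described above can be sketched with a short, illustrative Python snippet (not part of Azure DNS or any SDK) that walks a name up to the root:

```python
def hierarchy(name: str) -> list[str]:
    """List the domains from `name` up to the root ('.')."""
    labels = name.rstrip(".").split(".")
    # Each suffix of the label list is one level of the DNS hierarchy.
    return [".".join(labels[i:]) for i in range(len(labels))] + ["."]

hierarchy("www.contoso.com")
# ['www.contoso.com', 'contoso.com', 'com', '.']
```

Each level in the returned list is served by its own set of name servers, which is what delegation (covered below for NS records) wires together.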
A domain name registrar is an organization that allows you to purchase a domain name, such as `contoso.com`. Purchasing a domain name gives you the right to control the DNS hierarchy under that name, for example allowing you to direct the name `www.contoso.com` to your company web site. The registrar may host the domain in its own name servers on your behalf, or allow you to specify alternative name servers.
-Azure DNS provides a globally distributed, high-availability name server infrastructure, which you can use to host your domain. By hosting your domains in Azure DNS, you can manage your DNS records with the same credentials, APIs, tools, billing, and support as your other Azure services.
+Azure DNS provides a globally distributed and high-availability name server infrastructure that you can use to host your domain. By hosting your domains in Azure DNS, you can manage your DNS records with the same credentials, APIs, tools, billing, and support as your other Azure services.
-Azure DNS does not currently support purchasing of domain names. If you want to purchase a domain name, you need to use a third-party domain name registrar. The registrar typically charges a small annual fee. The domains can then be hosted in Azure DNS for management of DNS records. See [Delegate a Domain to Azure DNS](dns-domain-delegation.md) for details.
+Azure DNS currently doesn't support purchasing of domain names. If you want to purchase a domain name, you need to use a third-party domain name registrar. The registrar typically charges a small annual fee. The domains can then be hosted in Azure DNS for management of DNS records. See [Delegate a Domain to Azure DNS](dns-domain-delegation.md) for details.
## DNS zones
The time to live, or TTL, specifies how long each record is cached by clients before being requeried. In the above example, the TTL is 3600 seconds or 1 hour.
-In Azure DNS, the TTL is specified for the record set, not for each record, so the same value is used for all records within that record set. You can specify any TTL value between 1 and 2,147,483,647 seconds.
+In Azure DNS, the TTL gets specified for the record set, not for each record, so the same value is used for all records within that record set. You can specify any TTL value between 1 and 2,147,483,647 seconds.
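A quick illustrative check of these bounds (a hypothetical helper, not an Azure SDK call) — note the single TTL sits on the record set, not on individual records:

```python
MIN_TTL, MAX_TTL = 1, 2_147_483_647  # TTL bounds stated above, in seconds

def validate_ttl(ttl: int) -> int:
    """Hypothetical helper: reject TTLs outside the range Azure DNS accepts."""
    if not (MIN_TTL <= ttl <= MAX_TTL):
        raise ValueError(f"TTL {ttl} outside [{MIN_TTL}, {MAX_TTL}]")
    return ttl

# One TTL applies to the whole record set, not to each record:
record_set = {"name": "www", "type": "A", "ttl": validate_ttl(3600),
              "records": ["203.0.113.10", "203.0.113.11"]}
```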
### Wildcard records
-Azure DNS supports [wildcard records](https://en.wikipedia.org/wiki/Wildcard_DNS_record). Wildcard records are returned in response to any query with a matching name (unless there is a closer match from a non-wildcard record set). Azure DNS supports wildcard record sets for all record types except NS and SOA.
+Azure DNS supports [wildcard records](https://en.wikipedia.org/wiki/Wildcard_DNS_record). Wildcard records get returned in response to any query with a matching name, unless there's a closer match from a non-wildcard record set. Azure DNS supports wildcard record sets for all record types except NS and SOA.
-To create a wildcard record set, use the record set name '\*'. Alternatively, you can also use a name with '\*' as its left-most label, for example, '\*.foo'.
+To create a wildcard record set, use the record set name '\*'. You can also use a name with '\*' as its left-most label, for example, '\*.foo'.
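A simplified Python model of this lookup behavior (illustrative only; real DNS wildcard matching has more rules than shown, such as left-most-label matching for names like '\*.foo'):

```python
def resolve(record_sets: dict[str, str], name: str):
    """Return the value for `name`: an exact (non-wildcard) record set wins;
    otherwise the '*' wildcard record set answers, if one exists."""
    if name in record_sets:       # closer, non-wildcard match takes precedence
        return record_sets[name]
    return record_sets.get("*")   # wildcard answers any other matching query

zone = {"www": "203.0.113.10", "*": "203.0.113.99"}
resolve(zone, "www")   # exact match -> '203.0.113.10'
resolve(zone, "mail")  # no exact match -> wildcard '203.0.113.99'
```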
### CAA records
-CAA records allow domain owners to specify which Certificate Authorities (CAs) are authorized to issue certificates for their domain. This allows CAs to avoid mis-issuing certificates in some circumstances. CAA records have three properties:
-* **Flags**: This is an integer between 0 and 255, used to represent the critical flag that has special meaning per the [RFC](https://tools.ietf.org/html/rfc6844#section-3)
+CAA records allow domain owners to specify which Certificate Authorities (CAs) are authorized to issue certificates for their domain. This record allows CAs to avoid mis-issuing certificates in some circumstances. CAA records have three properties:
+* **Flags**: This field is an integer between 0 and 255, used to represent the critical flag that has special meaning per the [RFC](https://tools.ietf.org/html/rfc6844#section-3)
* **Tag**: an ASCII string that can be one of the following:
- * **issue**: use this if you want to specify CAs that are permitted to issue certs (all types)
- * **issuewild**: use this if you want to specify CAs that are permitted to issue certs (wildcard certs only)
+ * **issue**: if you want to specify CAs that are permitted to issue certs (all types)
+ * **issuewild**: if you want to specify CAs that are permitted to issue certs (wildcard certs only)
 * **iodef**: specify an email address or hostname to which CAs can notify for unauthorized cert issue requests
* **Value**: the value for the specific Tag chosen

### CNAME records
-CNAME record sets cannot coexist with other record sets with the same name. For example, you cannot create a CNAME record set with the relative name 'www' and an A record with the relative name 'www' at the same time.
+CNAME record sets can't coexist with other record sets with the same name. For example, you can't create a CNAME record set with the relative name 'www' and an A record with the relative name 'www' at the same time.
-Because the zone apex (name = '\@') always contains the NS and SOA record sets that were created when the zone was created, you can't create a CNAME record set at the zone apex.
+Since the zone apex (name = '\@') always contains the NS and SOA record sets created with the zone, you can't create a CNAME record set at the zone apex.
-These constraints arise from the DNS standards and are not limitations of Azure DNS.
+These constraints arise from the DNS standards and aren't limitations of Azure DNS.
### NS records
-The NS record set at the zone apex (name '\@') is created automatically with each DNS zone, and is deleted automatically when the zone is deleted (it cannot be deleted separately).
+The NS record set at the zone apex (name '\@') gets created automatically with each DNS zone, and gets deleted automatically when the zone gets deleted. It can't be deleted separately.
-This record set contains the names of the Azure DNS name servers assigned to the zone. You can add additional name servers to this NS record set, to support co-hosting domains with more than one DNS provider. You can also modify the TTL and metadata for this record set. However, you cannot remove or modify the pre-populated Azure DNS name servers.
+This record set contains the names of the Azure DNS name servers assigned to the zone. You can add more name servers to this NS record set, to support cohosting domains with more than one DNS provider. You can also modify the TTL and metadata for this record set. However, removing or modifying the pre-populated Azure DNS name servers isn't allowed.
-This applies only to the NS record set at the zone apex. Other NS record sets in your zone (as used to delegate child zones) can be created, modified, and deleted without constraint.
+This restriction only applies to the NS record set at the zone apex. Other NS record sets in your zone (as used to delegate child zones) can be created, modified, and deleted without constraint.
### SOA records
-A SOA record set is created automatically at the apex of each zone (name = '\@'), and is deleted automatically when the zone is deleted. SOA records cannot be created or deleted separately.
+A SOA record set gets created automatically at the apex of each zone (name = '\@'), and gets deleted automatically when the zone gets deleted. SOA records cannot be created or deleted separately.
-You can modify all properties of the SOA record except for the 'host' property, which is pre-configured to refer to the primary name server name provided by Azure DNS.
+You can modify all properties of the SOA record except for the 'host' property. This property gets pre-configured to refer to the primary name server name provided by Azure DNS.
-The zone serial number in the SOA record is not updated automatically when changes are made to the records in the zone. It can be updated manually by editing the SOA record, if necessary.
+The zone serial number in the SOA record isn't updated automatically when changes are made to the records in the zone. It can be updated manually by editing the SOA record, if necessary.
### SPF records
[SRV records](https://en.wikipedia.org/wiki/SRV_record) are used by various services to specify server locations. When specifying an SRV record in Azure DNS:
-* The *service* and *protocol* must be specified as part of the record set name, prefixed with underscores. For example, '\_sip.\_tcp.name'. For a record at the zone apex, there is no need to specify '\@' in the record name, simply use the service and protocol, for example '\_sip.\_tcp'.
+* The *service* and *protocol* must be specified as part of the record set name, prefixed with underscores. For example, '\_sip.\_tcp.name'. For a record at the zone apex, there's no need to specify '\@' in the record name, simply use the service and protocol, for example '\_sip.\_tcp'.
* The *priority*, *weight*, *port*, and *target* are specified as parameters of each record in the record set.

### TXT records
-TXT records are used to map domain names to arbitrary text strings. They are used in multiple applications, in particular related to email configuration, such as the [Sender Policy Framework (SPF)](https://en.wikipedia.org/wiki/Sender_Policy_Framework) and [DomainKeys Identified Mail (DKIM)](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail).
+TXT records are used to map domain names to arbitrary text strings. They're used in multiple applications, in particular related to email configuration, such as the [Sender Policy Framework (SPF)](https://en.wikipedia.org/wiki/Sender_Policy_Framework) and [DomainKeys Identified Mail (DKIM)](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail).
The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 254 characters in length. Where multiple strings are used, they are concatenated by clients and treated as a single string. When calling the Azure DNS REST API, you need to specify each TXT string separately. When using the Azure portal, PowerShell or CLI interfaces you should specify a single string per record, which is automatically divided into 254-character segments if necessary.
-The multiple strings in a DNS record should not be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
+The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
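The 254-character segmentation described above can be sketched in Python (an illustrative helper, not the actual portal/CLI implementation; as a simplification it checks the 1024-character limit against a single value, whereas Azure DNS applies it across all records in the set combined):

```python
MAX_SEGMENT = 254   # per-string limit described above
MAX_TOTAL = 1024    # Azure DNS total-length limit for a TXT record set

def split_txt(value: str) -> list[str]:
    """Split one logical TXT value into <=254-character strings, the way the
    portal, PowerShell, and CLI do automatically; clients re-concatenate them."""
    if len(value) > MAX_TOTAL:
        raise ValueError(f"total TXT length {len(value)} exceeds {MAX_TOTAL}")
    return [value[i:i + MAX_SEGMENT] for i in range(0, len(value), MAX_SEGMENT)]

segments = split_txt("v=spf1 " + "a" * 300)  # 307 characters in total
len(segments)       # 2 strings: 254 + 53 characters
"".join(segments)   # clients treat the strings as one concatenated value
```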
## Tags and metadata

### Tags
-Tags are a list of name-value pairs and are used by Azure Resource Manager to label resources. Azure Resource Manager uses tags to enable filtered views of your Azure bill, and also enables you to set a policy on which tags are required. For more information about tags, see [Using tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md).
+Tags are a list of name-value pairs and are used by Azure Resource Manager to label resources. Azure Resource Manager uses tags to enable filtered views of your Azure bill and also enables you to set a policy for certain tags. For more information about tags, see [Using tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md).
-Azure DNS supports using Azure Resource Manager tags on DNS zone resources. It does not support tags on DNS record sets, although as an alternative 'metadata' is supported on DNS record sets as explained below.
+Azure DNS supports using Azure Resource Manager tags on DNS zone resources. It doesn't support tags on DNS record sets, although as an alternative 'metadata' is supported on DNS record sets as explained below.
### Metadata
-As an alternative to record set tags, Azure DNS supports annotating record sets using 'metadata'. Similar to tags, metadata enables you to associate name-value pairs with each record set. This can be useful, for example to record the purpose of each record set. Unlike tags, metadata cannot be used to provide a filtered view of your Azure bill and cannot be specified in an Azure Resource Manager policy.
+As an alternative to record set tags, Azure DNS supports annotating record sets using 'metadata'. Similar to tags, metadata enables you to associate name-value pairs with each record set. This feature can be useful, for example to record the purpose of each record set. Unlike tags, metadata cannot be used to provide a filtered view of your Azure bill and cannot be specified in an Azure Resource Manager policy.
## Etags

Suppose two people or two processes try to modify a DNS record at the same time. Which one wins? And does the winner know that they've overwritten changes created by someone else?
-Azure DNS uses Etags to handle concurrent changes to the same resource safely. Etags are separate from [Azure Resource Manager 'Tags'](#tags). Each DNS resource (zone or record set) has an Etag associated with it. Whenever a resource is retrieved, its Etag is also retrieved. When updating a resource, you can choose to pass back the Etag so Azure DNS can verify that the Etag on the server matches. Since each update to a resource results in the Etag being regenerated, an Etag mismatch indicates a concurrent change has occurred. Etags can also be used when creating a new resource to ensure that the resource does not already exist.
+Azure DNS uses Etags to handle concurrent changes to the same resource safely. Etags are separate from [Azure Resource Manager 'Tags'](#tags). Each DNS resource (zone or record set) has an Etag associated with it. Whenever a resource is retrieved, its Etag is also retrieved. When updating a resource, you can choose to pass back the Etag so Azure DNS can verify the Etag on the server matches. Since each update to a resource results in the Etag being regenerated, an Etag mismatch indicates a concurrent change has occurred. Etags can also be used when creating a new resource to ensure the resource doesn't already exist.
By default, Azure DNS PowerShell uses Etags to block concurrent changes to zones and record sets. The optional *-Overwrite* switch can be used to suppress Etag checks, in which case any concurrent changes that have occurred are overwritten.
At the level of the Azure DNS REST API, Etags are specified using HTTP headers.
| None |PUT always succeeds (no Etag checks) |
| If-match \<etag> |PUT only succeeds if resource exists and Etag matches |
| If-match * |PUT only succeeds if resource exists |
-| If-none-match * |PUT only succeeds if resource does not exist |
+| If-none-match * |PUT only succeeds if resource doesn't exist |
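The header semantics in this table can be modeled with a small Python sketch (a toy in-memory store for illustration, not the Azure DNS REST API):

```python
import itertools

class EtagStore:
    """Toy model of the Etag checks in the table above (not the Azure DNS API)."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._data = {}  # resource name -> (etag, value)

    def put(self, name, value, if_match=None, if_none_match=None):
        current = self._data.get(name)
        if if_none_match == "*" and current is not None:
            raise RuntimeError("resource already exists")            # If-none-match *
        if if_match == "*" and current is None:
            raise RuntimeError("resource does not exist")            # If-match *
        if if_match not in (None, "*") and (current is None or current[0] != if_match):
            raise RuntimeError("Etag mismatch: concurrent change")   # If-match <etag>
        etag = f"etag-{next(self._counter)}"  # every update regenerates the Etag
        self._data[name] = (etag, value)
        return etag

store = EtagStore()
t1 = store.put("www", "203.0.113.10", if_none_match="*")  # create: succeeds
t2 = store.put("www", "203.0.113.11", if_match=t1)        # matching Etag: succeeds
try:
    store.put("www", "203.0.113.12", if_match=t1)         # stale Etag: rejected
except RuntimeError as err:
    print(err)  # Etag mismatch: concurrent change
```

The stale-Etag failure at the end is exactly the signal PowerShell suppresses when you pass the *-Overwrite* switch.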
## Limits
event-grid Create View Manage System Topics Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/create-view-manage-system-topics-cli.md
For a local installation:
## Create a system topic

- To create a system topic on an Azure source first and then create an event subscription for that topic, see the following reference topics:
- - [az eventgrid system-topic create](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-create)
+ - [az eventgrid system-topic create](/cli/azure/eventgrid/system-topic#az_eventgrid_system_topic_create)
```azurecli-interactive
# Get the ID of the Azure source (for example: Azure Storage account)
```
```azurecli-interactive
az eventgrid topic-type list --output json | grep -w id
```
- - [az eventgrid system-topic event-subscription create](/cli/azure/ext/eventgrid/eventgrid/system-topic/event-subscription#ext-eventgrid-az-eventgrid-system-topic-event-subscription-create)
+ - [az eventgrid system-topic event-subscription create](/cli/azure/eventgrid/system-topic/event-subscription#az_eventgrid_system_topic_event-subscription-create)
```azurecli-interactive
az eventgrid system-topic event-subscription create --name <SPECIFY EVENT SUBSCRIPTION NAME> \
    -g rg1 --system-topic-name <SYSTEM TOPIC NAME> \
    --endpoint <ENDPOINT URL>
```
-- To create a system topic (implicitly) when creating an event subscription for an Azure source, use the [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) method. Here's an example:
+- To create a system topic (implicitly) when creating an event subscription for an Azure source, use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) method. Here's an example:
```azurecli-interactive
storageid=$(az storage account show --name <AZURE STORAGE ACCOUNT NAME> --resource-group <AZURE RESOURCE GROUP NAME> --query id --output tsv)
```
## View all system topics

To view all system topics and details of a selected system topic, use the following commands:

-- [az eventgrid system-topic list](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-list)
+- [az eventgrid system-topic list](/cli/azure/eventgrid/system-topic#az_eventgrid_system_topic_list)
```azurecli-interactive
az eventgrid system-topic list
```
-- [az eventgrid system-topic show](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-show)
+- [az eventgrid system-topic show](/cli/azure/eventgrid/system-topic#az_eventgrid_system_topic_show)
```azurecli-interactive
az eventgrid system-topic show -g <AZURE RESOURCE GROUP NAME> -n <SYSTEM TOPIC NAME>
```
## Delete a system topic

To delete a system topic, use the following command:

-- [az eventgrid system-topic delete](/cli/azure/ext/eventgrid/eventgrid/system-topic#ext-eventgrid-az-eventgrid-system-topic-delete)
+- [az eventgrid system-topic delete](/cli/azure/eventgrid/system-topic#az_eventgrid_system_topic_delete)
```azurecli-interactive
az eventgrid system-topic delete -g <AZURE RESOURCE GROUP NAME> --name <SYSTEM TOPIC NAME>
```
event-grid Partner Onboarding Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/partner-onboarding-overview.md
After posting to the partner namespace endpoint, you receive a response. The res
* [ARM template](/azure/templates/microsoft.eventgrid/allversions)
* [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json)
* [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
- * [CLI extension](/cli/azure/ext/eventgrid/)
+ * [CLI extension](/cli/azure/)
### SDKs

* [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
event-grid Event Grid Cli Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-azure-subscription.md
This script uses the following command to create the event subscription. Each co
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
+| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) - extension version | Create an Event Grid subscription. |
## Next steps
event-grid Event Grid Cli Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-blob.md
This script uses the following command to create the event subscription. Each co
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
+| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) - extension version | Create an Event Grid subscription. |
## Next steps
event-grid Event Grid Cli Resource Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
This script uses the following command to create the event subscription. Each co
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
+| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) - extension version | Create an Event Grid subscription. |
## Next steps
event-grid Event Grid Cli Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-resource-group.md
This script uses the following command to create the event subscription. Each co
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
+| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) - extension version | Create an Event Grid subscription. |
## Next steps
event-grid Event Grid Cli Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md
This script uses the following command to create the event subscription. Each co
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) | Create an Event Grid subscription. |
-| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
+| [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create) - extension version | Create an Event Grid subscription. |
## Next steps
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
event-hubs Configure Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/configure-customer-managed-key.md
After you enable customer-managed keys, you need to associate the customer manag
You can rotate your key in the key vault by using the Azure Key Vault rotation mechanism. Activation and expiration dates can also be set to automate key rotation. The Event Hubs service detects new key versions and starts using them automatically.

## Revoke access to keys
-Revoking access to the encryption keys won't purge the data from Event Hubs. However, the data can't be accessed from the Event Hubs namespace. You can revoke the encryption key through access policy or by deleting the key. Learn more about access policies and securing your key vault from [Secure access to a key vault](../key-vault/general/security-overview.md).
+Revoking access to the encryption keys won't purge the data from Event Hubs. However, the data can't be accessed from the Event Hubs namespace. You can revoke the encryption key through access policy or by deleting the key. Learn more about access policies and securing your key vault from [Secure access to a key vault](../key-vault/general/security-features.md).
Once the encryption key is revoked, the Event Hubs service on the encrypted namespace will become inoperable. If the access to the key is enabled or the delete key is restored, Event Hubs service will pick the key so you can access the data from the encrypted Event Hubs namespace.
event-hubs Event Hubs Dotnet Standard Get Started Send Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-get-started-send-legacy.md
To complete this quickstart, you need the following prerequisites:
- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
- [Microsoft Visual Studio 2019](https://www.visualstudio.com).
-- [.NET Core Visual Studio 2015 or 2017 tools](https://www.microsoft.com/net/core).
+- [.NET Core SDK](https://dotnet.microsoft.com/download).
- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the event hub namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#get-connection-string-from-the-portal). You use the connection string later in this quickstart.

## Send events
event-hubs Event Hubs Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-quickstart-powershell.md
To complete this tutorial, make sure you have:
- Azure subscription. If you don't have one, [create a free account][] before you begin.
- [Visual Studio 2019](https://www.visualstudio.com/vs).
-- [.NET Standard SDK](https://www.microsoft.com/net/download/windows), version 2.0 or later.
+- [.NET Core SDK](https://dotnet.microsoft.com/download), version 2.0 or later.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/14/2021 Last updated : 04/21/2021
firewall-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/security-baseline.md
Apply tags to your Azure resources, resource groups, and subscriptions to logica
**Guidance**: Remove Azure Firewall Manager resources when they are no longer needed to minimize attack surface. Users can manage their Azure Firewall Manager resources via the Azure portal, CLI, or REST APIs.

-- [Azure Firewall Policy CLI](/cli/azure/ext/azure-firewall/network/firewall/policy)
+- [Azure Firewall Policy CLI](/cli/azure/network/firewall/policy)
- [Azure network CLI](/powershell/module/az.network/?preserve-view=true&view=azps-5.1.0#networking)
Use workflow automation features in Azure Security Center and Azure Sentinel to
- [Azure Firewall Policy template reference](/azure/templates/microsoft.network/firewallpolicies) -- [Azure Firewall Policy CLI](/cli/azure/ext/azure-firewall/network/firewall/policy)
+- [Azure Firewall Policy CLI](/cli/azure/network/firewall/policy)
- [Illustration of guardrails implementation in enterprise-scale landing zone](/azure/cloud-adoption-framework/ready/enterprise-scale/architecture#landing-zone-expanded-definition)
firewall Active Ftp Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/active-ftp-support.md
To deploy using Azure PowerShell, use the `AllowActiveFTP` parameter. For more i
## Azure CLI
-To deploy using the Azure CLI, use the `--allow-active-ftp` parameter. For more information, see [az network firewall create](/cli/azure/ext/azure-firewall/network/firewall#ext_azure_firewall_az_network_firewall_create-optional-parameters).
+To deploy using the Azure CLI, use the `--allow-active-ftp` parameter. For more information, see [az network firewall create](/cli/azure/network/firewall#az_network_firewall_create-optional-parameters).
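As a sketch of how the parameter fits into a full deployment command, a firewall created with active FTP enabled might look like the following. The resource names are hypothetical placeholders, and the snippet assumes a CLI version that supports `--allow-active-ftp`:

```azurecli
# Hypothetical resource names; adjust to your environment.
az network firewall create \
  --name MyFirewall \
  --resource-group MyResourceGroup \
  --location eastus \
  --allow-active-ftp true
```

The same setting can be toggled later with `az network firewall update` using the same parameter.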
## Azure Resource Manager (ARM) template
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 04/05/2021 Last updated : 04/20/2021 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall has the following known issues:
|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This is required today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |Outbound traffic on TCP port 25 isn't allowed| Outbound SMTP connections that use TCP port 25 are blocked. Port 25 is primarily used for unauthenticated email delivery. This is the default platform behavior for virtual machines. For more information, see [Troubleshoot outbound SMTP connectivity issues in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). However, unlike virtual machines, it isn't currently possible to enable this functionality on Azure Firewall. Note: to allow authenticated SMTP (port 587) or SMTP over a port other than 25, make sure you configure a network rule and not an application rule, as SMTP inspection isn't supported at this time.|Follow the recommended method to send email, as documented in the SMTP troubleshooting article.
Or, exclude the virtual machine that needs outbound SMTP access from your default route to the firewall. Instead, configure outbound access directly to the internet.
-|SNAT port utilization metric shows 0%|The Azure Firewall SNAT port utilization metric may show 0% usage even when SNAT ports are used. In this case, using the metric as part of the firewall health metric provides an incorrect result.|This issue has been fixed and rollout to production is targeted for May 2020. In some cases, firewall redeployment resolves the issue, but it's not consistent. As an intermediate workaround, only use the firewall health state to look for *status=degraded*, not for *status=unhealthy*. Port exhaustion will show as *degraded*. *Not healthy* is reserved for future use when the are more metrics to affect the firewall health.
+|SNAT port exhaustion|Azure Firewall currently supports 1024 ports per Public IP address per backend virtual machine scale set instance. By default, there are two VMSS instances.|This is an SLB limitation and we are constantly looking for opportunities to increase the limits. In the meantime, it is recommended to configure Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions.|
|DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established. |Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.| |Inbound Passive FTP may not work depending on your FTP server configuration |Passive FTP establishes different connections for control and data channels. Inbound connections on Azure Firewall are SNATed to one of the firewall private IP addresses to ensure symmetric routing. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|Preserving the original source IP address is being investigated. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses.|
Azure Firewall has the following known issues:
|Start/Stop doesn't work with a firewall configured in forced-tunnel mode|Start/stop doesn't work with Azure firewall configured in forced-tunnel mode. Attempting to start Azure Firewall with forced tunneling configured results in the following error:<br><br>*Set-AzFirewall: AzureFirewall FW-xx management IP configuration cannot be added to an existing firewall. Redeploy with a management IP configuration if you want to use forced tunneling support.<br>StatusCode: 400<br>ReasonPhrase: Bad Request*|Under investigation.<br><br>As a workaround, you can delete the existing firewall and create a new one with the same parameters.| |Can't add firewall policy tags using the portal|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal. The following error is generated: *Could not save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.| |IPv6 not yet supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
-|Updating multiple IP Groups fails with conflict error.|When you update two or more IPGroups attached to the same firewall, one of the resource goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IPGroup, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IPGroup is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IPGroups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
+|Updating multiple IP Groups fails with conflict error.|When you update two or more IPGroups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IPGroup, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IPGroup is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IPGroups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
## Next steps
frontdoor Front Door Tutorial Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-tutorial-rules-engine.md
In this tutorial, you learn how to:
az network front-door routing-rule update -g {rg} -f {front_door} -n {routing_rule_name} --remove rulesEngine # case sensitive word 'rulesEngine' ```
-For more information, a full list of AFD Rules Engine commands can be found [here](/cli/azure/ext/front-door/network/front-door/rules-engine).
+For a full list of AFD Rules Engine commands, see the [Azure CLI reference for AFD Rules Engine](/cli/azure/network/front-door/rules-engine).
## Clean up resources
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-waf.md
In this tutorial, you'll learn how to:
``` > [!NOTE]
-> For more information about the commands used in this tutorial, see [Azure CLI reference for Front Door](/cli/azure/ext/front-door).
+> For more information about the commands used in this tutorial, see [Azure CLI reference for Front Door](/cli/azure/).
## Create an Azure Front Door resource
frontdoor Quickstart Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/quickstart-create-front-door-cli.md
Make note of the default host name of each web app so you can define the backend
Create a basic Front Door with default load balancing settings, health probe, and routing rules by running the following commands:
-Create Front Door with [az network front-door create](/cli/azure/ext/front-door/network/front-door#ext_front_door_az_network_front_door_create&preserve-view=true):
+Create a Front Door with [az network front-door create](/cli/azure/network/front-door#az_network_front_door_create&preserve-view=true):
```azurecli-interactive az network front-door create \
germany Germany Migration Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-databases.md
Migrating a database with geo-replication or BACPAC file does not copy over the
A new PowerShell command **Copy-AzSqlDatabaseLongTermRetentionBackup** has been introduced, which can be used to copy the long-term retention backups from Azure Germany to Azure global regions. 1. **Copy LTR backup using backup name**
-Following example shows how you can copy a LTR backup from Azure Germany to Azure global region, using the backupname.
+The following example shows how you can copy an LTR backup from Azure Germany to an Azure global region using the backup name.
```powershell # Source database and target database info
Copy-AzSqlDatabaseLongTermRetentionBackup
``` 2. **Copy LTR backup using backup resourceID**
-Following example shows how you can copy LTR backup from Azure Germany to Azure global region, using a backup resourceID.
+The following example shows how you can copy an LTR backup from Azure Germany to an Azure global region using a backup resourceID. This example can also be used to copy backups of a deleted database.
```powershell
+$location = "<location>"
+# list LTR backups for All databases (you have option to choose All/Live/Deleted)
+$ltrBackups = Get-AzSqlDatabaseLongTermRetentionBackup -Location $location -DatabaseState All
+
+# select the LTR backup you want to copy
+$ltrBackup = $ltrBackups[0]
+$resourceID = $ltrBackup.ResourceId
+ # Source Database and target database info
-$resourceID = "/subscriptions/000000000-eeee-4444-9999-e9999a5555ab/resourceGroups/mysourcergname/providers/Microsoft.Sql/locations/germanynorth/longTermRetentionServers/mysourceserver/longTermRetentionDatabases/mysourcedb/longTermRetentionBackups/0e848ed8-c229-444c-a3ba-75ac0507dd31;132567894740000000"
$targetDatabaseName = "<target database name>"
$targetSubscriptionId = "<target subscriptionID>"
$targetRGName = "<target resource group name>"
$targetServerFQDN = "<targetservername.database.windows.net>"

Copy-AzSqlDatabaseLongTermRetentionBackup
- -ResourceId $sourceRGName
+ -ResourceId $resourceID
-TargetDatabaseName $targetDatabaseName -TargetSubscriptionId $targetSubscriptionId -TargetResourceGroupName $targetRGName -TargetServerFullyQualifiedDomainName $targetServerFQDN ```
-3. **Copy LTR backup of a deleted database**
-Following example shows how to copy LTR backup of a deleted or dropped database from Azure Germany to Azure global. Note that, since this is a backup of a dropped database, the database should exist on the target server when starting the copy operation.
-
-```powershell
-# Source Database and target database info
-$targetDatabaseName = "<target database name>"
-$targetSubscriptionId = "<target subscriptionID>"
-$targetRGName = "<target resource group name>"
-$targetServerFQDN = "<targetservername.database.windows.net>"
-
-Copy-AzSqlDatabaseLongTermRetentionBackup
--TargetDatabaseName $targetDatabaseName --TargetSubscriptionId $targetSubscriptionId--TargetResourceGroupName $targetRGName-- TargetServerFullyQualifiedDomainName $targetServerFQDN
-```
- ### Limitations
governance Create Blueprint Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/create-blueprint-azurecli.md
assignment on the resource group.
> [!NOTE] > Use the filename _blueprint.json_ when importing your blueprint definitions. > This file name is used when calling
- > [az blueprint import](/cli/azure/ext/blueprint/blueprint#ext_blueprint_az_blueprint_import).
+ > [az blueprint import](/cli/azure/blueprint#az_blueprint_import).
The blueprint object is created in the default subscription by default. To specify the management group, use parameter **managementgroup**. To specify the subscription, use parameter
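Putting the note and the scope parameters together, an import might be sketched as follows. The blueprint name and directory are hypothetical; the directory is assumed to contain the definition file named _blueprint.json_ alongside any artifact files:

```azurecli
# Hypothetical names; ./MyBlueprintDir must contain a file named blueprint.json
az blueprint import \
  --name MyBlueprint \
  --input-path ./MyBlueprintDir \
  --management-group MyManagementGroup
```

Omit `--management-group` (or use `--subscription`) to target a subscription instead.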
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 02/17/2021 Last updated : 04/19/2021 # Understand Azure Policy effects
After the Resource Provider returns a success code on a Resource Manager mode re
**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine if additional compliance logging or action is required.
-Additionally, `PATCH` requests that only modify `tags` related fields restricts policy evaluation to policies containing conditions that inspect `tags` related fields.
+Additionally, `PATCH` requests that only modify `tags` related fields restricts policy evaluation to
+policies containing conditions that inspect `tags` related fields.
## Append
take either a single **field/value** pair or multiples. Refer to
### Append examples
-Example 1: Single **field/value** pair using a non-**\[\*\]** [alias](definition-structure.md#aliases)
-with an array **value** to set IP rules on a storage account. When the non-**\[\*\]** alias is an
-array, the effect appends the **value** as the entire array. If the array already exists, a deny
-event occurs from the conflict.
+Example 1: Single **field/value** pair using a non-**\[\*\]**
+[alias](definition-structure.md#aliases) with an array **value** to set IP rules on a storage
+account. When the non-**\[\*\]** alias is an array, the effect appends the **value** as the entire
+array. If the array already exists, a deny event occurs from the conflict.
```json "then": {
Similar to AuditIfNotExists, a DeployIfNotExists policy definition executes a te
when the condition is met. > [!NOTE]
-> [Nested templates](../../../azure-resource-manager/templates/linked-templates.md#nested-template) are supported with **deployIfNotExists**, but
-> [linked templates](../../../azure-resource-manager/templates/linked-templates.md#linked-template) are currently not supported.
+> [Nested templates](../../../azure-resource-manager/templates/linked-templates.md#nested-template)
+> are supported with **deployIfNotExists**, but
+> [linked templates](../../../azure-resource-manager/templates/linked-templates.md#linked-template)
+> are currently not supported.
### DeployIfNotExists evaluation
related resources to match and the template deployment to execute.
- Default is _ResourceGroup_. - **Deployment** (required) - This property should include the full template deployment as it would be passed to the
- `Microsoft.Resources/deployments` PUT API. For more information, see the [Deployments REST API](/rest/api/resources/deployments).
+ `Microsoft.Resources/deployments` PUT API. For more information, see the
+ [Deployments REST API](/rest/api/resources/deployments).
> [!NOTE] > All functions inside the **Deployment** property are evaluated as components of the template,
operations.
The following operations are supported by Modify: -- Add, replace or remove resource tags. For tags, a Modify policy should have `mode` set to
+- Add, replace, or remove resource tags. For tags, a Modify policy should have `mode` set to
_Indexed_ unless the target resource is a resource group. - Add or replace the value of managed identity type (`identity.type`) of virtual machines and virtual machine scale sets.
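For instance, the tag operation described above takes this general shape in a policy definition's **then** block; the tag name and value here are illustrative, and the role shown is the built-in Contributor role:

```json
"then": {
  "effect": "modify",
  "details": {
    "roleDefinitionIds": [
      "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
    ],
    "operations": [
      {
        "operation": "addOrReplace",
        "field": "tags['environment']",
        "value": "Test"
      }
    ]
  }
}
```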
needed for remediation and the **operations** used to add, update, or remove tag
- The role defined must include all operations granted to the [Contributor](../../../role-based-access-control/built-in-roles.md#contributor) role. - **conflictEffect** (optional)
- - Determines which policy definition "wins" in the event that more than one policy definition
- modifies the same property or when the Modify operation doesn't work on the specified alias.
+ - Determines which policy definition "wins" if more than one policy definition modifies the same
+ property or when the Modify operation doesn't work on the specified alias.
- For new or updated resources, the policy definition with _deny_ takes precedence. Policy definitions with _audit_ skip all **operations**. If more than one policy definition has _deny_, the request is denied as a conflict. If all policy definitions have _audit_, then none
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 03/16/2021 Last updated : 04/19/2021 # Get compliance data of Azure resources
Evaluations of assigned policies and initiatives happen as the result of various
compliant status information for the individual resource becomes available in the portal and SDKs around 15 minutes later. This event doesn't cause an evaluation of other resources.
+- A subscription (resource type `Microsoft.Resources/subscriptions`) is created or moved within a
+ [management group hierarchy](../../management-groups/overview.md) with an assigned policy
+ definition targeting the subscription resource type. Evaluation of the subscription-supported
+ effects (audit, auditIfNotExists, deployIfNotExists, modify), logging, and any remediation actions
+ takes around 30 minutes.
+ - A [policy exemption](../concepts/exemption-structure.md) is created, updated, or deleted. In this scenario, the corresponding assignment is evaluated for the defined exemption scope.
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/14/2021 Last updated : 04/21/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in Azure Security Benchmark. For more information about this compliance standard, see
-[Azure Security Benchmark](../../../security/benchmarks/overview.md). To understand
+[Azure Security Benchmark](/security/benchmark/azure/introduction). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
initiative definition.
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. 
Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. 
To protect your registries from potential threats, allow access from only specific public IP addresses or address ranges. If your registry doesn't have an IP/firewall rule or a configured virtual network, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and here [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
initiative definition.
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | |[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
initiative definition.
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2bdd0062-9d75-436e-89df-487dd8e4b3c7) |This policy audits any Cognitive Services account not using data encryption. For each Cognitive Services account with storage, should enable data encryption with either customer managed or Microsoft managed key. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_Encryption_Audit.json) |
|[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
|[Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, deny, disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
initiative definition.
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. TLS 1.3 is faster and more secure than the earlier versions: TLS 1.0-1.2 and SSL 2-3, which are all considered legacy protocols. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
### Encrypt sensitive data at rest
initiative definition.
|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
|[Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
|[Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
-|[Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2bdd0062-9d75-436e-89df-487dd8e4b3c7) |This policy audits any Cognitive Services account not using data encryption. For each Cognitive Services account with storage, should enable data encryption with either customer managed or Microsoft managed key. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_Encryption_Audit.json) |
|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
-|[Cognitive Services accounts should use customer owned storage or enable data encryption.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11566b39-f7f7-4b82-ab06-68d8700eb0a4) |This policy audits any Cognitive Services account not using customer owned storage nor data encryption. For each Cognitive Services account with storage, use either customer owned storage or enable data encryption. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_BYOX_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/14/2021 Last updated : 04/21/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in Azure Security Benchmark v1. For more information about this compliance standard, see
-[Azure Security Benchmark v1](../../../security/benchmarks/overview.md). To understand
+[Azure Security Benchmark v1](/security/benchmark/azure/introduction). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
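Each version link in the tables below points at the policy definition's JSON file in the Azure Policy GitHub repository. As an illustrative sketch of how those files relate to the table columns (the JSON excerpt here is a simplified, hypothetical shape modeled on the linked files, not a verbatim copy of any one definition), you can read the display name, version, and allowed effects out of such a document with a few lines of Python:

```python
import json

# Hypothetical, simplified excerpt shaped like the built-in policy
# definition JSON files linked in the "Version (GitHub)" column.
policy_json = """
{
  "properties": {
    "displayName": "Azure Backup should be enabled for Virtual Machines",
    "policyType": "BuiltIn",
    "metadata": {"version": "2.0.0", "category": "Backup"},
    "parameters": {
      "effect": {
        "type": "String",
        "allowedValues": ["AuditIfNotExists", "Disabled"],
        "defaultValue": "AuditIfNotExists"
      }
    }
  }
}
"""

policy = json.loads(policy_json)
props = policy["properties"]
# The "Effect(s)" column in the compliance tables corresponds to the
# allowed values of the definition's "effect" parameter.
effects = props["parameters"]["effect"]["allowedValues"]
print(f'{props["displayName"]} ({props["metadata"]["version"]}): {", ".join(effects)}')
```

The same pattern applies to any of the linked definition files, since they all share the `properties`/`parameters`/`effect` layout.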
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
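All three geo-redundant backup rows describe the same audit pattern. As a rough illustration for readers building similar custom definitions, a minimal policy-rule sketch for the MariaDB case might look like the following; the `storageProfile.geoRedundantBackup` alias is inferred from the policy description, not confirmed by this changelog:

```json
{
  "mode": "Indexed",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.DBforMariaDB/servers" },
        {
          "field": "Microsoft.DBforMariaDB/servers/storageProfile.geoRedundantBackup",
          "notEquals": "Enabled"
        }
      ]
    },
    "then": { "effect": "audit" }
  }
}
```

Because geo-redundant backup storage can only be configured at server creation, the built-in definitions offer only the Audit and Disabled effects.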
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 04/14/2021 Last updated : 04/21/2021
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 04/14/2021 Last updated : 04/21/2021
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
[!INCLUDE [azure-policy-reference-policies-automation](../../../../includes/policy/reference/bycat/policies-automation.md)]
+## Azure Active Directory
+
+## Azure Data Explorer
+
+[!INCLUDE [azure-policy-reference-policies-azure-data-explorer](../../../../includes/policy/reference/bycat/policies-azure-data-explorer.md)]
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/14/2021 Last updated : 04/21/2021
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/14/2021 Last updated : 04/21/2021
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/14/2021 Last updated : 04/21/2021
This built-in initiative is deployed as part of the
|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific public IP addresses or address ranges. If your registry doesn't have an IP/firewall rule or a configured virtual network, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and here [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
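The newly added "Azure Key Vault should disable public network access" definition supports Audit, Deny, and Disabled effects. As a hedged sketch of what such a rule plausibly looks like, assuming a `publicNetworkAccess` property alias on `Microsoft.KeyVault/vaults` and a parameterized effect (neither is confirmed by this changelog):

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.KeyVault/vaults" },
      { "field": "Microsoft.KeyVault/vaults/publicNetworkAccess", "notEquals": "Disabled" }
    ]
  },
  "then": { "effect": "[parameters('effect')]" }
}
```

With the effect parameter set to Deny, creating or updating a vault that leaves public network access enabled would be blocked rather than merely flagged.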
This built-in initiative is deployed as part of the
|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/HostNetworkPorts.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
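The Azure SQL public-network-access definition follows the same audit/deny pattern as the other network restriction policies in this initiative. A rough sketch of its rule, assuming the `Microsoft.Sql/servers/publicNetworkAccess` alias (an assumption, not taken from this table):

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Sql/servers" },
      { "field": "Microsoft.Sql/servers/publicNetworkAccess", "notEquals": "Disabled" }
    ]
  },
  "then": { "effect": "[parameters('effect')]" }
}
```

Note the audit targets the logical server rather than individual databases, matching how the property is configured in Azure SQL.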
This built-in initiative is deployed as part of the
|||||
|[API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7ddfbdc-1260-477d-91fd-98bd9be789a6) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceApiApp_AuditHTTP_Audit.json) |
|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific public IP addresses or address ranges. If your registry doesn't have an IP/firewall rule or a configured virtual network, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and here [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
This built-in initiative is deployed as part of the
|[CORS should not allow every resource to access your Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your web application. Allow only required domains to interact with your web app. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/HostNetworkPorts.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
This built-in initiative is deployed as part of the
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
-|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. TLS 1.3 is faster and more secure than the earlier versions: TLS 1.0-1.2 and SSL 2-3, which are all considered legacy protocols. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
### Verify and control/limit connections to and use of external information systems.
This built-in initiative is deployed as part of the
|[Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) |
This built-in initiative is deployed as part of the
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. TLS 1.3 is faster and more secure than the earlier versions: TLS 1.0-1.2 and SSL 2-3, which are all considered legacy protocols. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
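Every portal link in these tables follows the same pattern: the `PolicyDetailBlade` URL with the definition's full resource ID percent-encoded into the `definitionId` segment. As a small sketch (the helper function name is our own, not an Azure API), the links can be reconstructed from just the definition GUID:

```python
from urllib.parse import quote

def policy_portal_link(definition_guid: str) -> str:
    """Build the Azure portal deep link for a built-in policy definition.

    The tables in this article link each definition through the
    PolicyDetailBlade, which takes the URL-encoded full resource ID
    of the definition.
    """
    resource_id = (
        "/providers/Microsoft.Authorization/policyDefinitions/"
        + definition_guid
    )
    # safe="" forces the slashes to be percent-encoded as %2F,
    # matching the links used throughout this article.
    return (
        "https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade"
        "/definitionId/" + quote(resource_id, safe="")
    )

# GUID of the "Windows web servers should be configured to use secure
# communication protocols" definition from the row above.
print(policy_portal_link("5752e6d6-1206-46d8-8ab1-ecc2f71a8112"))
```

The result reproduces the link used in the table row above, which is a convenient way to spot-check a GUID against the portal.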
## Incident Response
This built-in initiative is deployed as part of the
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](https://aka.ms/akvprivatelink). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Deploy Advanced Threat Protection for Cosmos DB Accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5f04e03-92a3-4b09-9410-2cc5e5047656) |This policy enables Advanced Threat Protection across Cosmos DB accounts. |DeployIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/CosmosDbAdvancedThreatProtection_Deploy.json) |
|[Deploy Advanced Threat Protection on Storage Accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F361c2074-3595-4e5d-8cab-4f21dffc835c) |This policy enables Advanced Threat Protection on Storage Accounts. |DeployIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAdvancedThreatProtection_Deploy.json) |
|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
-|[Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault. |Audit, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
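The Effect(s) column in each table mirrors the `allowedValues` of the `effect` parameter in the linked policy-definition JSON; attempting to assign a definition with an effect outside that list fails. As a minimal sketch (the JSON fragment below is hypothetical and simplified, modeled on the schema used in the Azure/azure-policy GitHub repo, and `validate_effect` is our own illustrative helper, not an Azure SDK function):

```python
# Hypothetical, simplified policy-definition fragment. Field names
# ("parameters", "effect", "allowedValues", "defaultValue") follow the
# azure-policy repo schema; the fragment itself is illustrative.
definition = {
    "displayName": "Azure Backup should be enabled for Virtual Machines",
    "parameters": {
        "effect": {
            "type": "String",
            "allowedValues": ["AuditIfNotExists", "Disabled"],
            "defaultValue": "AuditIfNotExists",
        }
    },
}

def validate_effect(defn: dict, requested: str) -> str:
    """Return the requested effect if the definition allows it,
    otherwise raise ValueError before any assignment is attempted."""
    allowed = defn["parameters"]["effect"]["allowedValues"]
    if requested not in allowed:
        raise ValueError(f"{requested!r} not in {allowed}")
    return requested

print(validate_effect(definition, "Disabled"))
```

Here requesting "Deny" would raise, because this definition only supports AuditIfNotExists and Disabled, matching its Effect(s) entry in the table above.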
This built-in initiative is deployed as part of the
|[Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
|[Windows machines should meet requirements for 'Security Options - Network Security'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1221c620-d201-468c-81e7-2817e6107e84) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Security' for including Local System behavior, PKU2U, LAN Manager, LDAP client, and NTLM SSP. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkSecurity_AINE.json) |
-|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. TLS 1.3 is faster and more secure than the earlier versions: TLS 1.0-1.2 and SSL 2-3, which are all considered legacy protocols. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
+|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |