Updates from: 04/08/2021 03:07:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/force-password-reset.md
To enable the **Forced password reset** setting in a sign-up or sign-in user flo
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework**. 1. Select the `B2C_1A_signup_signin_Custom_ForcePasswordReset` policy to open it.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. Sign in with the user account for which you reset the password. 1. You now must change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-amazon.md
You can define an Amazon account as a claims provider by adding it to the **Clai
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Amazon** to sign in with Amazon account.
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
You can define an Apple ID as a claims provider by adding it to the **ClaimsProv
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Apple** to sign in with Apple ID.
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
You can define Azure AD B2C as a claims provider by adding Azure AD B2C to the *
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Fabrikam** to sign in with the other Azure AD B2C tenant.
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Perform these steps for each Azure AD tenant that should be used to sign in:
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Common AAD** to sign in with Azure AD account.
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
To get a token from the Azure AD endpoint, you need to define the protocols that
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Contoso Employee** to sign in with Azure AD Contoso account.
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml.md
Open a browser and navigate to the URL. Make sure you type the correct URL and t
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework** 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Contoso** to sign in with Contoso account.
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
The GitHub technical profile requires the **CreateIssuerUserId** claim transform
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **GitHub** to sign in with GitHub account.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-google.md
You can define a Google account as a claims provider by adding it to the **Claim
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Google** to sign in with Google account.
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-id-me.md
Next, you need a claims transformation to create the displayName claim. Add the
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **ID.me** to sign in with ID.me account.
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
Add the **BuildingBlocks** element near the top of the *TrustFrameworkExtensions
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **LinkedIn** to sign in with LinkedIn account.
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
You've now configured your policy so that Azure AD B2C knows how to communicate
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Microsoft** to sign in with Microsoft account.
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-qq.md
You can define a QQ account as a claims provider by adding it to the **ClaimsPro
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **QQ** to sign in with QQ account.
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce-saml.md
You can define a Salesforce account as a claims provider by adding it to the **C
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Salesforce** to sign in with Salesforce account.
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
You can define a Salesforce account as a claims provider by adding it to the **C
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Salesforce** to sign in with Salesforce account.
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
Previously updated : 03/17/2021 Last updated : 04/06/2021
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. Under **Authentication settings**, select **Edit** 1. Select **Enable 3-legged OAuth** checkbox. 1. Select **Request email address from users** checkbox.
- 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow id even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
- `your-tenant-name` with the name of your tenant. - `your-domain-name` with your custom domain. - `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
You can define a Twitter account as a claims provider by adding it to the **Clai
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Twitter** to sign in with Twitter account.
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-wechat.md
You can define a WeChat account as a claims provider by adding it to the **Claim
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **WeChat** to sign in with WeChat account.
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-weibo.md
The GitHub technical profile requires the **CreateIssuerUserId** claim transform
## Test your custom policy 1. Select your relying party policy, for example `B2C_1A_signup_signin`.
-1. For **Application**, select a web application that you [previously registered](troubleshoot-custom-policies.md#troubleshoot-the-runtime). The **Reply URL** should show `https://jwt.ms`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button. 1. From the sign-up or sign-in page, select **Weibo** to sign in with Weibo account.
active-directory-b2c Troubleshoot Custom Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-custom-policies.md
Previously updated : 08/13/2019 Last updated : 04/06/2021
-# Troubleshoot Azure AD B2C custom policies and Identity Experience Framework
+# Troubleshoot Azure AD B2C custom policies
-If you use Azure Active Directory B2C (Azure AD B2C) custom policies, you might experience challenges setting up the Identity Experience Framework in its policy language XML format. Learning to write custom policies can be like learning a new language. In this article, we describe some tools and tips that can help you discover and resolve issues.
+If you use Azure Active Directory B2C (Azure AD B2C) [custom policies](custom-policy-overview.md), you might experience challenges with the policy language XML format or with runtime issues. This article describes some tools and tips that can help you discover and resolve issues.
This article focuses on troubleshooting your Azure AD B2C custom policy configuration. It doesn't address the relying party application or its identity library.
-## XML editing
+## Azure AD B2C correlation ID overview
+
+The Azure AD B2C correlation ID is a unique identifier that is attached to authorization requests and passes through all the orchestration steps a user goes through. With the correlation ID, you can:
+
+- Identify sign-in activity in your application and track the performance of your policy.
+- Find the sign-in request's Azure Application Insights logs.
+- Pass the correlation ID to your REST API and use it to identify the sign-in flow (see the sketch after this list).
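+
+For example, to support the last bullet, a REST API technical profile can receive the correlation ID through the `{Context:CorrelationId}` claim resolver. The following is a minimal sketch: the profile name, service URL, and metadata values are illustrative, and it assumes the `correlationId` claim type declared later in this article.
+
+```xml
+<!-- Hypothetical REST technical profile; only the relevant parts are shown -->
+<TechnicalProfile Id="REST-SignInStatus">
+  <DisplayName>Report sign-in status</DisplayName>
+  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+  <Metadata>
+    <Item Key="ServiceUrl">https://your-api.example.com/status</Item>
+    <Item Key="SendClaimsIn">Body</Item>
+    <Item Key="AuthenticationType">None</Item>
+  </Metadata>
+  <InputClaims>
+    <!-- {Context:CorrelationId} resolves to the request's correlation ID at runtime -->
+    <InputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" AlwaysUseDefaultValue="true" />
+  </InputClaims>
+  <!-- Output claims omitted for brevity -->
+</TechnicalProfile>
+```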
+
+The correlation ID changes every time a new session is established. When debugging your policies, make sure to close existing browser tabs or open a new in-private browser window.
+
+### Get the Azure AD B2C correlation ID
+
+You can find the correlation ID on the Azure AD B2C sign-up or sign-in page. In your browser, select **view source**. The correlation ID appears as a comment at the top of the page.
+
+![Screenshot of Azure AD B2C sign-in page view source.](./media/troubleshoot-custom-policies/find-azure-ad-b2c-correlation-id.png)
+
+Copy the correlation ID, and then continue the sign-in flow. Use the correlation ID to observe the sign-in behavior. For more information, see [Troubleshooting with Application Insights](#troubleshooting-with-application-insights).
+
+### Echo the Azure AD B2C correlation ID
+
+You can include the correlation ID in your Azure AD B2C tokens. To include the correlation ID:
+
+1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
+1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
+1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it.
+1. Add the `correlationId` claim to the **ClaimsSchema** element.
+
+ ```xml
+ <!--
+ <BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="correlationId">
+ <DisplayName>correlation ID</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+ </BuildingBlocks>-->
+ ```
+
+1. Open the relying party file of your policy. For example, <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em>. The output claim will be added to the token after a successful user journey and sent to the application. Modify the technical profile element in the relying party section to add `correlationId` as an output claim.
+
+ ```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ ...
+ <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+ ```
++
+## Troubleshooting with Application Insights
+
+To diagnose problems with your custom policies, use [Application Insights](troubleshoot-with-application-insights.md). Application Insights traces the activity of your custom policy user journey. It provides a way to diagnose exceptions and observe the exchange of claims between Azure AD B2C and the various claims providers that are defined by technical profiles, such as identity providers, API-based services, the Azure AD B2C user directory, and other services.
+
+We recommend installing the [Azure AD B2C extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for [VS Code](https://code.visualstudio.com/). With the Azure AD B2C extension, the logs are organized for you by policy name, correlation ID (Application Insights presents the first digit of the correlation ID), and the log timestamp. This feature helps you find the relevant log based on the local timestamp and see the user journey as executed by Azure AD B2C.
+
+> [!NOTE]
+> The community has developed the Visual Studio Code extension for Azure AD B2C to help identity developers. The extension is not supported by Microsoft and is made available strictly as-is.
+
+A single sign-in flow can issue more than one Azure Application Insights trace. In the following screenshot, the *B2C_1A_signup_signin* policy has three logs. Each log represents part of the sign-in flow.
+
+![Screenshot of Azure AD B2C extension for VS Code with Azure Application Insights trace.](./media/troubleshoot-custom-policies/vscode-extension-application-insights-trace.png)
+
+### Application Insights trace log details
+
+When you select an Azure Application Insights trace, the extension opens the **Application Insights details** page with the following information:
+
+- **Application Insights** - Generic information about the trace log, including the policy name, correlation ID, Azure Application Insights trace ID, and trace timestamp.
+- **Technical profiles** - List of technical profiles that appear in the trace log.
+- **Claims** - Alphabetical list of claims that appear in the trace log and their values. If a claim appears in the trace log multiple times with different values, a `=>` sign designates the newest value. You can review these claims to determine if expected claim values are set correctly. For example, if you have a precondition that checks a claim value, the claims section can help you determine why an expected flow behaves differently.
+- **Claims transformation** - List of claims transformations that appear in the trace log. Each claims transformation contains the input claims, input parameters, and output claims. The claims transformation section gives insight into the data sent in and the outcome of the claims transformation.
+- **Tokens** - List of tokens that appear in the trace log. The tokens include the underlying federated OAuth and OpenID Connect identity provider tokens. The federated identity provider's token shows how the identity provider returns claims to Azure AD B2C, so you can map the identity provider technical profile output claims (see the sketch after this list).
+- **Exceptions** - List of exceptions or fatal errors that appear in the trace log.
+- **Application Insights JSON** - The raw data returned from Application Insights.
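+
+For example, if the trace shows the federated identity provider returning claims named `given_name` and `family_name`, you can map them to Azure AD B2C claim types in the identity provider's technical profile by using `PartnerClaimType`. A minimal sketch; the partner claim names are illustrative:
+
+```xml
+<!-- Map the identity provider's claim names to Azure AD B2C claim types -->
+<OutputClaims>
+  <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+  <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name" />
+</OutputClaims>
+```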
+
+## Troubleshoot JWT tokens
+
+For JWT validation and debugging purposes, you can decode JWTs using a site like [https://jwt.ms](https://jwt.ms). Create a test application that can redirect to `https://jwt.ms` for token inspection. If you haven't already done so, [register a web application](tutorial-register-applications.md) and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant).
+
+![Screenshot of JWT token preview.](./media/troubleshoot-custom-policies/jwt-token-preview.png)
+
+Use **Run now** and `https://jwt.ms` to test your policies independently of your web or mobile application. This website acts like a relying party application. It displays the contents of the JSON web token (JWT) that is generated by your Azure AD B2C policy.
+
+## Troubleshoot SAML protocol
+
+To help configure and debug the integration with your service provider, you can use a browser extension for the SAML protocol, for example, [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for Firefox, or [Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
+
+The following screenshot demonstrates how the SAML DevTools extension presents the SAML request Azure AD B2C sends to the identity provider, and the SAML response.
+
+![Screenshot of SAML protocol trace log.](./media/troubleshoot-custom-policies/saml-protocol-trace.png)
+
+Using these tools, you can check the integration between your application and Azure AD B2C. For example:
+
+- Check whether the SAML request contains a signature and determine what algorithm is used to sign the authorization request.
+- Check if Azure AD B2C returns an error message.
+- Check if the assertion section is encrypted.
+- Get the names of the claims returned by the identity provider.
+
+You can also trace the exchange of messages between your client browser and Azure AD B2C, with [Fiddler](https://www.telerik.com/fiddler). It can help you get an indication of where your user journey is failing in your orchestration steps.
+
+## Troubleshoot policy validity
+
+After you finish developing your policy, you upload it to Azure AD B2C. There might be some issues with your policy. Use the following methods to check your policy's integrity and validity.
The most common error in setting up custom policies is improperly formatted XML. A good XML editor is nearly essential. It displays XML natively, color-codes content, pre-fills common terms, keeps XML elements indexed, and can validate against an XML schema.
-Two of our favorite editors are [Visual Studio Code](https://code.visualstudio.com/) and [Notepad++](https://notepad-plus-plus.org/).
+We recommend using [Visual Studio Code](https://code.visualstudio.com/). Then install an XML extension, such as [XML Language Support by Red Hat](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-xml). The XML extension lets you validate the XML schema before you upload your XML file, using the custom policy [XSD](https://raw.githubusercontent.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/master/TrustFrameworkPolicy_0.3.0.0.xsd) schema definition.
+
+You can use the XML file association strategy to bind the XML file to the XSD by adding the following settings to your VS Code `settings.json` file. To do so:
+
+1. In VS Code, open **Settings**. For more information, see [User and Workspace Settings](https://code.visualstudio.com/docs/getstarted/settings).
+1. Search for **fileAssociations**, then under **Extensions**, select **XML**.
+1. Select **Edit in settings.json**.
+
+ ![Screenshot of VS Code XML schema validation.](./media/troubleshoot-custom-policies/xml-validation.png)
+1. In the settings.json, add the following JSON code:
+
+ ```json
+ "xml.fileAssociations": [
+ {
+ "pattern": "**.xml",
+ "systemId": "https://raw.githubusercontent.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/master/TrustFrameworkPolicy_0.3.0.0.xsd"
+ }
+ ]
+ ```
+
+The following example shows an XML validation error. When you move your mouse over the element name, the extension lists the expected elements.
-XML schema validation identifies errors before you upload your XML file. In the root folder of the [starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack), get the XML schema definition file *TrustFrameworkPolicy_0.3.0.0.xsd*. To find out how to use the XSD schema file for validation in your editor, look for *XML tools* and *XML validation* or similar in the editor's documentation.
+![Screenshot of VS Code XML schema validation error indicator.](./media/troubleshoot-custom-policies/xml-validation-error.png)
-You might find a review of XML rules helpful. Azure AD B2C rejects any XML formatting errors that it detects. Occasionally, incorrectly formatted XML might cause error messages that are misleading.
+In the following case, the `DisplayName` element is valid, but it's in the wrong order: `DisplayName` should appear before the `Protocol` element. To fix the issue, move your mouse over the `DisplayName` element to see the correct order of the elements.
+
+![Screenshot of VS Code XML schema validation order error.](./media/troubleshoot-custom-policies/xml-validation-order-error.png)
## Upload policies and policy validation Validation of the XML policy file is performed automatically on upload. Most errors cause the upload to fail. Validation includes the policy file that you are uploading. It also includes the chain of files the upload file refers to (the relying party policy file, the extensions file, and the base file).
-Common validation errors include the following:
+> [!TIP]
+> Azure AD B2C runs additional validation for the relying party policy. If you have an issue with your policy, even if you edited only the extension policy, it's a good practice to upload the relying party policy as well.
-> Error snippet: `...makes a reference to ClaimType with id "displayName" but neither the policy nor any of its base policies contain such an element`
+This section contains the common validation errors and probable solutions.
-* The ClaimType value might be misspelled, or does not exist in the schema.
-* ClaimType values must be defined in at least one of the files in the policy.
- For example: `<ClaimType Id="issuerUserId">`
-* If ClaimType is defined in the extensions file, but it's also used in a TechnicalProfile value in the base file, uploading the base file results in an error.
+### Schema validation error found ...has invalid child element '{name}'
-> Error snippet: `...makes a reference to a ClaimsTransformation with id...`
+Your policy contains an invalid XML element, or the XML element is valid but appears in the wrong order. To fix this type of error, see the [Troubleshoot policy validity](#troubleshoot-policy-validity) section.
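+
+For example, the following hypothetical relying party technical profile declares `Protocol` before `DisplayName`. The schema expects `DisplayName` first, so validation reports an invalid child element at that position; reordering the two elements fixes it.
+
+```xml
+<!-- Invalid order: DisplayName must appear before Protocol -->
+<TechnicalProfile Id="PolicyProfile">
+  <Protocol Name="OpenIdConnect" />
+  <DisplayName>PolicyProfile</DisplayName>
+  ...
+</TechnicalProfile>
+```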
-* The causes for this error can be the same as for the ClaimType error.
+### There is a duplicate key sequence '{number}'
-> Error snippet: `Reason: User is currently logged as a user of 'yourtenant.onmicrosoft.com' tenant. In order to manage 'yourtenant.onmicrosoft.com', please login as a user of 'yourtenant.onmicrosoft.com' tenant`
+A user [journey](userjourneys.md) or [sub journey](subjourneys.md) consists of an ordered list of orchestration steps that are executed in sequence. After you change your journey, renumber the steps sequentially without skipping any integers from 1 to N, as shown in the sketch after the following tip.
-* Check that the TenantId value in the `<TrustFrameworkPolicy\>` and `<BasePolicy\>` elements match your target Azure AD B2C tenant.
+> [!TIP]
+> You can use the `(Shift+Ctrl+r)` command of the [Azure AD B2C extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for [VS Code](https://code.visualstudio.com/) to renumber all of the orchestration steps in your policy's user journeys and sub journeys.
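+
+A minimal sketch of a correctly renumbered journey, assuming it has been trimmed to three steps (step contents elided):
+
+```xml
+<!-- Order values must run 1..N with no gaps or duplicates -->
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+  ...
+</OrchestrationStep>
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+  ...
+</OrchestrationStep>
+<OrchestrationStep Order="3" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+```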
-## Troubleshoot the runtime
+### ...was expected to have step with order "{number}" but it was not found...
-* Use **Run now** and `https://jwt.ms` to test your policies independently of your web or mobile application. This website acts like a relying party application. It displays the contents of the JSON web token (JWT) that is generated by your Azure AD B2C policy.
+Check the previous error.
- To create a test application that can redirect to `https://jwt.ms` for token inspection:
+### Orchestration step order "{number}" in user journey "{name}" ...is followed by a claims provider selection step and must be a claims exchange, but it is of type...
- [!INCLUDE [active-directory-b2c-appreg-idp](../../includes/active-directory-b2c-appreg-idp.md)]
+Orchestration steps of type `ClaimsProviderSelection` and `CombinedSignInAndSignUp` contain a list of options a user can choose from. They must be followed by a step of type `ClaimsExchange` with one or more claims exchanges.
-* To trace the exchange of messages between your client browser and Azure AD B2C, use [Fiddler](https://www.telerik.com/fiddler). It can help you get an indication of where your user journey is failing in your orchestration steps.
+The following orchestration steps cause this type of error. The second orchestration step must be of type `ClaimsExchange`, not `ClaimsProviderSelection`; a possible fix is sketched after the example.
-* In **Development mode**, use [Application Insights](troubleshoot-with-application-insights.md) to trace the activity of your Identity Experience Framework user journey. In **Development mode**, you can observe the exchange of claims between the Identity Experience Framework and the various claims providers that are defined by technical profiles, such as identity providers, API-based services, the Azure AD B2C user directory, and other services, like Azure AD Multi-Factor Authentication.
+```xml
+<!--
+<UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>-->
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection TargetClaimsExchangeId="FacebookExchange"/>
+ <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange"/>
+ </ClaimsProviderSelections>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="2" Type="ClaimsProviderSelection">
+ ...
+ </OrchestrationStep>
+ ...
+ <!--
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys> -->
+```
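+
+One possible fix, assuming the journey should continue with the Facebook exchange declared in step 1 (the `Facebook-OAUTH` technical profile comes from the starter pack), is to make the second step a `ClaimsExchange` that references that exchange:
+
+```xml
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+  <ClaimsExchanges>
+    <!-- The Id matches the TargetClaimsExchangeId declared in step 1 -->
+    <ClaimsExchange Id="FacebookExchange" TechnicalProfileReferenceId="Facebook-OAUTH"/>
+  </ClaimsExchanges>
+</OrchestrationStep>
+```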
-## Recommended practices
+### ...step {number} with 2 claims exchanges. It must be preceded by a claims provider selection in order to determine which claims exchange can be used
-**Keep multiple versions of your scenarios. Group them in a project with your application.** The base, extensions, and relying party files are directly dependent on each other. Save them as a group. As new features are added to your policies, keep separate working versions. Stage working versions in your own file system with the application code they interact with. Your applications might invoke many different relying party policies in a tenant. They might become dependent on the claims that they expect from your Azure AD B2C policies.
+An orchestration step of type `ClaimsExchange` must have a single `ClaimsExchange`, unless the preceding step is of type `ClaimsProviderSelection` or `CombinedSignInAndSignUp`. The following orchestration steps cause this type of error. The sixth step contains two claims exchanges.
-**Develop and test technical profiles with known user journeys.** Use tested starter pack policies to set up your technical profiles. Test them separately before you incorporate them into your own user journeys.
+```xml
+<!--
+<UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>-->
+ ...
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="6" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="Call-REST-First-API" TechnicalProfileReferenceId="Call-REST-First-API"/>
+ <ClaimsExchange Id="Call-REST-Second-API" TechnicalProfileReferenceId="Call-REST-Second-API"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ...
+ <!--
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys> -->
+```
-**Develop and test user journeys with tested technical profiles.** Change the orchestration steps of a user journey incrementally. Progressively build your intended scenarios.
+To fix this type of error, use two orchestration steps, each with one claims exchange.
-## Next steps
+```xml
+<!--
+<UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>-->
+ ...
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="6" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="Call-REST-First-API" TechnicalProfileReferenceId="Call-REST-First-API"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="7" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="Call-REST-Second-API" TechnicalProfileReferenceId="Call-REST-Second-API"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ...
+ <!--
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys> -->
+```
-Available on GitHub, download the [active-directory-b2c-custom-policy-starterpack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) .zip archive. You can also clone the repository:
+### There is a duplicate key sequence '{name}'
+A journey has multiple `ClaimsExchange` elements with the same `Id`. The following steps cause this type of error. The ID *AADUserWrite* appears twice in the user journey.
+
+```xml
+<!--
+<UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>-->
+ ...
+ <OrchestrationStep Order="7" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="8" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="Call-REST-API"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ...
+ <!--
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys> -->
+```
+
+To fix this type of error, change the claims exchange ID in the eighth orchestration step to a unique name, such as *Call-REST-API*.
+
+```xml
+<!--
+<UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>-->
+ ...
+ <OrchestrationStep Order="7" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ <OrchestrationStep Order="8" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="Call-REST-API" TechnicalProfileReferenceId="Call-REST-API"/>
+ </ClaimsExchanges>
+ </OrchestrationStep>
+ ...
+ <!--
+ </OrchestrationSteps>
+ </UserJourney>
+</UserJourneys> -->
+```
+
+### ...makes a reference to ClaimType with id "{claim name}" but neither the policy nor any of its base policies contain such an element
+
+This type of error happens when your policy makes a reference to a claim that is not declared in the [claims schema](claimsschema.md). Claims must be defined in at least one of the files in the policy.
+
+For example, a technical profile references the *schoolId* output claim, but *schoolId* is never declared in the policy or in an ancestor policy.
+
+```xml
+<OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="schoolId" />
+ ...
+</OutputClaims>
```
-git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
+
+To fix this type of error, check whether the `ClaimTypeReferenceId` value is misspelled or missing from the schema. If the claim is defined in the extensions policy but is also used in the base policy, make sure the claim is defined in the policy in which it's used, or in a higher-level (base) policy.
+
+Adding the claim to the claims schema solves this type of error.
+
+```xml
+<!--
+<BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="schoolId">
+ <DisplayName>School name</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Enter your school name</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+</BuildingBlocks> -->
+```
+### ...makes a reference to a ClaimsTransformation with ID...
+
+The cause for this error is similar to the one for the claim error. Check the previous error.
+
+### User is currently logged as a user of 'yourtenant.onmicrosoft.com' tenant...
+
+You're signed in with an account from a tenant that is different from the tenant of the policy you're trying to upload. For example, you sign in with admin@contoso.onmicrosoft.com, while your policy's `TenantId` is set to `fabrikam.onmicrosoft.com`.
+
+```xml
+<TrustFrameworkPolicy ...
+ TenantId="fabrikam.onmicrosoft.com"
+ PolicyId="B2C_1A_signup_signin"
+ PublicPolicyUri="http://fabrikam.onmicrosoft.com/B2C_1A_signup_signin">
+
+ <BasePolicy>
+ <TenantId>fabrikam.onmicrosoft.com</TenantId>
+ <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
+ </BasePolicy>
+ ...
+</TrustFrameworkPolicy>
+```
+
+- Check that the `TenantId` values in the `<TrustFrameworkPolicy>` and `<BasePolicy>` elements match your target Azure AD B2C tenant.
+
+### Claim type "{name}" is the output claim of the relying party's technical profile, but it is not an output claim in any of the steps of user journey...
+
+In a relying party policy, you added an output claim, but the output claim is not an output claim in any of the steps of the user journey. Azure AD B2C can't read the claim value from the claims bag.
+
+In the following example, the *schoolId* claim is an output claim of the relying party's technical profile, but it is not an output claim in any of the steps of the *SignUpOrSignIn* user journey.
+
+```xml
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="schoolId" />
+ ...
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+</RelyingParty>
+```
+
+To fix this type of error, make sure the output claim appears in the output claims collection of at least one orchestration step's technical profile (a sketch follows the snippet below). If your user journey can't output the claim, set a default value, such as an empty string, in the relying party technical profile.
+
+```xml
+<OutputClaim ClaimTypeReferenceId="schoolId" DefaultValue="" />
+```
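+
+For example, one way to surface *schoolId*, sketched here on the assumption that it's stored as the custom attribute `extension_schoolId` and that the journey calls the starter pack's `AAD-UserReadUsingObjectId` profile, is to override that technical profile in the extensions file and add the output claim:
+
+```xml
+<ClaimsProvider>
+  <DisplayName>Azure Active Directory</DisplayName>
+  <TechnicalProfiles>
+    <!-- Override the starter pack profile so the claim lands in the claims bag during the journey -->
+    <TechnicalProfile Id="AAD-UserReadUsingObjectId">
+      <OutputClaims>
+        <OutputClaim ClaimTypeReferenceId="schoolId" PartnerClaimType="extension_schoolId" />
+      </OutputClaims>
+    </TechnicalProfile>
+  </TechnicalProfiles>
+</ClaimsProvider>
+```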
+
+### Input string was not in a correct format
+
+You set a value of one type to a claim of another type. For example, you define an integer claim.
+
+```xml
+<!--
+<BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="age">
+ <DisplayName>Age</DisplayName>
+ <DataType>int</DataType>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+</BuildingBlocks> -->
+```
+
+Then you try to set a string value:
+
+```xml
+<OutputClaim ClaimTypeReferenceId="age" DefaultValue="ABCD" />
+```
+
+To fix this type of error, make sure you set the correct value, such as `DefaultValue="0"`.
++
+### Tenant "{name}" already has a policy with id "{name}". Another policy with the same id cannot be stored
+
+You try to upload a policy to your tenant, but a policy with the same name is already uploaded to your tenant.
+
+To fix this type of error, when you upload the policy, select the **Overwrite the custom policy if it already exists** checkbox.
+
+![Screenshot that demonstrates how to overwrite the custom policy if it already exists.](./media/troubleshoot-custom-policies/overwrite-custom-policy-if-exists.png)
++
+## Next steps
+
+- Learn how to [collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md).
+
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-with-application-insights.md
The entries may be long. Export to CSV for a closer look.
For more information about querying, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
+## See the logs in VS Code extension
+
+We recommend that you install the [Azure AD B2C extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for [VS Code](https://code.visualstudio.com/). With the Azure AD B2C extension, the logs are organized for you by policy name, correlation ID (Application Insights presents the first digit of the correlation ID), and the log timestamp. This feature helps you find the relevant log based on the local timestamp and see the user journey as executed by Azure AD B2C.
+
+> [!NOTE]
+> The community has developed the VS Code extension for Azure AD B2C to help identity developers. The extension is not supported by Microsoft and is made available strictly as-is.
+
+### Set Application Insights API access
+
+After you set up Application Insights and configure the custom policy, you need to get your Application Insights **API ID** and create an **API key**. Both the API ID and API key are used by the Azure AD B2C extension to read the Application Insights events (telemetry). Manage your API keys like passwords, and keep them secret.
+
+> [!NOTE]
+> The Application Insights instrumentation key that you created earlier is used by Azure AD B2C to send telemetry to Application Insights. You use the instrumentation key only in your Azure AD B2C policy, not in the VS Code extension.
+
+To get the Application Insights ID and key:
+
+1. In Azure portal, open the Application Insights resource for your application.
+1. Select **Settings**, then select **API Access**.
+1. Copy the **Application ID**.
+1. Select **Create API Key**.
+1. Check the **Read telemetry** box.
+1. Copy the **Key** before closing the Create API key blade and save it somewhere secure. If you lose the key, you'll need to create another.
+
+ ![Screenshot that demonstrates how to create API access key.](./media/troubleshoot-with-application-insights/application-insights-api-access.png)
+
+### Set up Azure AD B2C VS Code extension
+
+Now that you have the Azure Application Insights API ID and key, you can configure the VS Code extension to read the logs. The Azure AD B2C VS Code extension provides two scopes for settings:
+
+- **User Global Settings** - Settings that apply globally to any instance of VS Code you open.
+- **Workspace Settings** - Settings stored inside your workspace that apply only when the workspace is opened (using VS Code **open folder**).
+
+1. From the **Azure AD B2C Trace** explorer, click on the **Settings** icon.
+
+ ![Screenshot that demonstrates select the application insights settings.](./media/troubleshoot-with-application-insights/app-insights-settings.png)
+
+1. Provide the Azure Application Insights **ID** and **key**.
+1. Click **Save**
+
+After you save the settings, the Application Insights logs appear in the **Azure AD B2C Trace (App Insights)** window.
+
+![Screenshot of the Azure AD B2C extension for VS Code, presenting the Azure Application Insights trace.](./media/troubleshoot-with-application-insights/vscode-extension-application-insights-trace.png)
++ ## Configure Application Insights in Production To improve the performance of your production environment and the user experience, it's important to configure your policy to ignore unimportant messages. Use the following configuration to send only critical error messages to your Application Insights.
To improve your production environment performance and better user experience, i
1. Upload and test your policy.
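+
+A minimal sketch of the kind of relying party configuration this refers to, assuming the `JourneyInsights` element used when you set up Application Insights for your policy; the attribute values are illustrative:
+
+```xml
+<RelyingParty>
+  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+  <UserJourneyBehaviors>
+    <!-- DeveloperMode="false" and ClientEnabled="false" reduce the telemetry sent in production -->
+    <JourneyInsights TelemetryEngine="ApplicationInsights" InstrumentationKey="your-instrumentation-key"
+                     DeveloperMode="false" ClientEnabled="false" ServerEnabled="true" TelemetryVersion="1.0.0" />
+  </UserJourneyBehaviors>
+  ...
+</RelyingParty>
+```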
-## Next steps
-The community has developed a user journey viewer to help identity developers. It reads from your Application Insights instance and provides a well-structured view of the user journey events. You obtain the source code and deploy it in your own solution.
-The user journey player is not supported by Microsoft, and is made available strictly as-is.
-
-You can find the version of the viewer that reads events from Application Insights on GitHub, here:
+## Next steps
-[Azure-Samples/active-directory-b2c-advanced-policies](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/tree/master/wingtipgamesb2c/src/WingTipUserJourneyPlayerWebApplication)
+- Learn how to [troubleshoot Azure AD B2C custom policies](troubleshoot-custom-policies.md)
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Previously updated : 08/05/2020 Last updated : 04/07/2021
Use the flags below in the tenant URL of your application in order to change the
:::image type="content" source="media/application-provisioning-config-problem-scim-compatibility/scim-flags.jpg" alt-text="SCIM flags to later behavior."::: * Use the following URL to update PATCH behavior and ensure SCIM compliance (e.g. active as boolean and proper group membership removals). This behavior is currently only available when using the flag, but will become the default behavior over the next few months. Note this preview flag currently does not work with on-demand provisioning.
- * **URL (SCIM Compliant):** AzureAdScimPatch062020
+ * **URL (SCIM Compliant):** aadOptscim062020
* **SCIM RFC references:** * https://tools.ietf.org/html/rfc7644#section-3.5.2 * **Behavior:**
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
Modify your queries and alerts for maximum effectiveness.
```kusto let today = SigninLogs-
-| where TimeGenerated > ago(1h) // Query failure rate in the last hour
-
+| where TimeGenerated > ago(1h) // Query failure rate in the last hour
| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure")- // Optionally filter by a specific application- //| where AppDisplayName == **APP NAME**- | summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h) // hourly failure rate- | project TimeGenerated, failureRate = (failure * 1.0) / ((failure + success) * 1.0)- | sort by TimeGenerated desc- | serialize rowNumber = row_number();- let yesterday = SigninLogs- | where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Query failure rate at the same time yesterday- | project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure")- // Optionally filter by a specific application- //| where AppDisplayName == **APP NAME**- | summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h) // hourly failure rate at same time yesterday- | project TimeGenerated, failureRateYesterday = (failure * 1.0) / ((failure + success) * 1.0)- | sort by TimeGenerated desc- | serialize rowNumber = row_number(); today | join (yesterday) on rowNumber // join data from same time today and yesterday- | project TimeGenerated, failureRate, failureRateYesterday- // Set threshold to be the percent difference in failure rate in the last hour as compared to the same time yesterday-
+// Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected.
+| extend day = dayofweek(now())
+| where day != time(6.00:00:00) // exclude Sat
+| where day != time(0.00:00:00) // exclude Sun
+| where day != time(1.00:00:00) // exclude Mon
| where abs(failureRate - failureRateYesterday) > 0.5 ```
The ratio at the bottom can be adjusted as necessary and represents the percent
```Kusto let today = SigninLogs // Query traffic in the last hour- | where TimeGenerated > ago(1h)- | project TimeGenerated, AppDisplayName, UserPrincipalName- // Optionally filter by AppDisplayName to scope query to a single application- //| where AppDisplayName contains "Office 365 Exchange Online"- | summarize users = dcount(UserPrincipalName) by bin(TimeGenerated, 1hr) // Count distinct users in the last hour- | sort by TimeGenerated desc- | serialize rn = row_number();- let yesterday = SigninLogs // Query traffic at the same hour yesterday- | where TimeGenerated between((ago(1h) - totimespan(1d))..(now() - totimespan(1d))) // Count distinct users in the same hour yesterday- | project TimeGenerated, AppDisplayName, UserPrincipalName- // Optionally filter by AppDisplayName to scope query to a single application- //| where AppDisplayName contains "Office 365 Exchange Online"- | summarize usersYesterday = dcount(UserPrincipalName) by bin(TimeGenerated, 1hr)- | sort by TimeGenerated desc- | serialize rn = row_number();- today | join // Join data from today and yesterday together ( yesterday ) on rn- // Calculate the difference in number of users in the last hour compared to the same time yesterday- | project TimeGenerated, users, usersYesterday, difference = abs(users - usersYesterday), max = max_of(users, usersYesterday)-
- extend ratio = (difference * 1.0) / max // Ratio is the percent difference in traffic in the last hour as compared to the same time yesterday
-
+| extend ratio = (difference * 1.0) / max // Ratio is the percent difference in traffic in the last hour as compared to the same time yesterday
// Day variable is the number of days since the previous Sunday. Optionally ignore results on Sat, Sun, and Mon because large variability in traffic is expected.- | extend day = dayofweek(now())- | where day != time(6.00:00:00) // exclude Sat- | where day != time(0.00:00:00) // exclude Sun- | where day != time(1.00:00:00) // exclude Mon- | where ratio > 0.7 // Threshold percent difference in sign-in traffic as compared to same hour yesterday ```
active-directory Service Accounts Introduction Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-introduction-azure.md
There are three types of service accounts native to Azure Active Directory: Mana
## Types of Azure Active Directory service accounts
-For services hosted in Azure, we recommend using a managed identity if possible, and a service principal if not. Managed identities can't be used for services hosted outside of Azure. In that case, we recommend a service principal. If you can use a managed identity or a service principal, do so. We recommend that you not use an Azure Active Directory user account as a service principal. See the following table for a summary.
+For services hosted in Azure, we recommend using a managed identity if possible, and a service principal if not. Managed identities can't be used for services hosted outside of Azure. In that case, we recommend a service principal. If you can use a managed identity or a service principal, do so. We recommend that you not use an Azure Active Directory user account as a service account. See the following table for a summary.
| Service hosting| Managed identity| Service principal| Azure user account |
A service principal is the local representation of an application object in a si
There are two mechanisms for authentication using service principals: client certificates and client secrets. Certificates are more secure: use client certificates if possible. Unlike client secrets, client certificates cannot accidentally be embedded in code.
-For information on securing service principals, see Securing service principals.
+For information on securing service principals, see [Securing service principals](service-accounts-principal.md).
## Next steps
For more information on securing Azure service accounts, see:
[Securing service principals](service-accounts-principal.md)
-[Governing Azure service accounts](service-accounts-governing-azure.md)
+[Governing Azure service accounts](service-accounts-governing-azure.md)
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
ms.devlang: na
na Previously updated : 12/10/2020 Last updated : 12/16/2020 #Customer intent: As a developer or administrator I want to configure a Linux virtual machine to retrieve a secret from key vault using a managed identity and have a simple way to validate my configuration before using it for development
The managed identity used by the virtual machine needs to be granted access to r
## Access data To complete these steps, you need an SSH client.  If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
- 
+
+>[!IMPORTANT]
+> All Azure SDKs support the Azure.Identity library that makes it easy to acquire Azure AD tokens to access target services. Learn more about [Azure SDKs](https://azure.microsoft.com/downloads/) and leverage the Azure.Identity library.
+> - [.NET](https://docs.microsoft.com/dotnet/api/overview/azure/identity-readme?view=azure-dotnet)
+> - [Java](https://docs.microsoft.com/java/api/overview/azure/identity-readme?view=azure-java-stable)
+> - [JavaScript](https://docs.microsoft.com/javascript/api/overview/azure/identity-readme?view=azure-node-latest)
+> - [Python](https://docs.microsoft.com/python/api/overview/azure/identity-readme?view=azure-python)
++ 1. In the portal, navigate to your Linux VM and in the **Overview**, click **Connect**.  2. **Connect** to the VM with the SSH client of your choice.  3. In the terminal window, using CURL, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Key Vault.  
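As a hedged illustration of step 3, the following curl commands show the general shape of the request to the local managed identity endpoint and the follow-up call to Key Vault; the vault name, secret name, and returned token value are placeholders.

```bash
# Request an access token for Key Vault from the local managed identity endpoint
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true

# Use the returned access_token to read a secret (vault and secret names are placeholders)
curl 'https://<your-key-vault-name>.vault.azure.net/secrets/<your-secret-name>?api-version=2016-10-01' -H "Authorization: Bearer <access_token>"
```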
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 03/29/2021 Last updated : 04/06/2021
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Domain Name Administrator](#domain-name-administrator) | Can manage domain names in cloud and on-premises. | 8329153b-31d0-4727-b945-745eb3bc5f31 | > | [Dynamics 365 Administrator](#dynamics-365-administrator) | Can manage all aspects of the Dynamics 365 product. | 44367163-eba1-44c3-98af-f5787879f96a | > | [Exchange Administrator](#exchange-administrator) | Can manage all aspects of the Exchange product. | 29232cdf-9323-42fd-ade2-1d097af3e4de |
+> | [Exchange Recipient Administrator](#exchange-recipient-administrator) | Can create or update Exchange Online recipients within the Exchange Online organization. | 31392ffb-586c-42d1-9346-e59415a2cc4e |
> | [External ID User Flow Administrator](#external-id-user-flow-administrator) | Can create and manage all aspects of user flows. | 6e591065-9bad-43ed-90f3-e9424366d2f0 | > | [External ID User Flow Attribute Administrator](#external-id-user-flow-attribute-administrator) | Can create and manage the attribute schema available to all user flows. | 0f971eea-41eb-4569-a71e-57bb8a3eff1e | > | [External Identity Provider Administrator](#external-identity-provider-administrator) | Can configure identity providers for use in direct federation. | be2f45a1-457d-42af-a067-6ec1fa63bc45 |
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b | > | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
-> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
+> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
> | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c | > | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the M365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d | > | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/applications/delete | Delete all types of applications | > | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties | > | microsoft.directory/applications/applicationProxy/update | Update all application proxy properties |
-> | microsoft.directory/applications/applicationProxyAuthentication/update | Update application proxy authentication properties |
-> | microsoft.directory/applications/applicationProxySslCertificate/update | Update application proxy custom domains |
-> | microsoft.directory/applications/applicationProxyUrlSettings/update | Update application proxy internal and external URLs |
+> | microsoft.directory/applications/applicationProxyAuthentication/update | Update authentication on all types of applications |
+> | microsoft.directory/applications/applicationProxySslCertificate/update | Update SSL certificate settings for application proxy |
+> | microsoft.directory/applications/applicationProxyUrlSettings/update | Update URL settings for application proxy |
> | microsoft.directory/applications/appRoles/update | Update the appRoles property on all types of applications | > | microsoft.directory/applications/audience/update | Update the audience property for applications | > | microsoft.directory/applications/authentication/update | Update authentication on all types of applications |
Users in this role can create attack payloads but not actually launch or schedul
> | Actions | Description | > | | | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator |
-> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training |
+> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation responses and associated training |
## Attack Simulation Administrator
Users in this role can create and manage all aspects of attack simulation creati
> | Actions | Description | > | | | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator |
-> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training |
+> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation responses and associated training |
> | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/allTasks | Create and manage attack simulation templates in Attack Simulator | ## Authentication Administrator
The [Authentication administrator](#authentication-administrator) and [Privilege
| Authentication policy administrator | No | No | Yes | Yes | Yes | > [!IMPORTANT]
-> This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
+> This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
> [!div class="mx-tableFixed"] > | Actions | Description |
Users with this role have the ability to manage Azure Active Directory Condition
> | | | > | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | > | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/update | Update policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies | > | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies | > | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
Users in this role can read and update basic information of users, groups, and s
> | microsoft.directory/groups/dynamicMembershipRule/update | Update dynamic membership rule of groups, excluding role-assignable groups | > | microsoft.directory/groups/groupType/update | Update the groupType property for a group | > | microsoft.directory/groups/members/update | Update members of groups, excluding role-assignable groups |
-> | microsoft.directory/groups/onPremWriteBack/update | Update Azure AD groups to be written back to on-premises |
+> | microsoft.directory/groups/onPremWriteBack/update | Update Azure Active Directory groups to be written back to on-premises with Azure AD Connect |
> | microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of groups |
Users with this role have global permissions within Microsoft Exchange Online, w
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Exchange Recipient Administrator
+
+Users with this role have read access to recipients and write access to the attributes of those recipients in Exchange Online. For more information, see [Exchange Recipients](/exchange/recipients/recipients).
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.office365.exchange/allRecipients/allProperties/allTasks | Create and delete all recipients, and read and update all properties of recipients in Exchange Online |
+> | microsoft.office365.exchange/messageTracking/allProperties/allTasks | Manage all tasks in message tracking in Exchange Online |
+> | microsoft.office365.exchange/migration/allProperties/allTasks | Manage all tasks related to migration of recipients in Exchange Online |
+ ## External ID User Flow Administrator Users with this role can create and manage user flows (also called "built-in" policies) in the Azure portal. These users can customize HTML/CSS/JavaScript content, change MFA requirements, select claims in the token, manage API connectors, and configure session settings for all user flows in the Azure AD organization. On the other hand, this role does not include the ability to review user data or make changes to the attributes that are included in the organization schema. Changes to Identity Experience Framework policies (also known as custom policies) are also outside the scope of this role.
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/serviceAction/activateService | Can perform the "activate service" action for a service | > | microsoft.directory/serviceAction/disableDirectoryFeature | Can perform the "disable directory feature" service action | > | microsoft.directory/serviceAction/enableDirectoryFeature | Can perform the "enable directory feature" service action |
-> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the Getavailableextentionproperties service action |
+> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the getAvailableExtentionProperties service action |
> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-company-admin | Grant consent for any permission to any application | > | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/policies/owners/read | Read owners of policies | > | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/servicePrincipals/authentication/read | Read authentication properties on service principals | > | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
Users in this role can create/manage groups and its settings like naming and exp
> | microsoft.directory/groups/dynamicMembershipRule/update | Update dynamic membership rule of groups, excluding role-assignable groups | > | microsoft.directory/groups/groupType/update | Update the groupType property for a group | > | microsoft.directory/groups/members/update | Update members of groups, excluding role-assignable groups |
-> | microsoft.directory/groups/onPremWriteBack/update | Update Azure AD groups to be written back to on-premises |
+> | microsoft.directory/groups/onPremWriteBack/update | Update Azure Active Directory groups to be written back to on-premises with Azure AD Connect |
> | microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of groups |
Users in this role have full access to all knowledge, learning and intelligent f
> | microsoft.directory/groups.security/owners/update | Update owners of Security groups with the exclusion of role-assignable groups | > | microsoft.office365.knowledge/contentUnderstanding/allProperties/allTasks | Read and update all properties of content understanding in Microsoft 365 admin center | > | microsoft.office365.knowledge/knowledgeNetwork/allProperties/allTasks | Read and update all properties of knowledge network in Microsoft 365 admin center |
-> | microsoft.office365.protectionCenter/sensitivityLabels/allProperties/read | Read sensitivity labels in the Security and Compliance centers |
+> | microsoft.office365.protectionCenter/sensitivityLabels/allProperties/read | Read all properties of sensitivity labels in the Security and Compliance centers |
> | microsoft.office365.sharePoint/allEntities/allTasks | Create and delete all resources, and read and update standard properties in SharePoint | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/contacts/create | Create contacts | > | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts |
-> | microsoft.directory/domains/basic/allTasks | Create and delete domains, and read and update standard properties |
+> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties |
> | microsoft.directory/groups/create | Create groups, excluding role-assignable groups | > | microsoft.directory/groups/delete | Delete groups, excluding role-assignable group | > | microsoft.directory/groups/restore | Restore deleted groups |
Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configur
> | microsoft.directory/policies/tenantDefault/update | Update default organization policies | > | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | > | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/update | Update policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/servicePrincipals/policies/update | Update policies of service principals |
Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configur
> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/allEntities/basic/update | Update basic properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator |
-> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training |
+> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation responses and associated training |
> | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/allTasks | Create and manage attack simulation templates in Attack Simulator | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
Windows Defender ATP and EDR | View and investigate alerts. When you turn on rol
> | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/policies/owners/read | Read owners of policies | > | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read policies.conditionalAccess property |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read policies.conditionalAccess property |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/read | Read all properties of attack payloads in Attack Simulator |
-> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training |
+> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation responses and associated training |
> | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/read | Read all properties of attack simulation templates in Attack Simulator | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users with this role can create users, and manage all aspects of users with some
> | microsoft.directory/groups/dynamicMembershipRule/update | Update dynamic membership rule of groups, excluding role-assignable groups | > | microsoft.directory/groups/groupType/update | Update the groupType property for a group | > | microsoft.directory/groups/members/update | Update members of groups, excluding role-assignable groups |
-> | microsoft.directory/groups/onPremWriteBack/update | Update Azure AD groups to be written back to on-premises |
+> | microsoft.directory/groups/onPremWriteBack/update | Update Azure Active Directory groups to be written back to on-premises with Azure AD Connect |
> | microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of groups |
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
Title: Migrate to Azure Kubernetes Service (AKS)
description: Migrate to Azure Kubernetes Service (AKS). Previously updated : 02/25/2020 Last updated : 03/25/2021 # Migrate to Azure Kubernetes Service (AKS)
-This article helps you plan and execute a successful migration to Azure Kubernetes Service (AKS). To help you make key decisions, this guide provides details for the current recommended configuration for AKS. This article doesn't cover every scenario, and where appropriate, the article contains links to more detailed information for planning a successful migration.
+To help you plan and execute a successful migration to Azure Kubernetes Service (AKS), this guide provides details for the current recommended AKS configuration. While this article doesn't cover every scenario, it contains links to more detailed information for planning a successful migration.
-This document can be used to help support the following scenarios:
+This document helps support the following scenarios:
-* Containerizing certain applications and migrating them to AKS using [Azure Migrate](../migrate/migrate-services-overview.md)
-* Migrating an AKS Cluster backed by [Availability Sets](../virtual-machines/windows/tutorial-availability-sets.md) to [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md)
-* Migrating an AKS cluster to use a [Standard SKU load balancer](./load-balancer-standard.md)
-* Migrating from [Azure Container Service (ACS) - retiring January 31, 2020](https://azure.microsoft.com/updates/azure-container-service-will-retire-on-january-31-2020/) to AKS
-* Migrating from [AKS engine](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) to AKS
-* Migrating from non-Azure based Kubernetes clusters to AKS
-* Moving existing resources to a different region
+* Containerizing certain applications and migrating them to AKS using [Azure Migrate](../migrate/migrate-services-overview.md).
+* Migrating an AKS Cluster backed by [Availability Sets](../virtual-machines/windows/tutorial-availability-sets.md) to [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md).
+* Migrating an AKS cluster to use a [Standard SKU load balancer](./load-balancer-standard.md).
+* Migrating from [Azure Container Service (ACS) - retiring January 31, 2020](https://azure.microsoft.com/updates/azure-container-service-will-retire-on-january-31-2020/) to AKS.
+* Migrating from [AKS engine](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) to AKS.
+* Migrating from non-Azure based Kubernetes clusters to AKS.
+* Moving existing resources to a different region.
-When migrating, ensure your target Kubernetes version is within the supported window for AKS. If using an older version, it may not be within the supported range and require upgrading versions to be supported by AKS. See [AKS supported Kubernetes versions](./supported-kubernetes-versions.md) for more information.
+When migrating, ensure your target Kubernetes version is within the supported window for AKS. Older versions may not be within the supported range and will require a version upgrade to be supported by AKS. For more information, see [AKS supported Kubernetes versions](./supported-kubernetes-versions.md).
If you're migrating to a newer version of Kubernetes, review [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions).
In this article we will summarize migration details for:
## Use Azure Migrate to migrate your applications to AKS
-Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following:
+Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following tasks:
* [Containerize ASP.NET applications and migrate to AKS](../migrate/tutorial-containerize-aspnet-kubernetes.md) * [Containerize Java web applications and migrate to AKS](../migrate/tutorial-containerize-java-kubernetes.md) ## AKS with Standard Load Balancer and Virtual Machine Scale Sets
-AKS is a managed service offering unique capabilities with lower management overhead. As a result of being a managed service, you must select from a set of [regions](./quotas-skus-regions.md) which AKS supports. The transition from your existing cluster to AKS may require modifying your existing applications so they remain healthy on the AKS managed control plane.
+AKS is a managed service offering unique capabilities with lower management overhead. Since AKS is a managed service, you must select from a set of [regions](./quotas-skus-regions.md) which AKS supports. You may need to modify your existing applications to keep them healthy on the AKS-managed control plane during the transition from your existing cluster to AKS.
-We recommend using AKS clusters backed by [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) and the [Azure Standard Load Balancer](./load-balancer-standard.md) to ensure you get features such as [multiple node pools](./use-multiple-node-pools.md), [Availability Zones](../availability-zones/az-overview.md), [Authorized IP ranges](./api-server-authorized-ip-ranges.md), [Cluster Autoscaler](./cluster-autoscaler.md), [Azure Policy for AKS](../governance/policy/concepts/policy-for-kubernetes.md), and other new features as they are released.
+We recommend using AKS clusters backed by [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) and the [Azure Standard Load Balancer](./load-balancer-standard.md) to ensure you get features such as:
+* [Multiple node pools](./use-multiple-node-pools.md),
+* [Availability Zones](../availability-zones/az-overview.md),
+* [Authorized IP ranges](./api-server-authorized-ip-ranges.md),
+* [Cluster Autoscaler](./cluster-autoscaler.md),
+* [Azure Policy for AKS](../governance/policy/concepts/policy-for-kubernetes.md), and
+* Other new features as they are released.
AKS clusters backed by [Virtual Machine Availability Sets](../virtual-machines/availability.md#availability-sets) lack support for many of these features.
-The following example creates an AKS cluster with single node pool backed by a virtual machine scale set. It uses a standard load balancer. It also enables the cluster autoscaler on the node pool for the cluster and sets a minimum of *1* and maximum of *3* nodes:
+The following example creates an AKS cluster with single node pool backed by a virtual machine (VM) scale set. The cluster:
+* Uses a standard load balancer.
+* Enables the cluster autoscaler on the node pool for the cluster.
+* Sets a minimum of *1* and maximum of *3* nodes.
```azurecli-interactive # First create a resource group
az aks create \
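The `az aks create` command above is truncated in this summary. As a rough sketch only (resource group, cluster name, and region are placeholders), the full example typically looks something like the following:

```azurecli-interactive
# First create a resource group (names and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create an AKS cluster backed by a virtual machine scale set, using a standard
# load balancer and the cluster autoscaler with a minimum of 1 and maximum of 3 nodes
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --vm-set-type VirtualMachineScaleSets \
  --load-balancer-sku standard \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3 \
  --generate-ssh-keys
```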
## Existing attached Azure Services
-When migrating clusters you may have attached external Azure services. These do not require resource recreation, but they will require updating connections from previous to new clusters to maintain functionality.
+When migrating clusters, you may have attached external Azure services. While the following services don't require resource recreation, they will require updating connections from previous to new clusters to maintain functionality.
* Azure Container Registry * Log Analytics
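For example, reattaching an existing Azure Container Registry to the new cluster can be done with a single Azure CLI command; the cluster and registry names below are placeholders.

```azurecli-interactive
# Grant the new cluster pull access to the existing container registry
az aks update --resource-group myResourceGroup --name myNewAKSCluster --attach-acr myContainerRegistry
```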
When migrating clusters you may have attached external Azure services. These do
## Ensure valid quotas
-Because additional virtual machines will be deployed into your subscription during migration, you should verify that your quotas and limits are sufficient for these resources. You may need to request an increase in [vCPU quota](../azure-portal/supportability/per-vm-quota-requests.md).
+Since other VMs will be deployed into your subscription during migration, you should verify that your quotas and limits are sufficient for these resources. If necessary, request an increase in [vCPU quota](../azure-portal/supportability/per-vm-quota-requests.md).
-You may need to request an increase for [Network quotas](../azure-portal/supportability/networking-quota-requests.md) to ensure you don't exhaust IPs. See [networking and IP ranges for AKS](./configure-kubenet.md) for additional information.
+You may need to request an increase for [Network quotas](../azure-portal/supportability/networking-quota-requests.md) to ensure you don't exhaust IPs. For more information, see [networking and IP ranges for AKS](./configure-kubenet.md).
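You can also check current usage against quota from the Azure CLI before migrating; a minimal sketch, with the target region as a placeholder:

```azurecli-interactive
# Review current vCPU usage against quota in the target region
az vm list-usage --location eastus --output table

# Review networking resource usage against quota in the target region
az network list-usages --location eastus --output table
```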
For more information, see [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md). To check your current quotas, in the Azure portal, go to the [subscriptions blade](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade), select your subscription, and then select **Usage + quotas**. ## High Availability and Business Continuity
-If your application cannot handle downtime, you will need to follow best practices for high availability migration scenarios. Best practices for complex business continuity planning, disaster recovery, and maximizing uptime are beyond the scope of this document. Read more about [Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)](./operator-best-practices-multi-region.md) to learn more.
+If your application can't handle downtime, you will need to follow best practices for high availability migration scenarios. Read more about [Best practices for complex business continuity planning, disaster recovery, and maximizing uptime in Azure Kubernetes Service (AKS)](./operator-best-practices-multi-region.md).
-For complex applications, you'll typically migrate over time rather than all at once. That means that the old and new environments might need to communicate over the network. Applications that previously used `ClusterIP` services to communicate might need to be exposed as type `LoadBalancer` and be secured appropriately.
+For complex applications, you'll typically migrate over time rather than all at once, meaning the old and new environments might need to communicate over the network. Applications previously using `ClusterIP` services to communicate might need to be exposed as type `LoadBalancer` and be secured appropriately.
-To complete the migration, you'll want to point clients to the new services that are running on AKS. We recommend that you redirect traffic by updating DNS to point to the Load Balancer that sits in front of your AKS cluster.
+To complete the migration, you'll want to point clients to the new services that are running on AKS. We recommend that you redirect traffic by updating DNS to point to the Load Balancer sitting in front of your AKS cluster.
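As a sketch of exposing a previously internal service during a phased migration (the service name is hypothetical, and the exposed service should still be secured with appropriate network rules):

```bash
# Change an existing ClusterIP service to type LoadBalancer so the old and new
# environments can reach it during the migration (service name is a placeholder)
kubectl patch service my-app-svc -p '{"spec": {"type": "LoadBalancer"}}'
```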
-[Azure Traffic Manager](../traffic-manager/index.yml) can direct customers to the desired Kubernetes cluster and application instance. Traffic Manager is a DNS-based traffic load balancer that can distribute network traffic across regions. For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster. In a multicluster deployment, customers should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
+[Azure Traffic Manager](../traffic-manager/index.yml) can direct customers to the desired Kubernetes cluster and application instance. Traffic Manager is a DNS-based traffic load balancer that can distribute network traffic across regions. For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster.
+
+In a multi-cluster deployment, customers should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
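A minimal sketch of wiring Traffic Manager endpoints to a cluster's service load balancer IP, assuming placeholder names, DNS prefix, and IP address:

```azurecli-interactive
# Create a performance-based Traffic Manager profile (DNS prefix must be globally unique)
az network traffic-manager profile create \
  --resource-group myResourceGroup \
  --name myAksTrafficManager \
  --routing-method Performance \
  --unique-dns-name my-aks-clusters

# Add each cluster's service load balancer IP as an external endpoint
az network traffic-manager endpoint create \
  --resource-group myResourceGroup \
  --profile-name myAksTrafficManager \
  --name eastus-cluster \
  --type externalEndpoints \
  --target 20.42.0.10 \
  --endpoint-location eastus
```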
![AKS with Traffic Manager](media/operator-best-practices-bc-dr/aks-azure-traffic-manager.png)
-[Azure Front Door Service](../frontdoor/front-door-overview.md) is another option for routing traffic for AKS clusters. Azure Front Door Service enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability.
+[Azure Front Door Service](../frontdoor/front-door-overview.md) is another option for routing traffic for AKS clusters. With Azure Front Door Service, you can define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability.
### Considerations for stateless applications
-Stateless application migration is the most straightforward case. Apply your resource definitions (YAML or Helm) to the new cluster, make sure everything works as expected, and redirect traffic to activate your new cluster.
+Stateless application migration is the most straightforward case:
+1. Apply your resource definitions (YAML or Helm) to the new cluster.
+1. Ensure everything works as expected.
+1. Redirect traffic to activate your new cluster.
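A hedged sketch of the first two steps with the Azure CLI and kubectl, assuming placeholder names and a `deployments.yaml` file exported from the old cluster:

```bash
# Point kubectl at the new AKS cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Apply the exported resource definitions and confirm the workloads come up
kubectl apply -f deployments.yaml
kubectl get pods --all-namespaces
```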
### Considerations for stateful applications Carefully plan your migration of stateful applications to avoid data loss or unexpected downtime.
-If you use Azure Files, you can mount the file share as a volume into the new cluster:
-* [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-an-persistent-volume)
-
-If you use Azure Managed Disks, you can only mount the disk if unattached to any VM:
-* [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-volume)
-
-If neither of those approaches work, you can use a backup and restore options:
-* [Velero on Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md)
+* If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-an-persistent-volume).
+* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-volume).
+* If neither of those approaches work, you can use a backup and restore options. See [Velero on Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md).
#### Azure Files
-Unlike disks, Azure Files can be mounted to multiple hosts concurrently. In your AKS cluster, Azure and Kubernetes don't prevent you from creating a pod that your ACS cluster still uses. To prevent data loss and unexpected behavior, ensure that the clusters don't write to the same files at the same time.
+Unlike disks, Azure Files can be mounted to multiple hosts concurrently. In your AKS cluster, Azure and Kubernetes don't prevent you from creating a pod that your AKS cluster still uses. To prevent data loss and unexpected behavior, ensure that the clusters don't write to the same files simultaneously.
-If your application can host multiple replicas that point to the same file share, follow the stateless migration steps and deploy your YAML definitions to your new cluster. If not, one possible migration approach involves the following steps:
+If your application can host multiple replicas that point to the same file share, follow the stateless migration steps and deploy your YAML definitions to your new cluster.
-* Validate your application is working correctly.
-* Point your live traffic to your new AKS cluster.
-* Disconnect the old cluster.
+If not, one possible migration approach involves the following steps:
+
+1. Validate your application is working correctly.
+1. Point your live traffic to your new AKS cluster.
+1. Disconnect the old cluster.
If you want to start with an empty share and make a copy of the source data, you can use the [`az storage file copy`](/cli/azure/storage/file/copy) commands to migrate your data.
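A rough sketch of copying share contents between storage accounts, assuming placeholder account and share names (authentication parameters such as account keys are omitted for brevity):

```azurecli-interactive
# Copy every file from the source share to the destination share
az storage file copy start-batch \
  --source-account-name mysourcestorage \
  --source-share myshare \
  --account-name mydeststorage \
  --destination-share myshare
```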
If you want to start with an empty share and make a copy of the source data, you
If you're migrating existing persistent volumes to AKS, you'll generally follow these steps:
-* Quiesce writes to the application. (This step is optional and requires downtime.)
-* Take snapshots of the disks.
-* Create new managed disks from the snapshots.
-* Create persistent volumes in AKS.
-* Update pod specifications to [use existing volumes](./azure-disk-volume.md) rather than PersistentVolumeClaims (static provisioning).
-* Deploy your application to AKS.
-* Validate your application is working correctly.
-* Point your live traffic to your new AKS cluster.
+1. Quiesce writes to the application.
+ * This step is optional and requires downtime.
+1. Take snapshots of the disks.
+1. Create new managed disks from the snapshots.
+1. Create persistent volumes in AKS.
+1. Update pod specifications to [use existing volumes](./azure-disk-volume.md) rather than PersistentVolumeClaims (static provisioning).
+1. Deploy your application to AKS.
+1. Validate your application is working correctly.
+1. Point your live traffic to your new AKS cluster.
> [!IMPORTANT] > If you choose not to quiesce writes, you'll need to replicate data to the new deployment. Otherwise you'll miss the data that was written after you took the disk snapshots.
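For steps 2 and 3 above, a minimal Azure CLI sketch, assuming placeholder resource group, disk, and snapshot names:

```azurecli-interactive
# Snapshot the existing managed disk
az snapshot create \
  --resource-group myResourceGroup \
  --name mySnapshot \
  --source myExistingDataDisk

# Create a new managed disk from the snapshot for the AKS persistent volume
az disk create \
  --resource-group myResourceGroup \
  --name myMigratedDisk \
  --source mySnapshot
```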
kubectl get deployment -o=yaml --export > deployments.yaml
### Moving existing resources to another region
-You may want to move your AKS cluster to a [different region supported by AKS][region-availability]. We recommend that you create a new cluster in the other region then deploy your resources and applications to your new cluster. In addition, if you have any services such as [Azure Dev Spaces][azure-dev-spaces] running on your AKS cluster, you will also need to install and configure those services on your cluster in the new region.
+You may want to move your AKS cluster to a [different region supported by AKS][region-availability]. We recommend that you create a new cluster in the other region, then deploy your resources and applications to your new cluster.
+
+In addition, if you have any services such as [Azure Dev Spaces][azure-dev-spaces] running on your AKS cluster, you will need to install and configure those services on your cluster in the new region.
-In this article we summarized migration details for:
+In this article, we summarized migration details for:
> [!div class="checklist"] > * AKS with Standard Load Balancer and Virtual Machine Scale Sets
aks Concepts Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-diagnostics.md
description: Learn about self-diagnosing clusters in Azure Kubernetes Service.
Previously updated : 11/04/2019 Last updated : 03/29/2021 # Azure Kubernetes Service Diagnostics (preview) overview
-Troubleshooting Azure Kubernetes Service (AKS) cluster issues is an important part of maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics is an intelligent, self-diagnostic experience that helps you identify and resolve problems in your cluster. AKS Diagnostics is cloud-native, and you can use it with no extra configuration or billing cost.
+Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics is an intelligent, self-diagnostic experience that:
+* Helps you identify and resolve problems in your cluster.
+* Is cloud-native.
+* Requires no extra configuration or billing cost.
-This feature is now in public preview.
+This feature is now in public preview.
## Open AKS Diagnostics

To access AKS Diagnostics:

-- Navigate to your Kubernetes cluster in the [Azure portal](https://portal.azure.com).
-- Click on **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
-- Choose a category that best describes the issue of your cluster by using the keywords in the homepage tile, or type a keyword that best describes your issue in the search bar, for example _Cluster Node Issues_.
+1. Navigate to your Kubernetes cluster in the [Azure portal](https://portal.azure.com).
+1. Click on **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
+1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, by:
+ * Using the keywords in the homepage tile.
+ * Typing a keyword that best describes your issue in the search bar.
![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png) ## View a diagnostic report
-After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic report intelligently calls out if there is any issue in your cluster with status icons. You can drill down on each topic by clicking on **More Info** to see detailed description of the issue, recommended actions, links to helpful docs, related-metrics, and logging data. Diagnostic reports are intelligently generated based on the current state of your cluster after running a variety of checks. Diagnostic reports can be a useful tool for pinpointing the problem of your cluster and finding the next steps to resolve the issue.
+After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
+* Issues
+* Recommended actions
+* Links to helpful docs
+* Related metrics
+* Logging data
+
+Diagnostic reports generate based on the current state of your cluster after running various checks. They can be useful for pinpointing the problem of your cluster and understanding next steps to resolve the issue.
![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
The following diagnostic checks are available in **Cluster Insights**.
### Cluster Node Issues
-Cluster Node Issues checks for node-related issues that may cause your cluster to behave unexpectedly.
+Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly.
- Node readiness issues - Node failures
Cluster Node Issues checks for node-related issues that may cause your cluster t
- Node authentication failure - Node kube-proxy stale
-### Create, read, update & delete operations
+### Create, read, update & delete (CRUD) operations
-CRUD Operations checks for any CRUD operations that may cause issues in your cluster.
+CRUD Operations checks for any CRUD operations that cause issues in your cluster.
- In-use subnet delete operation error - Network security group delete operation error
CRUD Operations checks for any CRUD operations that may cause issues in your clu
### Identity and security management
-Identity and Security Management detects authentication and authorization errors that may prevent communication to your cluster.
+Identity and Security Management detects authentication and authorization errors that prevent communication to your cluster.
- Node authorization failures - 401 errors
Identity and Security Management detects authentication and authorization errors
## Next steps
-Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope).
+* Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope).
-Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
+* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
-Post your questions or feedback at [UserVoice](https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks) by adding "[Diag]" in the title.
+* Post your questions or feedback at [UserVoice](https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks) by adding "[Diag]" in the title.
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Previously updated : 08/26/2020 Last updated : 03/29/2021 # Sustainable software engineering principles in Azure Kubernetes Service (AKS) The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce your carbon footprint of every aspect of your application. [The Principles of Sustainable Software Engineering][principles-sse] has an overview of the principles of sustainable software engineering.
-An important idea to understand about sustainable software engineering is that it's a shift in priorities and focus. In many cases, software is designed and ran in a way that focuses on fast performance and low latency. Sustainable software engineering focuses on reducing as much carbon emissions as possible. In some cases, applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network travel. In other cases, reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads. Before considering applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
+Sustainable software engineering is a shift in priorities and focus. In many cases, the way most software is designed and run highlights fast performance and low latency. Meanwhile, sustainable software engineering focuses on reducing as much carbon emission as possible. Consider:
+
+* Applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network travel.
+* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
+
+Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
## Measure and optimize
-To lower the carbon footprint of your AKS clusters, you need understand how your cluster's resources are being used. [Azure Monitor][azure-monitor] provides details on your cluster's resource usage, such as memory and CPU usage. This data can help inform your decisions to reduce the carbon footprint of your cluster and observe the effect of your changes. You can also install the [Microsoft Sustainability Calculator][sustainability-calculator] to see the carbon footprint of all your Azure resources.
+To lower the carbon footprint of your AKS clusters, you need understand how your cluster's resources are being used. [Azure Monitor][azure-monitor] provides details on your cluster's resource usage, such as memory and CPU usage. This data informs your decision to reduce the carbon footprint of your cluster and observes the effect of your changes.
+
+You can also install the [Microsoft Sustainability Calculator][sustainability-calculator] to see the carbon footprint of all your Azure resources.
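If container insights isn't already enabled on a cluster, a hedged sketch of turning it on so that Azure Monitor collects this usage data (cluster and resource group names are placeholders):

```azurecli-interactive
# Enable the Azure Monitor for containers add-on on an existing cluster
az aks enable-addons --addons monitoring --resource-group myResourceGroup --name myAKSCluster
```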
## Increase resource utilization
-One approach to lowering your carbon footprint is to reduce the amount of idle time for your compute resources. Reducing your idle time involves increasing the utilization of your compute resources. For example, if you had four nodes in your cluster, each running at 50% capacity, all four of your nodes have 50% unused capacity remaining idle. If you reduced your cluster to three nodes, then the same workload would cause your three nodes to run at 67% capacity, reducing your unused capacity to 33% on each node and increasing your utilization.
+One approach to lowering your carbon footprint is to reduce your idle time. Reducing your idle time involves increasing the utilization of your compute resources. For example:
+1. You had four nodes in your cluster, each running at 50% capacity. So, all four of your nodes have 50% unused capacity remaining idle.
+1. You reduced your cluster to three nodes, each running at 67% capacity with the same workload. You would have successfully decreased your unused capacity to 33% on each node and increased your utilization.
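As a sketch of the example above, resizing the node pool from four nodes to three with the Azure CLI (cluster and resource group names are placeholders):

```azurecli-interactive
# Scale the cluster's node pool down to three nodes to raise per-node utilization
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
```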
> [!IMPORTANT]
-> When considering making changes to the resources in your cluster, verify your [system pools][system-pools] have enough resources to maintain the stability of the core system components of your cluster. Never reduce your cluster's resources to the point where your cluster may become unstable.
+> When considering changing the resources in your cluster, verify your [system pools][system-pools] have enough resources to maintain the stability of your cluster's core system components. **Never** reduce your cluster's resources to the point where your cluster may become unstable.
+
+After reviewing your cluster's utilization, consider using the features offered by [multiple node pools][multiple-node-pools]:
+
+* Node sizing
+
+ Use [node sizing][node-sizing] to define node pools with specific CPU and memory profiles, allowing you to tailor your nodes to your workload needs. By sizing your nodes to your workload needs, you can run a few nodes at higher utilization.
+
+* Cluster scaling
+
+ Configure how your cluster [scales][scale]. Use the [horizontal pod autoscaler][scale-horizontal] and the [cluster autoscaler][scale-auto] to scale your cluster automatically based on your configuration. Control how your cluster scales to keep all your nodes running at a high utilization while staying in sync with changes to your cluster's workload.
+
+* Spot pools
+
+ For cases where a workload is tolerant to sudden interruptions or terminations, you can use [spot pools][spot-pools]. Spot pools take advantage of idle capacity within Azure. For example, spot pools may work well for batch jobs or development environments.
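For the spot pools option above, a minimal sketch of adding a spot node pool to an existing cluster; the names and autoscaler bounds are placeholders.

```azurecli-interactive
# Add a spot node pool that uses idle Azure capacity for interruption-tolerant workloads
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name spotpool \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3
```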
-After reviewing your cluster's utilization, consider using the features offered by [multiple node pools][multiple-node-pools]. You can use [node sizing][node-sizing] to define node pools with specific CPU and memory profiles, allowing you to tailor your nodes to your workload needs. Sizing your nodes to your workload needs can help you run few nodes at higher utilization. You can also configure how your cluster [scales][scale] and use the [horizontal pod autoscaler][scale-horizontal] and the [cluster autoscaler][scale-auto] to scale your cluster automatically based on your configuration. Controlling how your cluster scales can help you keep all your nodes running at a high utilization while keeping up with changes to your cluster's workload. For cases where a workload is tolerant to sudden interruptions or terminations, you can use [spot pools][spot-pools] to take advantage of idle capacity within Azure. For example, spot pools may work well for batch jobs or development environments.
+> [!NOTE]
+>Increasing utilization can also reduce excess nodes, which reduces the energy consumed by [resource reservations on each node][resource-reservations].
-Increasing utilization can also reduce excess nodes, which reduces the energy consumed by [resource reservations on each node][resource-reservations].
+Finally, review the CPU and memory *requests* and *limits* in the Kubernetes manifests of your applications.
+* As you lower memory and CPU values, more memory and CPU are available to the cluster to run other workloads.
+* As you run more workloads with lower CPU and memory, your cluster becomes more densely allocated, which increases your utilization.
-Also review the CPU and memory *requests* and *limits* in the Kubernetes manifests of your applications. As you lower those values for memory and CPU, more memory and CPU are available to the cluster to run other workloads. As you run more workloads with lower CPU and memory, your cluster becomes more densely allocated which increases your utilization. When reducing the CPU and memory for your applications, the behavior of your applications may become degraded or unstable if you set these values too low. Before changing the CPU and memory *requests* and *limits*, consider running some benchmarking tests to understand if these values are set appropriately. Moreover, never reduce these values to the point when your application becomes unstable.
+When reducing the CPU and memory for your applications, your applications' behavior may become degraded or unstable if you set CPU and memory values too low. Before changing the CPU and memory *requests* and *limits*, run some benchmarking tests to verify if the values are set appropriately. Never reduce these values to the point of application instability.
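For reference, here is a minimal sketch of how *requests* and *limits* appear in a pod spec; the pod name, image, and values are placeholders to benchmark against, not recommendations.

```yaml
# Illustrative pod spec fragment: requests are what the scheduler reserves for
# the container, limits cap what it may consume. Values are placeholders;
# benchmark your application before committing to numbers.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: mycontainer.azurecr.io/myapp:latest  # placeholder image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```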
## Reduce network travel
-Reducing the distance requests and responses to and from your cluster have to travel usually reduces electricity consumption by networking devices and reduces carbon emissions. After reviewing your network traffic, consider creating clusters [in regions][regions] closer to the source of your network traffic. You can also use [Azure Traffic Manager][azure-traffic-manager] to help with routing traffic to the closest cluster and [proximity placement groups][proiximity-placement-groups] to help reduce the distance between Azure resources.
+By reducing the distance that requests and responses travel to and from your cluster, you can reduce the electricity consumed by networking devices and the resulting carbon emissions. After reviewing your network traffic, consider creating clusters [in regions][regions] closer to the source of that traffic. You can use [Azure Traffic Manager][azure-traffic-manager] to route traffic to the closest cluster, and [proximity placement groups][proiximity-placement-groups] to reduce the distance between Azure resources.
> [!IMPORTANT]
-> When considering making changes to your cluster's networking, never reduce network travel at the cost of meeting workload requirements. For example, using [availability zones][availability-zones] causes more network travel on your cluster but using that feature may be necessary to handle workload requirements.
+> When considering making changes to your cluster's networking, never reduce network travel at the cost of meeting workload requirements. For example, while using [availability zones][availability-zones] causes more network travel on your cluster, availability zones may be necessary to handle workload requirements.
## Demand shaping
-Where possible, consider shifting demand for your cluster's resources to times or regions where you can use excess capacity. For example, consider changing the time or region for a batch job to run or use [spot pools][spot-pools]. Also consider refactoring your application to use a queue to defer running workloads that don't need immediate processing.
+Where possible, consider shifting demand for your cluster's resources to times or regions where you can use excess capacity. For example, consider:
+* Changing the time or region for a batch job to run (see the CronJob sketch after this list).
+* Using [spot pools][spot-pools].
+* Refactoring your application to use a queue to defer running workloads that don't need immediate processing.
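As a hedged sketch of the batch-job bullet above, a Kubernetes CronJob can pin the work to a low-demand window; the name, schedule, and image below are placeholders.

```yaml
# Illustrative CronJob that runs a batch workload at 02:00 UTC, a typically
# low-demand window. Name, schedule, and image are placeholders.
# On clusters older than Kubernetes 1.21, use apiVersion: batch/v1beta1.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-batch
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: batch
            image: mycontainer.azurecr.io/batch-job:latest  # placeholder image
```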
## Next steps
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quotas-skus-regions.md
description: Learn about the default quotas, restricted node VM SKU sizes, and region availability of the Azure Kubernetes Service (AKS). Previously updated : 04/09/2019 Last updated : 03/25/2021 # Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)
-All Azure services set default limits and quotas for resources and features. Certain virtual machine (VM) SKUs are also restricted for use.
+All Azure services set default limits and quotas for resources and features, including usage restrictions for certain virtual machine (VM) SKUs.
This article details the default resource limits for Azure Kubernetes Service (AKS) resources and the availability of AKS in Azure regions.
This article details the default resource limits for Azure Kubernetes Service (A
All other network, compute, and storage limitations apply to the provisioned infrastructure. For the relevant limits, see [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md). > [!IMPORTANT]
-> When you upgrade an AKS cluster, additional resources are temporarily consumed. These resources include available IP addresses in a virtual network subnet, or virtual machine vCPU quota. If you use Windows Server containers, the only endorsed approach to apply the latest updates to the nodes is to perform an upgrade operation. A failed cluster upgrade process may indicate that you don't have the available IP address space or vCPU quota to handle these temporary resources. For more information on the Windows Server node upgrade process, see [Upgrade a node pool in AKS][nodepool-upgrade].
+> When you upgrade an AKS cluster, extra resources are temporarily consumed. These resources include available IP addresses in a virtual network subnet or virtual machine vCPU quota.
+>
+> For Windows Server containers, you can perform an upgrade operation to apply the latest node updates. If you don't have the available IP address space or vCPU quota to handle these temporary resources, the cluster upgrade process will fail. For more information on the Windows Server node upgrade process, see [Upgrade a node pool in AKS][nodepool-upgrade].
## Restricted VM sizes
-Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure that the required *kube-system* pods and your applications can reliably be scheduled, **don't use the following VM SKUs in AKS**:
+Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, **don't use the following VM SKUs in AKS**:
- Standard_A0 - Standard_A1
For the latest list of where you can deploy and run clusters, see [AKS region av
## Next steps
-Certain default limits and quotas can be increased. If your resource supports an increase, request the increase through an [Azure support request][azure-support] (for **Issue type**, select **Quota**).
+You can increase certain default limits and quotas. If your resource supports an increase, request the increase through an [Azure support request][azure-support] (for **Issue type**, select **Quota**).
<!-- LINKS - External --> [azure-support]: https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
aks Security Hardened Vm Host Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-hardened-vm-host-image.md
description: Learn about the security hardening in AKS VM host OS
Previously updated : 09/11/2019 Last updated : 03/29/2021 # Security hardening for AKS agent node host OS
-Azure Kubernetes Service (AKS) is a secure service compliant with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security hardening applied to AKS virtual machine hosts. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md).
+As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security hardening applied to AKS virtual machine (VM) hosts. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md).
> [!Note] > This document is scoped to Linux agents in AKS only.
-AKS clusters are deployed on host virtual machines, which run a security optimized OS which is utilized for containers running on AKS. This host OS is based on an **Ubuntu 16.04.LTS** image with additional security hardening and optimizations applied (see Security hardening details).
+AKS clusters are deployed on host VMs, which run a security-optimized OS used for containers running on AKS. This host OS is based on an **Ubuntu 16.04.LTS** image with more [security hardening](#security-hardening-features) and optimizations applied.
The goal of the security hardened host OS is to reduce the surface area of attack and optimize for the deployment of containers in a secure manner. > [!Important]
-> The security hardened OS is NOT CIS benchmarked. While there are overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft's own internal host security standards.
+> The security hardened OS is **not** CIS benchmarked. While it overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft's own internal host security standards.
## Security hardening features
-* AKS provides a security optimized host OS by default. There is no option to select an alternate operating system.
+* AKS provides a security-optimized host OS by default, but there's no option to select an alternate operating system.
-* Azure applies daily patches (including security patches) to AKS virtual machine hosts. Some of these patches will require a reboot, while others will not. You are responsible for scheduling AKS VM host reboots as needed. For guidance on how to automate AKS patching see [patching AKS nodes](./node-updates-kured.md).
+* Azure applies daily patches (including security patches) to AKS virtual machine hosts.
+  * Some of these patches require a reboot, while others don't.
+ * You're responsible for scheduling AKS VM host reboots as needed.
+ * For guidance on how to automate AKS patching, see [patching AKS nodes](./node-updates-kured.md).
## What is configured
The goal of the security hardened host OS is to reduce the surface area of attac
* To further reduce the attack surface area, some unnecessary kernel module drivers have been disabled in the OS.
-* The security hardened OS is built and maintained specifically for AKS and is NOT supported outside of the AKS platform.
+* The security hardened OS is built and maintained specifically for AKS and is **not** supported outside of the AKS platform.
## Next steps
-See the following articles for more information about AKS security:
+For more information about AKS security, see the following articles:
-[Azure Kubernetes Service (AKS)](./intro-kubernetes.md)
-
-[AKS security considerations ](./concepts-security.md)
-
-[AKS best practices ](./best-practices.md)
+* [Azure Kubernetes Service (AKS)](./intro-kubernetes.md)
+* [AKS security considerations](./concepts-security.md)
+* [AKS best practices](./best-practices.md)
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service
description: Understand the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS) Previously updated : 09/08/2020 Last updated : 03/29/2021 # Supported Kubernetes versions in Azure Kubernetes Service (AKS)
-The Kubernetes community releases minor versions roughly every three months. Recently the Kubernetes community has [increased the window of support for each version from 9 months to 12 months](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19. These releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. These patch releases include fixes for security vulnerabilities or major bugs.
+The Kubernetes community releases minor versions roughly every three months. Recently, the Kubernetes community has [increased the support window for each version from 9 months to 12 months](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19.
+
+Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
## Kubernetes versions
-Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme, which means that each version of Kubernetes follows this numbering scheme:
+Kubernetes uses the standard [Semantic Versioning](https://semver.org/) scheme for each version:
``` [major].[minor].[patch]
Example:
Each number in the version indicates general compatibility with the previous version:
-* Major versions change when incompatible API changes or backwards compatibility may be broken.
-* Minor versions change when functionality changes are made that are backwards compatible to the other minor releases.
-* Patch versions change when backwards-compatible bug fixes are made.
+* **Major versions** change when incompatible API updates are made or backwards compatibility may be broken.
+* **Minor versions** change when functionality updates are made that are backwards compatible to the other minor releases.
+* **Patch versions** change when backwards-compatible bug fixes are made.
-Users should aim to run the latest patch release of the minor version they're running, for example if your production cluster is on **`1.17.7`** and **`1.17.8`** is the latest available patch version available for the *1.17* series, you should upgrade to **`1.17.8`** as soon as you're able, to ensure your cluster is fully patched and supported.
+Aim to run the latest patch release of the minor version you're running. For example, if your production cluster is on **`1.17.7`** and **`1.17.8`** is the latest available patch version for the *1.17* series, upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
## Kubernetes version support policy
-AKS defines a generally available version, as a version enabled in all SLO or SLA measurements and when available in all regions. AKS supports three GA minor versions of Kubernetes:
+AKS defines a generally available version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
* The latest GA minor version that is released in AKS (which we'll refer to as N). * Two previous minor versions.
-* Each supported minor version also supports a maximum of two (2) stable patches.
-* AKS may also support preview versions, which are explicitly labeled and subject to [Preview terms and conditions][preview-terms].
+ * Each supported minor version also supports a maximum of two (2) stable patches.
+
+AKS may also support preview versions, which are explicitly labeled and subject to [Preview terms and conditions][preview-terms].
> [!NOTE] > AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions.
New minor version | Supported Version List
Where ".letter" is representative of patch versions.
-When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, if the current supported version list is:
+When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, the current supported version list is:
``` 1.17.a
When a new minor version is introduced, the oldest minor version and patch relea
1.15.f ```
-And AKS releases 1.18.\*, it means that all the 1.15.\* versions will be removed and will be out of support in 30 days.
+When AKS releases 1.18.\*, all the 1.15.\* versions are removed and will be out of support in 30 days.
> [!NOTE]
-> Please note, that if customers are running an unsupported Kubernetes version, they will be asked to upgrade when
-> requesting support for the cluster. Clusters running unsupported Kubernetes releases are not covered by the
-> [AKS support policies](./support-policies.md).
+> If customers are running an unsupported Kubernetes version, they will be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases are not covered by the [AKS support policies](./support-policies.md).
In addition to the above, AKS supports a maximum of two **patch** releases of a given minor version. So given the following supported versions:
New Supported Version List
### Supported `kubectl` versions
-You can use one minor version older or newer of `kubectl` relative to your *kube-apiserver* version, which is consistent with the [Kubernetes support policy for kubectl](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl).
+You can use one minor version older or newer of `kubectl` relative to your *kube-apiserver* version, consistent with the [Kubernetes support policy for kubectl](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl).
For example, if your *kube-apiserver* is at *1.17*, then you can use versions *1.16* to *1.18* of `kubectl` with that *kube-apiserver*.
To install or update your version of `kubectl`, run `az aks install-cli`.
You can reference upcoming version releases and deprecations on the [AKS Kubernetes Release Calendar](#aks-kubernetes-release-calendar).
-For new **minor** versions of Kubernetes
-1. AKS publishes a pre-announcement with the planned date of a new version release and respective old version deprecation on the [AKS Release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal.
-2. AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access, and sends an email to the subscription administrators with the planned version removal dates.
-````
-To find out who is your subscription administrators or to change it, please refer to [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator).
-````
-3. Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
+For new **minor** versions of Kubernetes:
+ * AKS publishes a pre-announcement with the planned date of a new version release and respective old version deprecation on the [AKS Release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal.
+ * AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access, and sends an email to the subscription administrators with the planned version removal dates.
+
+   To find out who your subscription administrators are or to change them, see [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator).
+ * Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
-For new **patch** versions of Kubernetes
- * Because of the urgent nature of patch versions, these can be introduced into the service as they become available.
- * In general, AKS does not do broad communications for the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
- * Users have **30 days** from the time a patch release is removed from AKS to upgrade into a supported patch and continue receiving support.
+For new **patch** versions of Kubernetes:
+ * Because of the urgent nature of patch versions, they can be introduced into the service as they become available.
+ * In general, AKS does not broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
+ * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support.
### Supported versions policy exceptions
-AKS reserves the right to add or remove new/existing versions that have been identified to have one or more critical production impacting bugs or security issues without advance notice.
+AKS reserves the right to add or remove new/existing versions with one or more critical production-impacting bugs or security issues without advance notice.
-Specific patch releases may be skipped, or rollout accelerated depending on the severity of the bug or security issue.
+Specific patch releases may be skipped or rollout accelerated, depending on the severity of the bug or security issue.
## Azure portal and CLI versions
-When you deploy an AKS cluster in the portal or with the Azure CLI, the cluster is defaulted to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*.
+When you deploy an AKS cluster in the portal or with the Azure CLI, the cluster defaults to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*.
To find out what versions are currently available for your subscription and region, use the [az aks get-versions][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region:
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
**How often should I expect to upgrade Kubernetes versions to stay in support?**
-Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments, at a minimum. This means starting with AKS clusters on 1.19, you will be able to upgrade at a minimum of once a year to stay on a supported version. For versions on 1.18 or below, the window of support remains at 9 months which requires an upgrade once every 9 months to stay on a supported version. It is highly recommended to regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
+Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you need to upgrade at least once a year to stay on a supported version.
+
+For versions on 1.18 or below, the window of support remains at 9 months, requiring an upgrade once every 9 months to stay on a supported version. Regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
**What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?**

If you're on the *n-3* version or older, it means you're outside of support and will be asked to upgrade. When your upgrade from version n-3 to n-2 succeeds, you're back within our support policies. For example:

- If the oldest supported AKS version is *1.15.a* and you are on *1.14.b* or older, you're outside of support.
+- When you successfully upgrade from *1.14.b* to *1.15.a* or higher, you're back within our support policies.
Downgrades are not supported.

**What does 'Outside of Support' mean**
-'Outside of Support' means that the version you're running is outside of the supported versions list, and you'll be asked to upgrade the cluster to a supported version when requesting support, unless you're within the 30-day grace period after version deprecation. Additionally, AKS doesn't make any runtime or other guarantees for clusters outside of the supported versions list.
+'Outside of Support' means that:
+* The version you're running is outside of the supported versions list.
+* You'll be asked to upgrade the cluster to a supported version when requesting support, unless you're within the 30-day grace period after version deprecation.
+
+Additionally, AKS doesn't make any runtime or other guarantees for clusters outside of the supported versions list.
**What happens when a user scales a Kubernetes cluster with a minor version that isn't supported?**
-For minor versions not supported by AKS, scaling in or out should continue to work, but there are no Quality of Service guarantees, so it's highly recommended to upgrade to bring your cluster back into support.
+For minor versions not supported by AKS, scaling in or out should continue to work. Since there are no Quality of Service guarantees, we recommend upgrading to bring your cluster back into support.
**Can a user stay on a Kubernetes version forever?**
-If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure contacts you to proactively upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf.
+If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf.
**What version does the control plane support if the node pool is not in one of the supported AKS versions?**
The control plane must be within a window of versions from all node pools. For d
**Can I skip multiple AKS versions during cluster upgrade?**
-When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. For example, upgrades between *1.12.x* -> *1.13.x* or *1.13.x* -> *1.14.x* are allowed, however *1.12.x* -> *1.14.x* is not.
+When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. For example, upgrades between:
+ * *1.12.x* -> *1.13.x*: allowed.
+ * *1.13.x* -> *1.14.x*: allowed.
+ * *1.12.x* -> *1.14.x*: not allowed.
-To upgrade, from *1.12.x* -> *1.14.x*, first upgrade from *1.12.x* -> *1.13.x*, then upgrade from *1.13.x* -> *1.14.x*.
+To upgrade from *1.12.x* -> *1.14.x*:
+1. Upgrade from *1.12.x* -> *1.13.x*.
+1. Upgrade from *1.13.x* -> *1.14.x*.
-Skipping multiple versions can only be done when upgrading from an unsupported version back into a supported version. For example, upgrade from an unsupported *1.10.x* --> a supported *1.15.x* can be completed.
+Skipping multiple versions can only be done when upgrading from an unsupported version back into a supported version. For example, you can upgrade from an unsupported *1.10.x* to a supported *1.15.x*.
**Can I create a new 1.xx.x cluster during its 30 day support window?**
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) > [!WARNING]
-> **The feature described in this document, pod security policy (preview), will begin deprecation with Kubernetes version 1.21, with its removal in version 1.25.** As Kubernetes Upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives. The previous deprecation announcement was made at the time as there was not a viable option for customers. Now that the Kubernetes community is working on an alternative, there no longer is a pressing need to deprecate ahead of Kubernetes.
+> **The feature described in this document, pod security policy (preview), will begin [deprecation](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) with Kubernetes version 1.21, with its removal in version 1.25.** As Kubernetes Upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives. The previous deprecation announcement was made at the time as there was not a viable option for customers. Now that the Kubernetes community is working on an alternative, there no longer is a pressing need to deprecate ahead of Kubernetes.
> > After pod security policy (preview) is deprecated, you must disable the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-transformation-policies.md
The `set-body` policy can be configured to use the [Liquid](https://shopify.gith
> [!IMPORTANT] > The implementation of Liquid used in the `set-body` policy is configured in 'C# mode'. This is particularly important when doing things such as filtering. As an example, using a date filter requires the use of Pascal casing and C# date formatting e.g.: >
-> {{body.foo.startDateTime| Date:"yyyyMMddTHH:mm:ddZ"}}
+> {{body.foo.startDateTime| Date:"yyyyMMddTHH:mm:ssZ"}}
> [!IMPORTANT] > In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set Content-Type to either application/xml, text/xml (or any type ending with +xml); for a JSON body, it must be application/json, text/json (or any type ending with +json).
For more information, see the following topics:
+ [Policies in API Management](api-management-howto-policies.md) + [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-+ [Policy samples](./policy-reference.md)
++ [Policy samples](./policy-reference.md)
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
+
+ Title: Custom container CI/CD from GitHub Actions
+description: Learn how to use GitHub Actions to deploy your custom Linux container to App Service from a CI/CD pipeline.
+ms.devlang: na
+ Last updated : 12/04/2020
+# Deploy a custom container to App Service using GitHub Actions
+
+[GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development workflow. With the [Azure Web Deploy action](https://github.com/Azure/webapps-deploy), you can automate your workflow to deploy custom containers to [App Service](overview.md) using GitHub Actions.
+
+A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that are in the workflow.
+
+For an Azure App Service container workflow, the file has three sections:
+
+|Section |Tasks |
+|||
+|**Authentication** | 1. Retrieve a service principal or publish profile. <br /> 2. Create a GitHub secret. |
+|**Build** | 1. Create the environment. <br /> 2. Build the container image. |
+|**Deploy** | 1. Deploy the container image. |
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- A GitHub account. If you don't have one, sign up for [free](https://github.com/join). You need to have code in a GitHub repository to deploy to Azure App Service.
+- A working container registry and Azure App Service app for containers. This example uses Azure Container Registry. Make sure to complete the full deployment to Azure App Service for containers. Unlike regular web apps, web apps for containers do not have a default landing page. Publish the container to have a working example.
+ - [Learn how to create a containerized Node.js application using Docker, push the container image to a registry, and then deploy the image to Azure App Service](/azure/developer/javascript/tutorial-vscode-docker-node-01)
+
+## Generate deployment credentials
+
+The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal but the process requires more steps.
+
+Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow.
+
+# [Publish profile](#tab/publish-profile)
+
+A publish profile is an app-level credential. Set up your publish profile as a GitHub secret.
+
+1. Go to your app service in the Azure portal.
+
+1. On the **Overview** page, select **Get Publish profile**.
+
+ > [!NOTE]
+ > As of October 2020, Linux web apps will need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` **before downloading the file**. This requirement will be removed in the future. See [Configure an App Service app in the Azure portal](./configure-common.md), to learn how to configure common web app settings.
+
+1. Save the downloaded file. You'll use the contents of the file to create a GitHub secret.
+
+# [Service principal](#tab/service-principal)
+
+You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+```azurecli-interactive
+az ad sp create-for-rbac --name "myApp" --role contributor \
+ --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \
+ --sdk-auth
+```
+
+In the example, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later.
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+> [!IMPORTANT]
+> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
++
+## Configure the GitHub secret for authentication
+
+# [Publish profile](#tab/publish-profile)
+
+In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+
+To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.
+
+When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` in the deploy Azure Web App action. For example:
+
+```yaml
+- uses: azure/webapps-deploy@v2
+ with:
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+```
+
+# [Service principal](#tab/service-principal)
+
+In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+
+To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret a name such as `AZURE_CREDENTIALS`.
+
+When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
+
+```yaml
+- uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+```
+++
+## Configure GitHub secrets for your registry
+
+Define secrets to use with the Docker Login action. The example in this document uses Azure Container Registry for the container registry.
+
+1. Go to your container in the Azure portal or Docker and copy the username and password. You can find the Azure Container Registry username and password in the Azure portal under **Settings** > **Access keys** for your registry.
+
+2. Define a new secret for the registry username named `REGISTRY_USERNAME`.
+
+3. Define a new secret for the registry password named `REGISTRY_PASSWORD`.
+
+## Build the Container image
+
+The following example shows part of the workflow that builds a Node.js Docker image. Use [Docker Login](https://github.com/azure/docker-login) to log in to a private container registry. This example uses Azure Container Registry, but the same action works for other registries.
++
+```yaml
+name: Linux Container Node Workflow
+
+on: [push]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/docker-login@v1
+ with:
+ login-server: mycontainer.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+ - run: |
+ docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
+ docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
+```
+
+You can also use [Docker Login](https://github.com/azure/docker-login) to log in to multiple container registries at the same time. This example includes two new GitHub secrets for authentication with docker.io. The example assumes that there is a Dockerfile at the root level of the repository.
+
+```yml
+name: Linux Container Node Workflow
+
+on: [push]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/docker-login@v1
+ with:
+ login-server: mycontainer.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+ - uses: azure/docker-login@v1
+ with:
+ login-server: index.docker.io
+ username: ${{ secrets.DOCKERIO_USERNAME }}
+ password: ${{ secrets.DOCKERIO_PASSWORD }}
+ - run: |
+ docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
+ docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
+```
+
+## Deploy to an App Service container
+
+To deploy your image to a custom container in App Service, use the `azure/webapps-deploy@v2` action. This action has seven parameters:
+
+| **Parameter** | **Explanation** |
+|||
+| **app-name** | (Required) Name of the App Service app |
+| **publish-profile** | (Optional) Applies to Web Apps (Windows and Linux) and Web App Containers (Linux). The multi-container scenario isn't supported. Publish profile (\*.publishsettings) file contents with Web Deploy secrets |
+| **slot-name** | (Optional) Enter an existing Slot other than the Production slot |
+| **package** | (Optional) Applies to Web App only: Path to package or folder. \*.zip, \*.war, \*.jar or a folder to deploy |
+| **images** | (Required) Applies to Web App Containers only: Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For a multi-container app, multiple container image names can be provided (multi-line separated) |
+| **configuration-file** | (Optional) Applies to Web App Containers only: Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container apps. |
+| **startup-command** | (Optional) Enter the start-up command. For example, `dotnet run` or `dotnet filename.dll` |
+
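As a hedged sketch of how the multi-container parameters in the table combine (the app name, registry, image names, and compose file path are placeholders), a deploy step might look like the following. Because the table notes that `publish-profile` doesn't cover the multi-container scenario, this assumes an earlier `azure/login` step with a service principal.

```yaml
# Sketch only: assumes a prior azure/login step authenticated the run,
# since publish profiles don't support the multi-container scenario.
# App name, registry, images, and compose file path are placeholders.
- uses: azure/webapps-deploy@v2
  with:
    app-name: 'myapp'
    configuration-file: 'docker-compose.yml'
    images: |
      mycontainer.azurecr.io/web:${{ github.sha }}
      mycontainer.azurecr.io/api:${{ github.sha }}
```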
+# [Publish profile](#tab/publish-profile)
+
+```yaml
+name: Linux Container Node Workflow
+
+on: [push]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+
+ - uses: azure/docker-login@v1
+ with:
+ login-server: mycontainer.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+
+ - run: |
+ docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
+ docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
+
+ - uses: azure/webapps-deploy@v2
+ with:
+ app-name: 'myapp'
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+ images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}'
+```
+# [Service principal](#tab/service-principal)
+
+```yaml
+on: [push]
+
+name: Linux_Container_Node_Workflow
+
+jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+ # checkout the repo
+ - name: 'Checkout GitHub Action'
+ uses: actions/checkout@main
+
+ - name: 'Login via Azure CLI'
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - uses: azure/docker-login@v1
+ with:
+ login-server: mycontainer.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
+ - run: |
+ docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
+ docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
+
+ - uses: azure/webapps-deploy@v2
+ with:
+ app-name: 'myapp'
+ images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}'
+
+ - name: Azure logout
+ run: |
+ az logout
+```
+++
+## Next steps
+
+You can find our set of Actions grouped into different repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.
+
+- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples)
+
+- [Azure login](https://github.com/Azure/login)
+
+- [Azure WebApp](https://github.com/Azure/webapps-deploy)
+
+- [Docker login/logout](https://github.com/Azure/docker-login)
+
+- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
+
+- [K8s deploy](https://github.com/Azure/k8s-deploy)
+
+- [Starter Workflows](https://github.com/actions/starter-workflows)
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
+
+ Title: Configure CI/CD with GitHub Actions
+description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with GitHub Actions. Customize the build tasks and execute complex deployments.
+ms.devlang: na
+ Last updated : 09/14/2020
+# Deploy to App Service using GitHub Actions
+
+Get started with [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions) to automate your workflow and deploy to [Azure App Service](overview.md) from GitHub.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
+- A working Azure App Service app.
+ - .NET: [Create an ASP.NET Core web app in Azure](quickstart-dotnetcore.md)
+ - ASP.NET: [Create an ASP.NET Framework web app in Azure](quickstart-dotnet-framework.md)
+ - JavaScript: [Create a Node.js web app in Azure App Service](quickstart-nodejs.md)
+ - Java: [Create a Java app on Azure App Service](quickstart-java.md)
+ - Python: [Create a Python app in Azure App Service](quickstart-python.md)
+
+## Workflow file overview
+
+A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
+
+The file has three sections:
+
+|Section |Tasks |
+|||
+|**Authentication** | 1. Define a service principal or publish profile. <br /> 2. Create a GitHub secret. |
+|**Build** | 1. Set up the environment. <br /> 2. Build the web app. |
+|**Deploy** | 1. Deploy the web app. |
+
+## Use the Deployment Center
+
+You can quickly get started with GitHub Actions by using the App Service Deployment Center. This will automatically generate a workflow file based on your application stack and commit it to your GitHub repository in the correct directory.
+
+1. Navigate to your web app in the Azure portal.
+1. On the left side, select **Deployment Center**.
+1. Under **Continuous Deployment (CI / CD)**, select **GitHub**.
+1. Next, select **GitHub Actions**.
+1. Use the dropdowns to select your GitHub repository, branch, and application stack.
+   - If the selected branch is protected, you can still continue to add the workflow file. Be sure to review your branch protections before continuing.
+1. On the final screen, review your selections and preview the workflow file that will be committed to the repository. If the selections are correct, select **Finish**.
+
+This will commit the workflow file to the repository. The workflow to build and deploy your app will start immediately.
+
+## Set up a workflow manually
+
+You can also deploy a workflow without using the Deployment Center. To do so, you will need to first generate deployment credentials.
+
+## Generate deployment credentials
+
+The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal but the process requires more steps.
+
+Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow.
+
+# [Publish profile](#tab/applevel)
+
+A publish profile is an app-level credential. Set up your publish profile as a GitHub secret.
+
+1. Go to your app service in the Azure portal.
+
+1. On the **Overview** page, select **Get Publish profile**.
+
+1. Save the downloaded file. You'll use the contents of the file to create a GitHub secret.
+
+> [!NOTE]
+> As of October 2020, Linux web apps will need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` **before downloading the publish profile**. This requirement will be removed in the future.
+
+# [Service principal](#tab/userlevel)
+
+You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+```azurecli-interactive
+az ad sp create-for-rbac --name "myApp" --role contributor \
+ --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \
+ --sdk-auth
+```
+
+In the example above, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object, similar to the following, with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later.
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+> [!IMPORTANT]
+> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
+++
+## Configure the GitHub secret
++
+# [Publish profile](#tab/applevel)
+
+In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+
+To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.
+
+When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` in the deploy Azure Web App action. For example:
+
+```yaml
+- uses: azure/webapps-deploy@v2
+ with:
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+```
+
+# [Service principal](#tab/userlevel)
+
+In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+
+To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+
+When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
+
+```yaml
+- uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+```
+++
+## Set up the environment
+
+Setting up the environment can be done using one of the setup actions.
+
+|**Language** |**Setup Action** |
+|||
+|**.NET** | `actions/setup-dotnet` |
+|**ASP.NET** | `actions/setup-dotnet` |
+|**Java** | `actions/setup-java` |
+|**JavaScript** | `actions/setup-node` |
+|**Python** | `actions/setup-python` |
+
+The following examples show how to set up the environment for the different supported languages:
+
+**.NET**
+
+```yaml
+  - name: Setup Dotnet 3.1.x
+ uses: actions/setup-dotnet@v1
+ with:
+      dotnet-version: '3.1.x'
+```
+
+**ASP.NET**
+
+```yaml
+ - name: Install Nuget
+ uses: nuget/setup-nuget@v1
+ with:
+ nuget-version: ${{ env.NUGET_VERSION}}
+```
+
+**Java**
+
+```yaml
+ - name: Setup Java 1.8.x
+ uses: actions/setup-java@v1
+ with:
+ # If your pom.xml <maven.compiler.source> version is not in 1.8.x,
+ # change the Java version to match the version in pom.xml <maven.compiler.source>
+ java-version: '1.8.x'
+```
+
+**JavaScript**
+
+```yaml
+env:
+ NODE_VERSION: '14.x' # set this to the node version to use
+
+jobs:
+ build-and-deploy:
+ name: Build and Deploy
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@main
+ - name: Use Node.js ${{ env.NODE_VERSION }}
+ uses: actions/setup-node@v1
+ with:
+ node-version: ${{ env.NODE_VERSION }}
+```
+**Python**
+
+```yaml
+ - name: Setup Python 3.x
+ uses: actions/setup-python@v1
+ with:
+ python-version: 3.x
+```
+
+## Build the web app
+
+The process of building a web app and deploying to Azure App Service changes depending on the language.
+
+The following examples show the part of the workflow that builds the web app, in different supported languages.
+
+For all languages, you can set the web app root directory with `working-directory`.
+
+**.NET**
+
+The environment variable `AZURE_WEBAPP_PACKAGE_PATH` sets the path to your web app project.
+
+```yaml
+- name: dotnet build and publish
+ run: |
+ dotnet restore
+ dotnet build --configuration Release
+ dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+```
+**ASP.NET**
+
+You can restore NuGet dependencies and run msbuild with `run`.
+
+```yaml
+- name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file
+ run: nuget restore
+
+- name: Add msbuild to PATH
+ uses: microsoft/setup-msbuild@v1.0.0
+
+- name: Run msbuild
+ run: msbuild .\SampleWebApplication.sln
+```
+
+**Java**
+
+```yaml
+- name: Build with Maven
+ run: mvn package --file pom.xml
+```
+
+**JavaScript**
+
+For Node.js, you can set `working-directory` or change to the npm directory with `pushd`.
+
+```yaml
+- name: npm install, build, and test
+ run: |
+ npm install
+ npm run build --if-present
+ npm run test --if-present
+ working-directory: my-app-folder # set to the folder with your app if it is not the root directory
+```
+
+**Python**
+
+```yaml
+- name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+```
++
+## Deploy to App Service
+
+To deploy your code to an App Service app, use the `azure/webapps-deploy@v2` action. This action has four parameters:
+
+| **Parameter** | **Explanation** |
+|||
+| **app-name** | (Required) Name of the App Service app |
+| **publish-profile** | (Optional) Publish profile file contents with Web Deploy secrets |
+| **package** | (Optional) Path to package or folder. The path can include \*.zip, \*.war, \*.jar, or a folder to deploy |
+| **slot-name** | (Optional) Enter an existing slot other than the production [slot](deploy-staging-slots.md) |
++
+# [Publish profile](#tab/applevel)
+
+### .NET Core
+
+Build and deploy a .NET Core app to Azure using an Azure publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier.
+
+```yaml
+name: .NET Core CI
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app-name # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+ DOTNET_VERSION: '3.1.x' # set this to the dot net version to use
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ # Checkout the repo
+ - uses: actions/checkout@main
+
+ # Setup .NET Core SDK
+ - name: Setup .NET Core
+ uses: actions/setup-dotnet@v1
+ with:
+ dotnet-version: ${{ env.DOTNET_VERSION }}
+
+ # Run dotnet build and publish
+ - name: dotnet build and publish
+ run: |
+ dotnet restore
+ dotnet build --configuration Release
+ dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+
+ # Deploy to Azure Web apps
+ - name: 'Run Azure webapp deploy action using publish profile credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} # Define secret variable in repository settings as per action documentation
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+```
+
+### ASP.NET
+
+Build and deploy an ASP.NET MVC app that uses NuGet and `publish-profile` for authentication.
++
+```yaml
+name: Deploy ASP.NET MVC App deploy to Azure Web App
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+  NUGET_VERSION: '5.3.x' # set this to the NuGet version to use
+
+jobs:
+ build-and-deploy:
+ runs-on: windows-latest
+ steps:
+
+ - uses: actions/checkout@main
+
+ - name: Install Nuget
+ uses: nuget/setup-nuget@v1
+ with:
+ nuget-version: ${{ env.NUGET_VERSION}}
+ - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file
+ run: nuget restore
+
+ - name: Add msbuild to PATH
+ uses: microsoft/setup-msbuild@v1.0.0
+
+ - name: Run MSBuild
+ run: msbuild .\SampleWebApplication.sln
+
+ - name: 'Run Azure webapp deploy action using publish profile credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} # Define secret variable in repository settings as per action documentation
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/'
+```
+
+### Java
+
+Build and deploy a Java Spring app to Azure using an Azure publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier.
+
+```yaml
+name: Java CI with Maven
+
+on: [push]
+
+jobs:
+ build:
+
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+ - name: Set up JDK 1.8
+ uses: actions/setup-java@v1
+ with:
+ java-version: 1.8
+ - name: Build with Maven
+ run: mvn -B package --file pom.xml
+ working-directory: my-app-path
+ - name: Azure WebApp
+ uses: Azure/webapps-deploy@v2
+ with:
+ app-name: my-app-name
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+ package: my/target/*.jar
+```
+
+To deploy a `war` instead of a `jar`, change the `package` value.
++
+```yaml
+ - name: Azure WebApp
+ uses: Azure/webapps-deploy@v2
+ with:
+ app-name: my-app-name
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+ package: my/target/*.war
+```
+
+### JavaScript
+
+Build and deploy a Node.js app to Azure using the app's publish profile. The `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier.
+
+```yaml
+# File: .github/workflows/workflow.yml
+name: JavaScript CI
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app-name # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root
+ NODE_VERSION: '14.x' # set this to the node version to use
+
+jobs:
+ build-and-deploy:
+ name: Build and Deploy
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@main
+ - name: Use Node.js ${{ env.NODE_VERSION }}
+ uses: actions/setup-node@v1
+ with:
+ node-version: ${{ env.NODE_VERSION }}
+ - name: npm install, build, and test
+ run: |
+ # Build and test the project, then
+ # deploy to Azure Web App.
+ npm install
+ npm run build --if-present
+ npm run test --if-present
+ working-directory: my-app-path
+ - name: 'Deploy to Azure WebApp'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+```
+
+### Python
+
+Build and deploy a Python app to Azure using the app's publish profile. Note how the `publish-profile` input references the `AZURE_WEBAPP_PUBLISH_PROFILE` secret that you created earlier.
+
+```yaml
+name: Python CI
+
+on:
+ [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-web-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+ - name: Set up Python 3.x
+ uses: actions/setup-python@v2
+ with:
+ python-version: 3.x
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+ - name: Building web app
+ uses: azure/appservice-build@v2
+ - name: Deploy web App using GH Action azure/webapps-deploy
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+```
+
+# [Service principal](#tab/userlevel)
+
+### .NET Core
+
+Build and deploy a .NET Core app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier.
+
+```yaml
+name: .NET Core
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+ DOTNET_VERSION: '3.1.x' # set this to the .NET Core version to use
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ # Checkout the repo
+ - uses: actions/checkout@main
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+
+ # Setup .NET Core SDK
+ - name: Setup .NET Core
+ uses: actions/setup-dotnet@v1
+ with:
+ dotnet-version: ${{ env.DOTNET_VERSION }}
+
+ # Run dotnet build and publish
+ - name: dotnet build and publish
+ run: |
+ dotnet restore
+ dotnet build --configuration Release
+ dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+
+ # Deploy to Azure Web apps
+ - name: 'Run Azure webapp deploy action using Azure credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+
+ - name: logout
+ run: |
+ az logout
+```
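+
+One common way to generate the credentials JSON stored in the `AZURE_CREDENTIALS` secret (an illustrative sketch with placeholder values; the article covers the exact steps earlier) is:
+
+```bash
+# Create a service principal scoped to the target resource group and
+# emit the JSON expected by the azure/login action.
+az ad sp create-for-rbac --name "github-actions-deploy" \
+  --role contributor \
+  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> \
+  --sdk-auth
+```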
+
+### ASP.NET
+
+Build and deploy an ASP.NET MVC app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier.
+
+```yaml
+name: Deploy ASP.NET MVC app to Azure Web App
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+ NUGET_VERSION: '5.3.x' # set this to the NuGet version to use
+
+jobs:
+ build-and-deploy:
+ runs-on: windows-latest
+ steps:
+
+ # checkout the repo
+ - uses: actions/checkout@main
+
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Install Nuget
+ uses: nuget/setup-nuget@v1
+ with:
+ nuget-version: ${{ env.NUGET_VERSION}}
+ - name: NuGet to restore dependencies as well as project-specific tools that are specified in the project file
+ run: nuget restore
+
+ - name: Add msbuild to PATH
+ uses: microsoft/setup-msbuild@v1.0.0
+
+ - name: Run MSBuild
+ run: msbuild .\SampleWebApplication.sln
+
+ - name: 'Run Azure webapp deploy action using Azure credentials'
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }} # Replace with your app name
+ package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/SampleWebApplication/'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### Java
+
+Build and deploy a Java Spring app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier.
+
+```yaml
+name: Java CI with Maven
+
+on: [push]
+
+jobs:
+ build:
+
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ - name: Set up JDK 1.8
+ uses: actions/setup-java@v1
+ with:
+ java-version: 1.8
+ - name: Build with Maven
+ run: mvn -B package --file pom.xml
+ working-directory: complete
+ - name: Azure WebApp
+ uses: Azure/webapps-deploy@v2
+ with:
+ app-name: my-app-name
+ package: my/target/*.jar
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### JavaScript
+
+Build and deploy a Node.js app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier.
+
+```yaml
+name: JavaScript CI
+
+on: [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root
+ NODE_VERSION: '14.x' # set this to the node version to use
+
+jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+ # checkout the repo
+ - name: 'Checkout GitHub Action'
+ uses: actions/checkout@main
+
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Setup Node ${{ env.NODE_VERSION }}
+ uses: actions/setup-node@v1
+ with:
+ node-version: ${{ env.NODE_VERSION }}
+
+ - name: 'npm install, build, and test'
+ run: |
+ npm install
+ npm run build --if-present
+ npm run test --if-present
+ working-directory: my-app-path
+
+ # deploy web app using Azure credentials
+ - uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+```
+
+### Python
+
+Build and deploy a Python app to Azure using an Azure service principal. Note how the `creds` input references the `AZURE_CREDENTIALS` secret that you created earlier.
+
+```yaml
+name: Python application
+
+on:
+ [push]
+
+env:
+ AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v2
+
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Set up Python 3.x
+ uses: actions/setup-python@v2
+ with:
+ python-version: 3.x
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+ - name: Deploy web App using GH Action azure/webapps-deploy
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+ - name: logout
+ run: |
+ az logout
+```
++++
+## Next steps
+
+You can find our set of Actions grouped into different repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.
+
+- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples)
+
+- [Azure login](https://github.com/Azure/login)
+
+- [Azure WebApp](https://github.com/Azure/webapps-deploy)
+
+- [Azure WebApp for containers](https://github.com/Azure/webapps-container-deploy)
+
+- [Docker login/logout](https://github.com/Azure/docker-login)
+
+- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
+
+- [K8s deploy](https://github.com/Azure/k8s-deploy)
+
+- [Starter Workflows](https://github.com/actions/starter-workflows)
app-service Faq App Service Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-app-service-linux.md
We have automatic port detection. You can also specify an app setting called *WE
No, the platform handles HTTPS termination at the shared front ends.
+**Do I need to use the PORT variable in code for built-in containers?**
+
+No, the PORT variable is not necessary because the platform detects the port automatically. If no port is detected, it defaults to 80.
+To manually configure a custom port, use the EXPOSE instruction in the Dockerfile and set the WEBSITES_PORT app setting to the port to bind in the container.
+
+**Do I need to use WEBSITES_PORT for custom containers?**
+
+Yes, this is required for custom containers. To manually configure a custom port, use the EXPOSE instruction in the Dockerfile and set the WEBSITES_PORT app setting to the port to bind in the container.
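+
+As an illustrative sketch of this custom port configuration (the port value 8080 and the resource names are placeholders, not values from this article):
+
+```bash
+# Assumes the Dockerfile already contains "EXPOSE 8080" and the app listens on 8080.
+# Set the WEBSITES_PORT app setting so App Service routes traffic to that port.
+az webapp config appsettings set \
+  --resource-group <resource-group-name> \
+  --name <app-name> \
+  --settings WEBSITES_PORT=8080
+```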
+
+**Can I use ASPNETCORE_URLS in the Docker image?**
+
+Yes, overwrite the environment variable before the .NET Core app starts.
+For example, in the init.sh script: `export ASPNETCORE_URLS={Your value}`.
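+
+A minimal sketch of such a startup script (the script name, port, and assembly path are assumptions for illustration only):
+
+```bash
+#!/bin/bash
+# init.sh - hypothetical custom startup script
+# Override the URL binding before the .NET Core app starts.
+export ASPNETCORE_URLS="http://+:8080"
+dotnet /home/site/wwwroot/MyApp.dll
+```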
+ ## Multi-container with Docker Compose **How do I configure Azure Container Registry (ACR) to use with multi-container?**
You can submit your idea at the [Web Apps feedback forum](https://aka.ms/webapps
- [What is Azure App Service on Linux?](overview.md#app-service-on-linux) - [Set up staging environments in Azure App Service](deploy-staging-slots.md) - [Continuous Deployment with Web App for Containers](./deploy-ci-cd-custom-container.md)
+- [Things You Should Know: Web Apps and Linux](https://techcommunity.microsoft.com/t5/apps-on-azure/things-you-should-know-web-apps-and-linux/ba-p/392472)
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-nodejs.md
You can deploy changes to this app by using the same process and choosing the ex
In this section, you learn how to view (or "tail") the logs from the running App Service app. Any calls to `console.log` in the app are displayed in the output window in Visual Studio Code.
-Find the app in the **AZURE APP SERVICE** explorer, right-click the app, and choose **View Streaming Logs**.
+Find the app in the **AZURE APP SERVICE** explorer, right-click the app, and choose **Start Streaming Logs**.
The VS Code output window opens with a connection to the log stream.
-![View Streaming Logs](./media/quickstart-nodejs/view-logs.png)
+![Start Streaming Logs](./media/quickstart-nodejs/view-logs.png)
:::image type="content" source="./media/quickstart-nodejs/enable-restart.png" alt-text="Screenshot of the VS Code prompt to enable file logging and restart the web app, with the yes button selected.":::
app-service Quickstart Python Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python-portal.md
description: Get started with Azure App Service by deploying your first Python a
Last updated 04/01/2021 + # Quickstart: Create a Python app using Azure App Service on Linux (Azure portal)
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
Last updated 11/10/2020
zone_pivot_groups: python-frameworks-01 adobe-target: true
-adobe-target-activity: DocsExpΓÇô377467ΓÇôA/BΓÇô Quickstarts/Python AppΓÇô12.11
+adobe-target-activity: DocsExpΓÇô393165ΓÇôA/BΓÇôDocs/PythonQuickstartΓÇôCLIvsPortalΓÇôFY21Q4
adobe-target-experience: Experience B
-adobe-target-content: ./quickstart-python-1
+adobe-target-content: ./quickstart-python-portal
# Quickstart: Create a Python app using Azure App Service on Linux
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/create-multiple-sites-portal.md
In this tutorial, you learn how to:
> * Create an application gateway > * Create virtual machines for backend servers > * Create backend pools with the backend servers
-> * Create backend listeners
+> * Create listeners
> * Create routing rules > * Edit Hosts file for name resolution
To restore the hosts file:
## Next steps > [!div class="nextstepaction"]
-> [Learn more about what you can do with Azure Application Gateway](./overview.md)
+> [Learn more about what you can do with Azure Application Gateway](./overview.md)
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
Title: Deploy a Linux Hybrid Runbook Worker in Azure Automation
description: This article tells how to install an Azure Automation Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 04/02/2021 Last updated : 04/06/2021
There are two methods to deploy a Hybrid Runbook Worker. You can import and run
### Importing a runbook from the Runbook Gallery
-The import procedure is described in detail in [Import a PowerShell runbook from GitHub with the Azure portal](automation-runbook-gallery.md#import-a-powershell-runbook-from-github-with-the-azure-portal). The name of the runbook to import is **Create Automation Linux HybridWorker**.
+The import procedure is described in detail in [Import runbooks from GitHub with the Azure portal](automation-runbook-gallery.md#import-runbooks-from-github-with-the-azure-portal). The name of the runbook to import is **Create Automation Linux HybridWorker**.
The runbook uses the following parameters.
automation Automation Runbook Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-gallery.md
Title: Use Azure Automation runbooks and modules in PowerShell Gallery
-description: This article tells how to use runbooks and modules from Microsoft and the community in PowerShell Gallery.
+description: This article tells how to use runbooks and modules from Microsoft GitHub repos and the PowerShell Gallery.
Previously updated : 03/04/2021 Last updated : 04/07/2021
-# Use runbooks and modules in PowerShell Gallery
+# Use existing runbooks and modules
-Rather than creating your own runbooks and modules in Azure Automation, you can access scenarios that have already been built by Microsoft and the community. You can get PowerShell runbooks and [modules](#modules-in-powershell-gallery) from the PowerShell Gallery and [Python runbooks](#use-python-runbooks) from the Azure Automation GitHub organization. You can also contribute to the community by sharing [scenarios that you develop](#add-a-powershell-runbook-to-the-gallery).
+Rather than creating your own runbooks and modules in Azure Automation, you can access scenarios that have already been built by Microsoft and the community. You can get Azure-related PowerShell and Python runbooks from the Runbook Gallery in the Azure portal, and [modules](#modules-in-the-powershell-gallery) and [runbooks](#runbooks-in-the-powershell-gallery) (which may or may not be specific to Azure) from the PowerShell Gallery. You can also contribute to the community by sharing [scenarios that you develop](#contribute-to-the-community).
> [!NOTE]
-> The TechNet Script Center is retiring. All of the runbooks from Script Center in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation) For more information, see [here](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
+> The TechNet Script Center is retiring. All of the runbooks from Script Center in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation). For more information, see [Azure Automation Runbooks moving to GitHub](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
-## Runbooks in PowerShell Gallery
+## Import runbooks from GitHub with the Azure portal
-The [PowerShell Gallery](https://www.powershellgallery.com/packages) provides a variety of runbooks from Microsoft and the community that you can import into Azure Automation. To use one, download a runbook from the gallery, or you can directly import runbooks from the gallery, or from your Automation account in the Azure portal.
+1. In the Azure portal, open your Automation account.
+2. Select **Runbooks gallery** under **Process Automation**.
+3. Select **Source: GitHub**.
+4. You can use the filters above the list to narrow the display by publisher, type, and sort. Locate the gallery item you want and select it to view its details.
+
+ :::image type="content" source="./media/automation-runbook-gallery/browse-gallery-github.png" alt-text="Browsing runbook gallery." lightbox="./media/automation-runbook-gallery/browse-gallery-github-expanded.png":::
+
+5. To import an item, click **Import** on the details page.
+
+ :::image type="content" source="./media/automation-runbook-gallery/gallery-item-import.png" alt-text="Gallery item import.":::
+
+6. Optionally, change the name of the runbook on the import blade, and then click **OK** to import the runbook.
+
+ :::image type="content" source="./media/automation-runbook-gallery/gallery-item-import-blade.png" alt-text="Gallery item import blade.":::
+
+7. The runbook appears on the **Runbooks** tab for the Automation account.
+
+## Runbooks in the PowerShell Gallery
+
+> [!IMPORTANT]
+> You should validate the contents of any runbooks that you get from the PowerShell Gallery. Use extreme caution in installing and running them in a production environment.
+
+The [PowerShell Gallery](https://www.powershellgallery.com/packages) provides various runbooks from Microsoft and the community that you can import into Azure Automation. To use one, download the runbook from the gallery, or import it directly from the gallery or from your Automation account in the Azure portal.
> [!NOTE] > Graphical runbooks are not supported in PowerShell Gallery.
-You can only import directly from the PowerShell Gallery using the Azure portal. You cannot perform this function using PowerShell.
+You can only import directly from the PowerShell Gallery using the Azure portal. You cannot perform this function using PowerShell. The procedure is the same as shown in [Import runbooks from GitHub with the Azure portal](#import-runbooks-from-github-with-the-azure-portal), except that the **Source** will be **PowerShell Gallery**.
-> [!NOTE]
-> You should validate the contents of any runbooks that you get from the PowerShell Gallery and use extreme caution in installing and running them in a production environment.
-## Modules in PowerShell Gallery
+## Modules in the PowerShell Gallery
PowerShell modules contain cmdlets that you can use in your runbooks. Existing modules that you can install in Azure Automation are available in the [PowerShell Gallery](https://www.powershellgallery.com). You can launch this gallery from the Azure portal and install the modules directly into Azure Automation, or you can manually download and install them.
-## Common scenarios available in PowerShell Gallery
+You can also find modules to import in the Azure portal. They're listed for your Automation Account in the **Modules gallery** under **Shared resources**.
+
+## Common scenarios available in the PowerShell Gallery
The list below contains a few runbooks that support common scenarios. For a full list of runbooks created by the Azure Automation team, see [AzureAutomationTeam profile](https://www.powershellgallery.com/profiles/AzureAutomationTeam).
The list below contains a few runbooks that support common scenarios. For a full
* [Copy-ItemFromAzureVM](https://www.powershellgallery.com/packages/Copy-ItemFromAzureVM/) - Copies a remote file from a Windows Azure virtual machine. * [Copy-ItemToAzureVM](https://www.powershellgallery.com/packages/Copy-ItemToAzureVM/) - Copies a local file to an Azure virtual machine.
-## Import a PowerShell runbook from GitHub with the Azure portal
+## Contribute to the community
-1. In the Azure portal, open your Automation account.
-1. Select **Runbooks gallery** under **Process Automation**.
-1. Select **Source: GitHub**.
-1. You can use the filters above the list to narrow the display by publisher, type, and sort. Locate the gallery item you want and select it to view its details.
+We strongly encourage you to contribute and help grow the Azure Automation community. Share the amazing runbooks you've built with the community. Your contributions will be appreciated!
- :::image type="content" source="media/automation-runbook-gallery/browse-gallery-github-sm.png" alt-text="Browsing the GitHub gallery." lightbox="media/automation-runbook-gallery/browse-gallery-github-lg.png":::
+### Add a runbook to the GitHub Runbook gallery
-1. To import an item, click **Import** on the details blade.
+You can add new PowerShell or Python runbooks to the Runbook gallery with this GitHub workflow.
- :::image type="content" source="media/automation-runbook-gallery/gallery-item-details-blade-github-sm.png" alt-text="Detailed view of a runbook from the GitHub gallery." lightbox="media/automation-runbook-gallery/gallery-item-details-blade-github-lg.png":::
+1. Create a public repository on GitHub, and add the runbook and any other necessary files (like readme.md, description, and so on).
+1. Add the topic `azureautomationrunbookgallery` to make sure the repository is discovered by our service, so it can be displayed in the Automation Runbook gallery.
+1. If the runbook that you created is a PowerShell workflow, add the topic `PowerShellWorkflow`. If it's a Python 3 runbook, add `Python3`. No other specific topics are required for Azure runbooks, but we encourage you to add other topics that can be used for categorization and search in the Runbook Gallery.
-1. Optionally, change the name of the runbook and then click **OK** to import the runbook.
-1. The runbook appears on the **Runbooks** tab for the Automation account.
+ >[!NOTE]
+ >Check out existing runbooks in the gallery for things like formatting, headers, and existing tags that you might use (like `Azure Automation` or `Linux Azure Virtual Machines`).
-## Import a PowerShell runbook from the runbook gallery with the Azure portal
-
-1. In the Azure portal, open your Automation account.
-1. Select **Runbooks gallery** under **Process Automation**.
-1. Select **Source: PowerShell Gallery**. This shows a list of available runbooks that you can browse.
-1. You can use the search box above the list to narrow the list, or you can use the filters to narrow the display by publisher, type, and sort. Locate the gallery item you want and select it to view its details.
+To suggest changes to an existing runbook, file a pull request against it.
- :::image type="content" source="media/automation-runbook-gallery/browse-gallery-sm.png" alt-text="Browsing the runbook gallery." lightbox="media/automation-runbook-gallery/browse-gallery-lg.png":::
+If you decide to clone and edit an existing runbook, best practice is to give it a different name. If you re-use the old name, it will show up twice in the Runbook gallery listing.
-1. To import an item, click **Import** on the details blade.
+>[!NOTE]
+>Please allow at least 12 hours for synchronization between GitHub and the Automation Runbook Gallery, for both updated and new runbooks.
- :::image type="content" source="media/automation-runbook-gallery/gallery-item-detail-sm.png" alt-text="Show a runbook gallery item detail." lightbox="media/automation-runbook-gallery/gallery-item-detail-lg.png":::
-
-1. Optionally, change the name of the runbook and then click **OK** to import the runbook.
-1. The runbook appears on the **Runbooks** tab for the Automation account.
-
-## Add a PowerShell runbook to the gallery
+### Add a PowerShell runbook to the PowerShell gallery
Microsoft encourages you to add runbooks to the PowerShell Gallery that you think would be useful to other customers. The PowerShell Gallery accepts PowerShell modules and PowerShell scripts. You can add a runbook by [uploading it to the PowerShell Gallery](/powershell/scripting/gallery/how-to/publishing-packages/publishing-a-package).
-## Import a module from the module gallery with the Azure portal
+## Import a module from the Modules gallery in the Azure portal
1. In the Azure portal, open your Automation account.
-1. Select **Modules** under **Shared Resources** to open the list of modules.
-1. Click **Browse gallery** from the top of the page.
+1. Under **Shared Resources**, select **Modules gallery** to open the list of modules.
:::image type="content" source="media/automation-runbook-gallery/modules-blade-sm.png" alt-text="View of the module gallery." lightbox="media/automation-runbook-gallery/modules-blade-lg.png":::
-1. On the Browse gallery page, you can use the search box to find matches in any of the following fields:
+1. On the Browse gallery page, you can search by the following fields:
* Module Name * Tags
Microsoft encourages you to add runbooks to the PowerShell Gallery that you thin
> [!NOTE] > Modules that only support PowerShell core are not supported in Azure Automation and are unable to be imported in the Azure portal, or deployed directly from the PowerShell Gallery.
-## Use Python runbooks
-
-Python Runbooks are available in the [Azure Automation GitHub organization](https://github.com/azureautomation). When you contribute to our GitHub repo, add the tag **(GitHub Topic) : Python3** when you upload your contribution.
- ## Request a runbook or module You can send requests to [User Voice](https://feedback.azure.com/forums/246290-azure-automation/). If you need help with writing a runbook or have a question about PowerShell, post a question to our [Microsoft Q&A question page](/answers/topics/azure-automation.html). ## Next steps
-* To get started with a PowerShell runbook, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md).
+* To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md).
* To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
-* For details of PowerShell, see [PowerShell Docs](/powershell/scripting/overview).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
+* For more info on PowerShell scripting, see [PowerShell Docs](/powershell/scripting/overview).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management-config.md
You can enable either targeting the action against a subscription and resource g
> [!NOTE] > The value for **Target ResourceGroup Names** is stored as the values for both `External_Start_ResourceGroupNames` and `External_Stop_ResourceGroupNames`. For further granularity, you can modify each of these variables to target different resource groups. For start action, use `External_Start_ResourceGroupNames`, and use `External_Stop_ResourceGroupNames` for stop action. VMs are automatically added to the start and stop schedules.
-## <a name="tags"></a>Scenario 2: Start/Stop VMS in sequence by using tags
+## <a name="tags"></a>Scenario 2: Start/Stop VMs in sequence by using tags
In an environment that includes two or more components on multiple VMs supporting a distributed workload, supporting the sequence in which components are started and stopped in order is important.
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
There are two methods to automatically deploy a Hybrid Runbook Worker. You can i
### Importing a runbook from the Runbook Gallery
-The import procedure is described in detail in [Import a PowerShell runbook from GitHub with the Azure portal](automation-runbook-gallery.md#import-a-powershell-runbook-from-github-with-the-azure-portal). The name of the runbook to import is **Create Automation Windows HybridWorker**.
+The import procedure is described in detail in [Import runbooks from GitHub with the Azure portal](automation-runbook-gallery.md#import-runbooks-from-github-with-the-azure-portal). The name of the runbook to import is **Create Automation Windows HybridWorker**.
The runbook uses the following parameters.
azure-arc Create Data Controller Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-azure-data-studio.md
Previously updated : 12/09/2020 Last updated : 04/07/2021
At the current time, you can create a data controller using the method described
Follow these steps to create an Azure Arc data controller using the Deployment wizard. 1. In Azure Data Studio, click on the Connections tab on the left navigation.
-2. Click on the **...** button at the top of the Connections panel and choose **New Deployment...**
-3. In the new Deployment wizard, choose **Azure Arc Data Controller**, and then click the **Select** button at the bottom.
-4. Ensure the prerequisite tools are available and meet the required versions. **Click Next**.
-5. Use the default kubeconfig file or select another one. Click **Next**.
-6. Choose a Kubernetes cluster context. Click **Next**.
-7. Choose a deployment configuration profile depending on your target Kubernetes cluster. **Click Next**.
-8. If you are using Azure Red Hat OpenShift or Red Hat OpenShift container platform, apply security context constraints. Follow the instructions at [Apply a security context constraint for Azure Arc enabled data services on OpenShift](how-to-apply-security-context-constraint.md).
+1. Click on the **...** button at the top of the Connections panel and choose **New Deployment...**
+1. In the new Deployment wizard, choose **Azure Arc Data Controller**, and then click the **Select** button at the bottom.
+1. Ensure the prerequisite tools are available and meet the required versions. **Click Next**.
+1. Use the default kubeconfig file or select another one. Click **Next**.
+1. Choose a Kubernetes cluster context. Click **Next**.
+1. Choose a deployment configuration profile depending on your target Kubernetes cluster. **Click Next**.
+1. If you are using Azure Red Hat OpenShift or Red Hat OpenShift container platform, apply security context constraints. Follow the instructions at [Apply a security context constraint for Azure Arc enabled data services on OpenShift](how-to-apply-security-context-constraint.md).
>[!IMPORTANT] >On Azure Red Hat OpenShift or Red Hat OpenShift container platform, you must apply the security context constraint before you create the data controller.
Follow these steps to create an Azure Arc data controller using the Deployment w
1. Select an Azure location. The Azure location selected here is the location in Azure where the *metadata* about the data controller and the database instances that it manages will be stored. The data controller and database instances will be actually created in your Kubernetes cluster wherever that may be.
+
+ Once done, click **Next**.
-10. Select the appropriate Connectivity Mode. Learn more on [Connectivity modes](./connectivity.md). **Click Next**.
-
- If you select direct connectivity mode Service Principal credentials are required as described in [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal).
-
-11. Enter a name for the data controller and for the namespace that the data controller will be created in.
+1. Enter a name for the data controller and for the namespace that the data controller will be created in.
The data controller and namespace name will be used to create a custom resource in the Kubernetes cluster so they must conform to [Kubernetes naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). If the namespace already exists it will be used if the namespace does not already contain other Kubernetes objects - pods, etc. If the namespace does not exist, an attempt to create the namespace will be made. Creating a namespace in a Kubernetes cluster requires Kubernetes cluster administrator privileges. If you don't have Kubernetes cluster administrator privileges, ask your Kubernetes cluster administrator to perform the first few steps in the [Create a data controller using Kubernetes-native tools](./create-data-controller-using-kubernetes-native-tools.md) article which are required to be performed by a Kubernetes administrator before you complete this wizard.
-12. Select the storage class where the data controller will be deployed.
-13. Enter a username and password and confirm the password for the data controller administrator user account. Click **Next**.
+1. Select the storage class where the data controller will be deployed.
+1. Enter a username and password and confirm the password for the data controller administrator user account. Click **Next**.
-14. Review the deployment configuration.
-15. Click the **Deploy** to deploy the desired configuration or the **Script to Notebook** to review the deployment instructions or make any changes necessary such as storage class names or service types. Click **Run All** at the top of the notebook.
+1. Review the deployment configuration.
+1. Click **Deploy** to deploy the desired configuration, or click **Script to Notebook** to review the deployment instructions or make any necessary changes, such as storage class names or service types. Then click **Run All** at the top of the notebook.
## Monitoring the creation status
azure-arc Create Data Controller Resource In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-resource-in-azure-portal.md
Previously updated : 03/02/2021 Last updated : 04/07/2021
When you use direct connect mode, you can provision the data controller directly
Follow the steps below to create an Azure Arc data controller using the Azure portal and Azure Data Studio. 1. First, log in to the [Azure portal marketplace](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/searchQuery/azure%20arc%20data%20controller). The marketplace search results will be filtered to show you the 'Azure Arc data controller'.
-2. If the first step has not entered the search criteria. Please enter in to the search results, click on 'Azure Arc data controller'.
-3. Select the Azure Data Controller tile from the marketplace.
-4. Click on the **Create** button.
-5. Review the requirements to create an Azure Arc data controller and install any missing prerequisite software such as Azure Data Studio and kubectl.
-6. Click on the **Data controller details** button.
-7. Choose a subscription, resource group and Azure location just like you would for any other resource that you would create in the Azure portal. In this case the Azure location that you select will be where the metadata about the resource will be stored. The resource itself will be created on whatever infrastructure you choose. It doesn't need to be on Azure infrastructure.
-8. Enter a name for your data controller.
-9. Select the connectivity mode for the data controller. Learn more about [Connectivity modes and requirements](./connectivity.md).
-
- > [!NOTE]
- > If you select **direct** connectivity mode, ensure the Service Principal credentials are set via environment variables as described in [Create service principal](upload-metrics-and-logs-to-azure-monitor.md#create-service-principal).
+1. If the first step did not apply the search filter, enter 'Azure Arc data controller' in the search box. In the search results, click **Azure Arc data controller**.
+1. Select the Azure Data Controller tile from the marketplace.
+1. Click on the **Create** button.
+1. Select the indirect connectivity mode. Learn more about [Connectivity modes and requirements](./connectivity.md).
+1. Review the requirements to create an Azure Arc data controller and install any missing prerequisite software such as Azure Data Studio and kubectl.
+1. Click on the **Next: Data controller details** button.
+1. Choose a subscription, resource group and Azure location just like you would for any other resource that you would create in the Azure portal. In this case the Azure location that you select will be where the metadata about the resource will be stored. The resource itself will be created on whatever infrastructure you choose. It doesn't need to be on Azure infrastructure.
+1. Enter a name for your data controller.
1. Select a deployment configuration profile. 1. Click the **Open in Azure Studio** button.
azure-arc Create Data Controller Using Azdata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-azdata.md
Previously updated : 03/02/2021 Last updated : 04/07/2021
kubectl get namespace
kubectl config current-context ```
-### Connectivity modes
-
-As described in [Connectivity modes and requirements](./connectivity.md), Azure Arc data controller can be deployed either with either `direct` or `indirect` connectivity mode. With `direct` connectivity mode, usage data is automatically and continuously sent to Azure. In this articles, the examples specify `direct` connectivity mode as follows:
-
- ```console
- --connectivity-mode direct
- ```
-
- To create the controller with `indirect` connectivity mode, update the scripts in the example as specified below:
-
- ```console
- --connectivity-mode indirect
- ```
-
-#### Create service principal
-
-If you are deploying the Azure Arc data controller with `direct` connectivity mode, Service Principal credentials are required for the Azure connectivity. The service principal is used to upload usage and metrics data.
-
-Follow these commands to create your metrics upload service principal:
-
-> [!NOTE]
-> Creating a service principal requires [certain permissions in Azure](../../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app).
-
-To create a service principal, update the following example. Replace `<ServicePrincipalName>` with the name of your service principal and run the command:
-
-```azurecli
-az ad sp create-for-rbac --name <ServicePrincipalName>
-```
-
-If you created the service principal earlier, and just need to get the current credentials, run the following command to reset the credential.
-
-```azurecli
-az ad sp credential reset --name <ServicePrincipalName>
-```
-
-For example, to create a service principal named `azure-arc-metrics`, run the following command
-
-```console
-az ad sp create-for-rbac --name azure-arc-metrics
-```
-
-Example output:
-
-```output
-"appId": "2e72adbf-de57-4c25-b90d-2f73f126e123",
-"displayName": "azure-arc-metrics",
-"name": "http://azure-arc-metrics",
-"password": "5039d676-23f9-416c-9534-3bd6afc78123",
-"tenant": "72f988bf-85f1-41af-91ab-2d7cd01ad1234"
-```
-
-Save the `appId`, `password`, and `tenant` values in an environment variable for use later.
-
-#### Save environment variables in Windows
-
-```console
-SET SPN_CLIENT_ID=<appId>
-SET SPN_CLIENT_SECRET=<password>
-SET SPN_TENANT_ID=<tenant>
-SET SPN_AUTHORITY=https://login.microsoftonline.com
-```
-
-#### Save environment variables in Linux or macOS
-
-```console
-export SPN_CLIENT_ID='<appId>'
-export SPN_CLIENT_SECRET='<password>'
-export SPN_TENANT_ID='<tenant>'
-export SPN_AUTHORITY='https://login.microsoftonline.com'
-```
-
-#### Save environment variables in PowerShell
-
-```console
-$Env:SPN_CLIENT_ID="<appId>"
-$Env:SPN_CLIENT_SECRET="<password>"
-$Env:SPN_TENANT_ID="<tenant>"
-$Env:SPN_AUTHORITY="https://login.microsoftonline.com"
-```
-
-After you have created the service principal, assign the service principal to the appropriate role.
-
-### Assign roles to the service principal
-
-Run this command to assign the service principal to the `Monitoring Metrics Publisher` role on the subscription where your database instance resources are located:
-
-#### Run the command on Windows
-
-> [!NOTE]
-> You need to use double quotes for role names when running from a Windows environment.
-
-```azurecli
-az role assignment create --assignee <appId> --role "Monitoring Metrics Publisher" --scope subscriptions/<Subscription ID>
-az role assignment create --assignee <appId> --role "Contributor" --scope subscriptions/<Subscription ID>
-```
-
-#### Run the command on Linux or macOS
-
-```azurecli
-az role assignment create --assignee <appId> --role 'Monitoring Metrics Publisher' --scope subscriptions/<Subscription ID>
-az role assignment create --assignee <appId> --role 'Contributor' --scope subscriptions/<Subscription ID>
-```
-
-#### Run the command in PowerShell
-
-```powershell
-az role assignment create --assignee <appId> --role 'Monitoring Metrics Publisher' --scope subscriptions/<Subscription ID>
-az role assignment create --assignee <appId> --role 'Contributor' --scope subscriptions/<Subscription ID>
-```
-
-```output
-{
- "canDelegate": null,
- "id": "/subscriptions/<Subscription ID>/providers/Microsoft.Authorization/roleAssignments/f82b7dc6-17bd-4e78-93a1-3fb733b912d",
- "name": "f82b7dc6-17bd-4e78-93a1-3fb733b9d123",
- "principalId": "5901025f-0353-4e33-aeb1-d814dbc5d123",
- "principalType": "ServicePrincipal",
- "roleDefinitionId": "/subscriptions/<Subscription ID>/providers/Microsoft.Authorization/roleDefinitions/3913510d-42f4-4e42-8a64-420c39005123",
- "scope": "/subscriptions/<Subscription ID>",
- "type": "Microsoft.Authorization/roleAssignments"
-}
-```
-
-With the service principal assigned to the appropriate role, and the environment variables set, you can proceed to create the data controller
- ## Create the Azure Arc data controller > [!NOTE]
By default, the AKS deployment profile uses the `managed-premium` storage class.
If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location. ```console
-azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. It just won't provide the fastest performance.
If you are not sure what storage class to use, you should use the `default` stor
If you want to use the `default` storage class, then you can run this command: ```console
-azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the deployment profile uses the `managed-premium` storage class. The
You can run the following command to create the data controller using the managed-premium storage class: ```console
-azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. In Azure Stack Hub, premium disks and standard disks are backed by the same storage infrastructure. Therefore, they are expected to provide the same general performance, but with different IOPS limits.
If you are not sure what storage class to use, you should use the `default` stor
If you want to use the `default` storage class, then you can run this command. ```console
-azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the deployment profile uses a storage class named `default` and the
You can run the following command to create the data controller using the `default` storage class and service type `LoadBalancer`. ```console
-azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
You can run the following command to create the data controller:
> Use the same namespace here and in the `oc adm policy add-scc-to-user` commands above. Example is `arc`. ```console
-azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example
-#azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
Now you are ready to create the data controller using the following command.
```console
-azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --path ./custom --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --path ./custom --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
azdata arc dc config replace --path ./custom/control.json --json-values "$.spec.
Now you are ready to create the data controller using the following command. ```console
-azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --path ./custom --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --path ./custom --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the EKS storage class is `gp2` and the service type is `LoadBalancer
Run the following command to create the data controller using the provided EKS deployment profile. ```console
-azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the GKE storage class is `standard` and the service type is `LoadBal
Run the following command to create the data controller using the provided GKE deployment profile. ```console
-azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode direct
+azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode direct
+#azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription 1e5ff510-76cf-44cc-9820-82f2d9b51951 --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
azure-arc Deploy Data Controller Direct Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode.md
$ENV:location="<Azure location>"
### Create the Arc data services extension #### Linux
-```bash
-export ADSExtensionName=ads-extension
-export CustomLocationsRpOid=$(az ad sp list --filter "displayname eq 'Custom Locations RP'" --query '[].objectId' -o tsv)
-
-az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc \
- --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper \
- --config aad.customLocationObjectId=${CustomLocationsRpOid}
+```bash
+export ADSExtensionName=ads-extension
+
+az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --version "1.0.015564" --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensionName} --cluster-type connectedclusters ```
az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensi
#### Windows PowerShell ```PowerShell $ENV:ADSExtensionName="ads-extension"
-$CustomLocationsRpOid = az ad sp list --filter "displayname eq 'Custom Locations RP'" --query [].objectId -o tsv
-az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config aad.customLocationObjectId="$ENV:CustomLocationsRpOid"
+az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --version "1.0.015564" --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --name "$ENV:ADSExtensionName" --cluster-type connectedclusters ```
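For readers who script deployments, the hedged sketch below rebuilds the same `az k8s-extension create` and `az k8s-extension show` calls in Python so one file works on both Linux and Windows; the cluster name, resource group, and extension name are placeholders to replace with your own values.

```python
import subprocess

# Placeholders - substitute your connected cluster, resource group, and extension name.
resource_name = "my-connected-cluster"
resource_group = "my-resource-group"
extension_name = "ads-extension"

# Create the Azure Arc data services extension (same arguments as the CLI examples above).
subprocess.run(
    [
        "az", "k8s-extension", "create",
        "-c", resource_name,
        "-g", resource_group,
        "--name", extension_name,
        "--cluster-type", "connectedClusters",
        "--extension-type", "microsoft.arcdataservices",
        "--version", "1.0.015564",
        "--auto-upgrade", "false",
        "--scope", "cluster",
        "--release-namespace", "arc",
        "--config", "Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper",
    ],
    check=True,
)

# Show the extension to confirm its provisioning state.
subprocess.run(
    [
        "az", "k8s-extension", "show",
        "-g", resource_group,
        "-c", resource_name,
        "--name", extension_name,
        "--cluster-type", "connectedclusters",
    ],
    check=True,
)
```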
azure-arc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/known-issues.md
### Azure Arc enabled PostgreSQL Hyperscale
+- Deploying an Azure Arc enabled PostgreSQL Hyperscale server group on an Arc data controller enabled for direct connectivity mode is not supported.
- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was when the server group was created and prevents the user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it. ## February 2021
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
The March 2021 release was introduced on April 6, 2021.
Review limitations of this release in [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
-Azure Data CLI (`azdata`) version number: 20.3.2. Download at [https://aka.ms/azdata](https://aka.ms/azdata). You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
+Azure Data CLI (`azdata`) version number: 20.3.2. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
### Data controller
You will delete the previous CRDs as you cleanup past installations. See [Cleanu
### New capabilities and features
-Azure Data CLI (`azdata`) version number: 20.3.1. Download at [https://aka.ms/azdata](https://aka.ms/azdata). You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
+Azure Data CLI (`azdata`) version number: 20.3.1. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
Additional updates include:
For issues associated with this release, see [Known issues - Azure Arc enabled d
### New capabilities and features
-Azure Data CLI (`azdata`) version number: 20.3.0. Download at [https://aka.ms/azdata](https://aka.ms/azdata). You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
+Azure Data CLI (`azdata`) version number: 20.3.0. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
Additional updates include: - Localized portal available for 17 new languages
Additional updates include:
### New capabilities & features
-Azure Data CLI (`azdata`) version number: 20.2.5. Download at [https://aka.ms/azdata](https://aka.ms/azdata).
+Azure Data CLI (`azdata`) version number: 20.2.5. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
View endpoints for SQL Managed Instance and PostgreSQL Hyperscale using the Azure Data CLI (`azdata`) with `azdata arc sql endpoint list` and `azdata arc postgres endpoint list` commands.
azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc
## October 2020
-Azure Data CLI (`azdata`) version number: 20.2.3. Download at [https://aka.ms/azdata](https://aka.ms/azdata).
+Azure Data CLI (`azdata`) version number: 20.2.3. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
### Breaking changes
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/about-azure-maps.md
For more information, see the [Traffic service documentation](/rest/api/maps/tra
Weather services offer APIs that developers can use to retrieve weather information for a particular location. The information contains details such as observation date and time, a brief description of the weather conditions, weather icon, precipitation indicator flags, temperature, and wind speed information. Additional details such as RealFeel™ Temperature and UV index are also returned.
-Developers can use the [Get Weather along route API](/rest/api/maps/weather/getweatheralongroutepreview) to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints that are affected by weather hazards, such as flooding or heavy rain.
+Developers can use the [Get Weather along route API](/rest/api/maps/weather/getweatheralongroute) to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints that are affected by weather hazards, such as flooding or heavy rain.
The [Get Map Tile V2 API](/rest/api/maps/renderv2/getmaptilepreview) allows you to request past, current, and future radar and satellite tiles.
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
Azure Maps [Weather services](/rest/api/maps/weather) are a set of RESTful APIs
In this article, you'll learn how to:
-* Request real-time (current) weather data using the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditionspreview).
-* Request severe weather alerts using the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralertspreview).
-* Request daily forecasts using the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecastpreview).
-* Request hourly forecasts using the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecastpreview).
-* Request minute by minute forecasts using the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecastpreview).
+* Request real-time (current) weather data using the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions).
+* Request severe weather alerts using the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts).
+* Request daily forecasts using the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast).
+* Request hourly forecasts using the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast).
+* Request minute by minute forecasts using the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast).
This video provides examples for making REST calls to Azure Maps Weather services.
This video provides examples for making REST calls to Azure Maps Weather service
2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](./how-to-manage-authentication.md). >[!IMPORTANT]
- >The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecastpreview) requires an S1 pricing tier key. All other APIs require an S0 pricing tier key.
+ >The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) requires an S1 pricing tier key. All other APIs require an S0 pricing tier key.
This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment. ## Request real-time weather data
-The [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditionspreview) returns detailed weather conditions such as precipitation, temperature, and wind for a given coordinate location. Also, observations from the past 6 or 24 hours for a particular location can be retrieved. The response includes details like observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, and temperature. RealFeelΓäó Temperature and ultraviolet(UV) index are also returned.
+The [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) returns detailed weather conditions such as precipitation, temperature, and wind for a given coordinate location. You can also retrieve observations from the past 6 or 24 hours for a particular location. The response includes details like observation date and time, a brief description of the weather conditions, weather icon, precipitation indicator flags, and temperature. RealFeel™ Temperature and ultraviolet (UV) index are also returned.
-In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditionspreview) to retrieve current weather conditions at coordinates located in Seattle, WA.
+In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) to retrieve current weather conditions at coordinates located in Seattle, WA.
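If you prefer a script over Postman, the minimal Python sketch below issues the same request with the `requests` package. The endpoint path, query parameters, and Seattle coordinates are assumptions based on the Weather service reference, and the subscription key is a placeholder; verify them against the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) documentation before relying on them.

```python
import requests

SUBSCRIPTION_KEY = "<your-Azure-Maps-subscription-key>"  # placeholder

# Assumed endpoint shape for current conditions at Seattle, WA (latitude,longitude).
response = requests.get(
    "https://atlas.microsoft.com/weather/currentConditions/json",
    params={
        "api-version": "1.0",
        "query": "47.60357,-122.32945",
        "subscription-key": SUBSCRIPTION_KEY,
    },
)
response.raise_for_status()

# The response carries fields such as the observation time, phrase, iconCode,
# and temperature described above; print the raw JSON to inspect them.
print(response.json())
```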
1. Open the Postman app. Near the top of the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button. You'll use this collection for the rest of the examples in this document.
In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weat
## Request severe weather alerts
-[Azure Maps Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralertspreview) returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service can return details such as alert type, category, level, and detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
+[Azure Maps Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts) returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service can return details such as alert type, category, level, and detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
-In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralertspreview) to retrieve current weather conditions at coordinates located in Cheyenne, WY.
+In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts) to retrieve current weather conditions at coordinates located in Cheyenne, WY.
>[!NOTE] >This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location.
In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w
## Request daily weather forecast data
-The [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecastpreview) returns detailed daily weather forecast such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request for five days by setting `duration=5`.
+The [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) returns a detailed daily weather forecast, such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request a five-day forecast by setting `duration=5`.
>[!IMPORTANT] >In the S0 pricing tier, you can request daily forecast for the next 1, 5, 10, and 15 days. In the S1 pricing tier, you can also request daily forecast for the next 25 days, and 45 days.
-In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecastpreview) to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
+In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
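As with the previous sketch, this request can also be scripted instead of sent from Postman. The example below is an assumed illustration that sets `duration=5` for a five-day forecast and uses a placeholder subscription key; check the endpoint shape against the API reference.

```python
import requests

SUBSCRIPTION_KEY = "<your-Azure-Maps-subscription-key>"  # placeholder

# Assumed endpoint shape for the daily forecast at Seattle, WA.
response = requests.get(
    "https://atlas.microsoft.com/weather/forecast/daily/json",
    params={
        "api-version": "1.0",
        "query": "47.60357,-122.32945",
        "duration": 5,  # the S0 tier supports 1-, 5-, 10-, and 15-day forecasts
        "subscription-key": SUBSCRIPTION_KEY,
    },
)
response.raise_for_status()

# Print one line per forecast day; field names may differ, so fall back gracefully.
for day in response.json().get("forecasts", []):
    print(day.get("date"), day.get("temperature"))
```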
1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/
## Request hourly weather forecast data
-The [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecastpreview) returns detailed weather forecast by the hour for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), and 240 hours (10 days) for the given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
+The [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) returns a detailed hourly weather forecast for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), or 240 hours (10 days) for the given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
>[!IMPORTANT] >In the S0 pricing tier, you can request hourly forecast for the next 1, 12, 24 hours (1 day), and 72 hours (3 days). In the S1 pricing tier, you can also request hourly forecast for the next 120 (5 days) and 240 hours (10 days).
-In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecastpreview) to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
+In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather
``` ## Request minute-by-minute weather forecast data
- The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecastpreview) returns minute-by-minute forecasts for a given location for the next 120 minutes. Users can request weather forecasts in intervals of 1, 5 and 15 minutes. The response includes details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value (dBZ).
+ The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) returns minute-by-minute forecasts for a given location for the next 120 minutes. Users can request weather forecasts in intervals of 1, 5 and 15 minutes. The response includes details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value (dBZ).
-In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecastpreview) to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests that the forecast be given at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes.
+In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests that the forecast be given at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes.
1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-service-tutorial.md
In this tutorial, you will:
> * Load demo data from file. > * Call Azure Maps REST APIs in Python. > * Render location data on the map.
-> * Enrich the demo data with Azure Maps [Daily Forecast](/rest/api/maps/weather/getdailyforecastpreview) weather data.
+> * Enrich the demo data with Azure Maps [Daily Forecast](/rest/api/maps/weather/getdailyforecast) weather data.
> * Plot forecast data in graphs.
df = pd.read_csv("./data/weather_dataset_demo.csv")
## Request daily forecast data
-In our scenario, we would like to request daily forecast for each sensor location. The following script calls the [Daily Forecast API](/rest/api/maps/weather/getdailyforecastpreview) of the Azure Maps Weather services (Preview). This API returns weather forecast for each wind turbine, for the next 15 days from the current date.
+In our scenario, we want to request the daily forecast for each sensor location. The following script calls the [Daily Forecast API](/rest/api/maps/weather/getdailyforecast) of the Azure Maps Weather services (Preview). This API returns the weather forecast for each wind turbine for the next 15 days from the current date.
```python
To learn more about how to call Azure Maps REST APIs inside Azure Notebooks, see
To explore the Azure Maps APIs that are used in this tutorial, see:
-* [Daily Forecast](/rest/api/maps/weather/getdailyforecastpreview)
+* [Daily Forecast](/rest/api/maps/weather/getdailyforecast)
* [Render - Get Map Image](/rest/api/maps/render/getmapimage) For a complete list of Azure Maps REST APIs, see [Azure Maps REST APIs](./consumption-model.md).
azure-maps Weather Services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-faq.md
Yes. In addition to real-time radar and satellite tiles, Azure Maps customers ca
**Do you offer icons for different weather conditions?**
-Yes. You can find icons and their respective codes [here](./weather-services-concepts.md#weather-icons). Notice that only some of the Weather service (Preview) APIs, such as [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditionspreview), return the *iconCode* in the response. For more information, see the Current WeatherConditions open-source [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Get%20current%20weather%20at%20a%20location).
+Yes. You can find icons and their respective codes [here](./weather-services-concepts.md#weather-icons). Notice that only some of the Weather service (Preview) APIs, such as [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions), return the *iconCode* in the response. For more information, see the Current WeatherConditions open-source [code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Get%20current%20weather%20at%20a%20location).
## Next steps
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups-logic-app.md
Title: How to trigger complex actions with Azure Monitor alerts
+ Title: Trigger complex actions with Azure Monitor alerts
description: Learn how to create a logic app action to process Azure Monitor alerts.
The process is similar if you want the logic app to perform a different action.
## Create an activity log alert: Administrative
-1. [Create a Logic App](~/articles/logic-apps/quickstart-create-first-logic-app-workflow.md)
+1. [Create a logic app](~/articles/logic-apps/quickstart-create-first-logic-app-workflow.md).
-2. Select the trigger: **When a HTTP request is received**.
+1. Select the trigger: **When a HTTP request is received**.
1. In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
- ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema opion selected. ](~/articles/app-service/media/tutorial-send-email/generate-schema-with-payload.png)
+ ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema option selected. ](~/articles/app-service/media/tutorial-send-email/generate-schema-with-payload.png)
-3. Copy and paste the following sample payload into the dialog box:
+1. Copy and paste the following sample payload into the dialog box:
```json {
The process is similar if you want the logic app to perform a different action.
} ```
-9. The **Logic App Designer** displays a pop-up window to remind you that the request sent to the logic app must set the **Content-Type** header to **application/json**. Close the pop-up window. The Azure Monitor alert sets the header.
+1. The **Logic Apps Designer** displays a pop-up window to remind you that the request sent to the logic app must set the **Content-Type** header to **application/json**. Close the pop-up window. The Azure Monitor alert sets the header.
![Set the Content-Type header](media/action-groups-logic-app/content-type-header.png "Set the Content-Type header")
-10. Select **+** **New step** and then choose **Add an action**.
+1. Select **+** **New step** and then choose **Add an action**.
![Add an action](media/action-groups-logic-app/add-action.png "Add an action")
-11. Search for and select the Microsoft Teams connector. Choose the **Microsoft Teams - Post message** action.
+1. Search for and select the Microsoft Teams connector. Choose the **Microsoft Teams - Post message** action.
![Microsoft Teams actions](media/action-groups-logic-app/microsoft-teams-actions.png "Microsoft Teams actions")
-12. Configure the Microsoft Teams action. The **Logic Apps Designer** asks you to authenticate to your work or school account. Choose the **Team ID** and **Channel ID** to send the message to.
+1. Configure the Microsoft Teams action. The **Logic Apps Designer** asks you to authenticate to your work or school account. Choose the **Team ID** and **Channel ID** to send the message to.
13. Configure the message by using a combination of static text and references to the \<fields\> in the dynamic content. Copy and paste the following text into the **Message** field:
The process is similar if you want the logic app to perform a different action.
![Microsoft Teams action: Post a message](media/action-groups-logic-app/teams-action-post-message.png "Microsoft Teams action: Post a message")
-14. At the top of the **Logic Apps Designer**, select **Save** to save your logic app.
+1. At the top of the **Logic Apps Designer**, select **Save** to save your logic app.
-15. Open your existing action group and add an action to reference the logic app. If you don't have an existing action group, see [Create and manage action groups in the Azure portal](./action-groups.md) to create one. DonΓÇÖt forget to save your changes.
+1. Open your existing action group and add an action to reference the logic app. If you don't have an existing action group, see [Create and manage action groups in the Azure portal](./action-groups.md) to create one. Don't forget to save your changes.
![Update the action group](media/action-groups-logic-app/update-action-group.png "Update the action group")
Azure Service Health entries are part of the activity log. The process for creat
!["Service Health payload condition"](media/action-groups-logic-app/service-health-payload-condition.png "Service Health payload condition")
- 1. In the **if true** condition, follow the instructions in steps 11 through 13 in [Create an activity log alert](#create-an-activity-log-alert-administrative) to add the Microsoft Teams action.
+ 1. In the **If true** condition, follow the instructions in steps 11 through 13 in [Create an activity log alert](#create-an-activity-log-alert-administrative) to add the Microsoft Teams action.
1. Define the message by using a combination of HTML and dynamic content. Copy and paste the following content into the **Message** field. Replace the `[incidentType]`, `[trackingID]`, `[title]`, and `[communication]` fields with dynamic content tags of the same name:
The process for creating a metric alert is similar to [creating an activity log
!["Metric alert payload condition"](media/action-groups-logic-app/metric-alert-payload-condition.png "Metric alert payload condition")
- 1. In the **if true** condition, add a **For each** loop and the Microsoft Teams action. Define the message by using a combination of HTML and dynamic content.
+ 1. In the **If true** condition, add a **For each** loop and the Microsoft Teams action. Define the message by using a combination of HTML and dynamic content.
!["Metric alert true condition post action"](media/action-groups-logic-app/metric-alert-true-condition-post-action.png "Metric alert true condition post action")
Logic Apps has a number of different connectors that allow you to trigger action
## Next steps * Get an [overview of Azure activity log alerts](./alerts-overview.md) and learn how to receive alerts. * Learn how to [configure alerts when an Azure Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
-* Learn more about [action groups](./action-groups.md).
+* Learn more about [action groups](./action-groups.md).
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
To enable annotations in your workbook go to **Advanced Settings** and select **
Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment. ## Create custom annotations from PowerShell
-You can use the CreateReleaseAnnotation PowerShell script from GitHub to create annotations from any process you like, without using Azure DevOps.
+You can use the CreateReleaseAnnotation PowerShell script to create annotations from any process you like, without using Azure DevOps.
+
+> [!IMPORTANT]
+> If you are using PowerShell 7.1, add `-SkipHttpErrorCheck` at the end of line 26. For example: `$request = Invoke-WebRequest -Uri $fwLink -MaximumRedirection 0 -UseBasicParsing -ErrorAction Ignore -SkipHttpErrorCheck`.
1. Make a local copy of CreateReleaseAnnotation.ps1:
You can use the CreateReleaseAnnotation PowerShell script from GitHub to create
You can modify the script, for example to create annotations for the past. + ## Next steps * [Create work items](./diagnostic-search.md#create-work-item) * [Automation with PowerShell](./powershell.md)-
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
Change Analysis captures the deployment and configuration state of an applicatio
![Screenshot of the "Scan changes now" button](./media/change-analysis/scan-changes.png) Currently all text-based files under site root **wwwroot** with the following extensions are supported:-- *.config-- *.xml - *.json-- *.gem-- *.yml-- *.txt
+- *.xml
- *.ini-- *.env
+- *.yml
+- *.config
+- *.properties
+- *.html
+- *.cshtml
+- *.js
+- requirements.txt
+- Gemfile
+- Gemfile.lock
+- config.gemspec
### Dependency changes
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-get-started.md
If you don't have an Azure subscription, create a [free account](https://azure.m
### Install prerequisites
+- To enable monitoring, you need a connection string. A connection string is displayed on the Overview blade of your Application Insights resource. For more information, see [Connection Strings](https://docs.microsoft.com/azure/azure-monitor/app/sdk-connection-string?tabs=net#finding-my-connection-string).
+ > [!NOTE] > As of April 2020, PowerShell Gallery has deprecated TLS 1.1 and 1.0. >
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense
``` ### Enable monitoring+ ```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
-Enable-ApplicationInsightsMonitoring -ConnectionString xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Enable-ApplicationInsightsMonitoring -ConnectionString 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
```
$pathInstalledModule = "$Env:ProgramFiles\WindowsPowerShell\Modules\Az.Applicati
Expand-Archive -LiteralPath $pathToZip -DestinationPath $pathInstalledModule ``` ### Enable monitoring+ ```powershell
-Enable-ApplicationInsightsMonitoring -ConnectionString xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Enable-ApplicationInsightsMonitoring -ConnectionString 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
``` + ## Next steps View your telemetry:
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
InsightsMetrics
``` ```
-Operation
- | where OperationCategory == "WorkloadInsights"
- | summarize Errors = countif(OperationStatus == 'Error')
+WorkloadDiagnosticLogs
+| summarize Errors = countif(Status == 'Error')
```
+> [!NOTE]
+> If you do not see any data in the 'WorkloadDiagnosticLogs' data type, you might need to update your monitoring profile to store this data. From within the SQL insights UX, select 'Manage profile', then select 'Edit profile', and then select 'Update monitoring profile'.
++ For common cases, we provide troubleshooting knowledge in our logs view: :::image type="content" source="media/sql-insights-enable/troubleshooting-logs-view.png" alt-text="Troubleshooting logs view.":::
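The same error count can also be pulled programmatically. The sketch below is an added illustration that uses the `azure-monitor-query` and `azure-identity` Python packages against a placeholder Log Analytics workspace ID.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

QUERY = """
WorkloadDiagnosticLogs
| summarize Errors = countif(Status == 'Error')
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=QUERY,
    timespan=timedelta(days=1),  # look back over the last day
)

for table in response.tables:
    for row in table.rows:
        print("Errors:", row[0])
```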
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 03/24/2021 Last updated : 04/06/2021 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
For example, user accounts used for installing SQL Server in certain scenarios must be granted elevated security privilege. If you are using a non-administrator (domain) account to install SQL Server and the account does not have the security privilege assigned, you should add security privilege to the account. > [!IMPORTANT]
- > The domain account used for installing SQL Server must already exist before you add it to the **Security privilege users** field. When you add the SQL Server installer's account to **Security privilege users**, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller.
+ > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
+ >
+ > Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the **Security privilege users** field. When you add the SQL Server installer's account to **Security privilege users**, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller.
For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 03/15/2021 Last updated : 04/07/2021 # Azure subscription and service limits, quotas, and constraints
To learn more about Azure pricing, see [Azure pricing overview](https://azure.mi
Some limits are managed at a regional level.
-Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how many vCPUs you want to use in which regions. You then make a specific request for Azure resource group vCPU quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any other region--only West Europe has the 30-vCPU quota.
+Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how many vCPUs you want to use in which regions. You then request an increase in vCPU quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any other region--only West Europe has the 30-vCPU quota.
-As a result, decide what your Azure resource group quotas must be for your workload in any one region. Then request that amount in each region into which you want to deploy. For help in how to determine your current quotas for specific regions, see [Resolve errors for resource quotas](../templates/error-resource-quota.md).
+As a result, decide what your quotas must be for your workload in any one region. Then request that amount in each region into which you want to deploy. For help in how to determine your current quotas for specific regions, see [Resolve errors for resource quotas](../templates/error-resource-quota.md).
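To see how close a region already is to its vCPU limits before you file a request, a short sketch with the `azure-mgmt-compute` and `azure-identity` Python packages (an added illustration; the subscription ID and region are placeholders) can list current usage against each limit.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
REGION = "westeurope"                  # region where you plan to request quota

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Each usage entry reports the current value against the regional limit.
for usage in client.usage.list(REGION):
    if "vCPU" in usage.name.localized_value:
        print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```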
## General limits
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 03/09/2021 Last updated : 04/07/2021
Resource Manager locks apply only to operations that happen in the management pl
Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Locks will prevent any operations that require a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
-* A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who do not possess the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+* A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
-* A cannot-delete lock on a **storage account** does not prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and does not protect blob, queue, table, or file data within that storage account.
+* A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and doesn't protect blob, queue, table, or file data within that storage account.
-* A read-only lock on a **storage account** does not prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and does not protect blob, queue, table, or file data within that storage account.
+* A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
* A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
+* A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling the plan up or out](../../app-service/manage-scale-up.md).
+ * A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request. * A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments will fail.
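For reference, a lock like the ones described above can be applied from a script. The sketch below is an illustrative wrapper around the Azure CLI `az lock create` and `az lock list` commands, with a placeholder resource group name.

```python
import subprocess

RESOURCE_GROUP = "my-resource-group"  # placeholder

# Apply a CanNotDelete lock at the resource group scope.
subprocess.run(
    [
        "az", "lock", "create",
        "--name", "rg-cannot-delete",
        "--resource-group", RESOURCE_GROUP,
        "--lock-type", "CanNotDelete",
        "--notes", "Protects the resource group from deletion",
    ],
    check=True,
)

# List the locks on the resource group to confirm.
subprocess.run(
    ["az", "lock", "list", "--resource-group", RESOURCE_GROUP, "--output", "table"],
    check=True,
)
```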
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> [!NOTE] > Azure virtual machines have two distinct names: resource name and host name. When you create a virtual machine in the portal, the same value is used for both names. The restrictions in the preceding table are for the host name. The actual resource name can have up to 64 characters.
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | communicationServices | global | 1-63 | Alphanumerics, hyphens, and underscores. |
+ ## Microsoft.ContainerInstance > [!div class="mx-tableFixed"]
azure-sql Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/planned-maintenance.md
Maintenance event can produce single or multiple reconfigurations, depending on
## How to simulate a planned maintenance event
-Ensuring that your client application is resilient to maintenance events prior to deploying to production will help mitigate the risk of application faults and will contribute to application availability for your end users. You can test behavior of your client application during planned maintenance events by [initiating manual failover](https://aka.ms/mifailover-techblog) via PowerShell, CLI, or REST API. It will produce identical behavior as maintenance event bringing primary replica offline.
+Ensuring that your client application is resilient to maintenance events prior to deploying to production will help mitigate the risk of application faults and will contribute to application availability for your end users. You can test the behavior of your client application during planned maintenance events by [Testing Application Fault Resiliency](https://docs.microsoft.com/azure/azure-sql/database/high-availability-sla#testing-application-fault-resiliency) via PowerShell, CLI, or REST API. Also see [initiating manual failover](https://aka.ms/mifailover-techblog) for Managed Instance. Either approach produces the same behavior as a maintenance event that brings the primary replica offline.
## Retry logic
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: Access to Azure SQL Database
-In this guide, you learn how to migrate your Microsoft Access database to an Azure SQL database by using SQL Server Migration Assistant for Access (SSMA for Access).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Microsoft Access database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Access (SSMA for Access).
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
Before you begin migrating your Access database to a SQL database, do the follow
## Pre-migration
-After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration).
### Assess
The Data SQL Engineering team developed these resources. This team's core charte
- To learn more about the framework and adoption cycle for cloud migrations, see: - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale) - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
+ - To assess the application access layer, see [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit). - For information about how to perform Data Access Layer A/B testing, see [Overview of Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Last updated 11/06/2020
# Migration guide: IBM Db2 to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your IBM Db2 databases to Azure SQL Database, by using the SQL Server Migration Assistant for Db2.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your IBM Db2 databases to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Db2.
For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
To migrate your Db2 database to SQL Database, you need:
## Pre-migration
-After you have met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
+After you have met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration).
### Assess and convert
The Data SQL Engineering team developed these resources. This team's core charte
- To learn more about the framework and adoption cycle for cloud migrations, see: - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
- To assess the application access layer, see [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit). - For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: MySQL to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn how to migrate your MySQL database to an Azure SQL database by using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your MySQL database to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for MySQL (SSMA for MySQL).
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
Before you begin migrating your MySQL database to a SQL database, do the followi
## Pre-migration
-After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration).
### Assess
The Data SQL Engineering team developed these resources. This team's core charte
- For other migration guides, see [Azure Database Migration Guide](https://datamigration.microsoft.com/). - For migration videos, see [Overview of the migration journey and recommended migration and assessment tools and services](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).+
+- For more [cloud migration resources](https://azure.microsoft.com/migration/resources/), see [cloud migration solutions](https://azure.microsoft.com/migration).
+
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Last updated 08/25/2020
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn how to migrate your Oracle schemas to Azure SQL Database by using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your Oracle schemas to Azure SQL Database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for Oracle (SSMA for Oracle).
For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
For other migration guides, see [Azure Database Migration Guides](https://docs.m
Before you begin migrating your Oracle schema to SQL Database: - Verify that your source environment is supported.-- Download [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- Download [SSMA for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
- Have a target [SQL Database](../../database/single-database-create-quickstart.md) instance. - Obtain the [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql). ## Pre-migration
-After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration). This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
### Assess
By using SSMA for Oracle, you can review database objects and data, assess datab
To create an assessment:
-1. Open [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Open [SSMA for Oracle](https://www.microsoft.com/download/details.aspx?id=54258).
1. Select **File**, and then select **New Project**. 1. Enter a project name and a location to save your project. Then select **Azure SQL Database** as the migration target from the drop-down list and select **OK**.
The Data SQL Engineering team developed these resources. This team's core charte
- To learn more about SQL Database, see: - [An overview of Azure SQL Database](../../database/sql-database-paas-overview.md)
- - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
+ - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
- To learn more about the framework and adoption cycle for cloud migrations, see: - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale) - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
- For video content, see: - [Overview of the migration journey and the tools and services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
Last updated 03/19/2021
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-In this guide, you learn how to migrate your SAP Adapter Server Enterprise (ASE) databases to an Azure SQL database by using SQL Server Migration Assistant for SAP Adapter Server Enterprise.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using [SQL Server Migration](https://azure.microsoft.com/migration/migration-journey) Assistant for SAP Adaptive Server Enterprise.
For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
Before you begin migrating your SAP SE database to your SQL database, do the fol
## Pre-migration
-After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your [Azure cloud migration](https://azure.microsoft.com/migration).
### Assess
For more information about these issues and the steps to mitigate them, see the
- To learn more about the framework and adoption cycle for cloud migrations, see: - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale) - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
- To assess the application access layer, see [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit). - For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
Last updated 03/19/2021
# Migration guide: SQL Server to Azure SQL Database [!INCLUDE[appliesto--sqldb](../../includes/appliesto-sqldb.md)]
-This guide helps you migrate your SQL Server instance to Azure SQL Database.
+In this guide, you learn [how to migrate](https://azure.microsoft.com/migration/migration-journey) your SQL Server instance to Azure SQL Database.
You can migrate SQL Server running on-premises or on:
For more migration information, see the [migration overview](sql-server-to-sql-d
## Prerequisites
-To migrate your SQL Server to Azure SQL Database, make sure you have the following prerequisites:
+For your [SQL Server migration](https://azure.microsoft.com/migration/migration-journey) to Azure SQL Database, make sure you have the following prerequisites:
- A chosen [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools. - [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) installed on a machine that can connect to your source SQL Server.
To migrate your SQL Server to Azure SQL Database, make sure you have the followi
## Pre-migration
-After you've verified that your source environment is supported, start with the pre-migration stage. Discover all of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent your migration.
+After you've verified that your source environment is supported, start with the pre-migration stage. Discover all of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent your [Azure cloud migration](https://azure.microsoft.com/migration).
### Discover
To learn more, see [managing Azure SQL Database after migration](../../database/
- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md). +
+- To learn more about [Azure Migrate](https://azure.microsoft.com/services/azure-migrate), see:
+ - [Azure Migrate](../../../migrate/migrate-services-overview.md)
+ - To learn more about SQL Database see: - [An Overview of Azure SQL Database](../../database/sql-database-paas-overview.md) - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
To learn more, see [managing Azure SQL Database after migration](../../database/
- To learn more about the framework and adoption cycle for Cloud migrations, see - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) - For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
This article provides you the planning process to identify and collect the infor
The steps outlined in this quick start give you a production-ready environment for creating virtual machines (VMs) and migration.
->[!IMPORTANT]
->Before you create your Azure VMware Solution resource, follow the [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md) article to submit a support ticket to have your hosts allocated. Once the support team receives your request, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll go through the same process.
+To track the data you'll be collecting, get the [HCX planning checklist](https://www.virtualworkloads.com/2021/04/hcx-planning-checklist/).
+
+> [!IMPORTANT]
+> It's important to request a host quota early as you prepare to create your Azure VMware Solution resource. You can request a host quota now, so when the planning process is finished, you're ready to deploy the Azure VMware Solution private cloud. After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you complete the same process. For more information, see the following links, depending on the type of subscription you have:
+> - [EA customers](enable-azure-vmware-solution.md?tabs=azure-portal#request-host-quota-for-ea-customers)
+> - [CSP customers](enable-azure-vmware-solution.md?tabs=azure-portal#request-host-quota-for-csp-customers)
## Subscription
This network segment is used primarily for testing purposes during the initial d
:::image type="content" source="media/pre-deployment/nsx-segment-diagram.png" alt-text="Identify - IP address segment for virtual machine workloads" border="false":::
-## (Optional) Extend your networks
-
-You can extend network segments from on-premises to Azure VMware Solution, and if you do, identify those networks now.
-
-Keep in mind that:
--- If you plan to extend networks from on-premises, those networks must connect to a [vSphere Distributed Switch (vDS)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-B15C6A13-797E-4BCB-B9D9-5CBC5A60C3A6.html) in your on-premises VMware environment. -- If the network(s) you wish to extend live on a [vSphere Standard Switch](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-350344DE-483A-42ED-B0E2-C811EE927D59.html), then they can't be extended.-
->[!NOTE]
->These networks are extended as a final step of the configuration, not during deployment.
- ## Attach Azure Virtual Network to Azure VMware Solution To provide connectivity to Azure VMware Solution, an ExpressRoute is built from Azure VMware Solution private cloud to an ExpressRoute virtual network gateway.
You can use an *existing* OR *new* ExpressRoute virtual network gateway.
If you plan to use an *existing* ExpressRoute virtual network gateway, the Azure VMware Solution ExpressRoute circuit is established as a post-deployment step. In this case, leave the **Virtual Network** field blank.
-As a general recommendation, it's acceptable to use an existing ExpressRoute virtual network gateway. For planning purposes, make note of which ExpressRoute virtual network gateway you'll use and then continue to the next step.
+As a general recommendation, it's acceptable to use an existing ExpressRoute virtual network gateway. For planning purposes, make note of which ExpressRoute virtual network gateway you'll use and then continue to the [next step](#vmware-hcx-network-segments).
### Create a new ExpressRoute virtual network gateway
When you create a *new* ExpressRoute virtual network gateway, you can use an exi
1. Identify an Azure Virtual network where there are no pre-existing ExpressRoute virtual network gateways. 2. Prior to deployment, create a [GatewaySubnet](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md#create-the-gateway-subnet) in the Azure Virtual Network. -- For a new Azure Virtual Network, you can create it in advance or during deployment. Select the **Create new** link under the **Virtual Network** list.
+- For a new Azure Virtual Network and virtual network gateway, you create them during the deployment by selecting the **Create new** link under the **Virtual Network** list. It's important to define the address space and subnets in advance of the deployment, so you're ready to enter that information when you complete the deployment steps. If you'd rather prepare an existing virtual network ahead of time, see the CLI sketch after this list.
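If you prefer to prepare an existing virtual network before deployment, the following Azure CLI sketch shows one way to do it. The resource group, virtual network name, gateway name, public IP name, and address prefix are placeholder assumptions, and the UltraPerformance SKU is only an example; choose the gateway SKU that matches your bandwidth needs.

```azurecli
# Add a GatewaySubnet (/27 or larger) to an existing virtual network
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name GatewaySubnet \
  --address-prefixes 10.1.1.0/27

# Create a public IP address for the ExpressRoute virtual network gateway
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyGatewayPip \
  --sku Standard

# Create the ExpressRoute virtual network gateway (deployment can take 30+ minutes)
az network vnet-gateway create \
  --resource-group MyResourceGroup \
  --name MyErGateway \
  --vnet MyVNet \
  --gateway-type ExpressRoute \
  --sku UltraPerformance \
  --public-ip-address MyGatewayPip \
  --no-wait
```

The `--no-wait` flag returns control immediately; the gateway continues deploying in the background.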
-The below image shows the **Create a private cloud** deployment screen with the **Virtual Network** field highlighted.
+The following image shows the **Create a private cloud** deployment screen with the **Virtual Network** field highlighted.
:::image type="content" source="media/pre-deployment/azure-vmware-solution-deployment-screen-vnet-circle.png" alt-text="Screenshot of the Azure VMware Solution deployment screen with Virtual Network field highlighted.":::
->[!NOTE]
->Any virtual network that is going to be used or created may be seen by your on-premises environment and Azure VMware Solution, so make sure whatever IP segment you use in this virtual network and subnets do not overlap.
+> [!NOTE]
+> Any virtual network that is going to be used or created may be seen by your on-premises environment and Azure VMware Solution, so make sure whatever IP segment you use in this virtual network and subnets do not overlap.
-## VMware HCX Network Segments
+## (Optional) VMware HCX network segments
-VMware HCX is a technology bundled in with Azure VMware Solution. The primary use cases for VMware HCX are workload migrations and disaster recovery. If you plan to do either, it's best to plan out the networking now. Otherwise, you can skip and continue to the next step.
+VMware HCX is a technology that's bundled with Azure VMware Solution. The primary use cases for VMware HCX are workload migrations and disaster recovery. If you plan to do either, it's best to plan out the networking now. Otherwise, you can skip and continue to the next step.
[!INCLUDE [hcx-network-segments](includes/hcx-network-segments.md)]
+## (Optional) Extend your networks
+
+You can extend network segments from on-premises to Azure VMware Solution. If you do extend network segments, identify those networks now.
+
+Here are some factors to consider:
+
+- If you plan to extend networks from on-premises, those networks must connect to a [vSphere Distributed Switch (vDS)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-B15C6A13-797E-4BCB-B9D9-5CBC5A60C3A6.html) in your on-premises VMware environment.
+- Networks that are on a [vSphere Standard Switch](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-350344DE-483A-42ED-B0E2-C811EE927D59.html) can't be extended.
+
+>[!NOTE]
+>These networks are extended as a final step of the configuration, not during deployment.
+>
## Next steps Now that you've gathered and documented the needed information, continue to the next section to create your Azure VMware Solution private cloud. > [!div class="nextstepaction"] > [Deploy Azure VMware Solution](deploy-azure-vmware-solution.md)
+>
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
VMware HCX Advanced Connector is pre-deployed in Azure VMware Solution. It suppo
>Although the VMware Configuration Maximum tool describes site pairs maximum to be 25 between the on-premises Connector and Cloud Manager, the licensing limits this to three for HCX Advanced and 10 for HCX Enterprise Edition. >[!NOTE]
->VMware HCX Enterprise is available with Azure VMware Solution as a preview service. It's free and is subject to terms and conditions for a preview service. After the VMware HCX Enterprise service is generally available, you'll get a 30-day notice that billing will switch over. You'll also have the option to turn off or opt-out of the service. There is no simple downgrade path from VMware HCX Enterprise to VMware HCX Advanced. If you decide to downgrade, you'll have to redeploy, incurring downtime.
+>VMware HCX Enterprise is available with Azure VMware Solution as a preview service. It's free and is subject to terms and conditions for a preview service. After the VMware HCX Enterprise service is generally available, you'll get a 30-day notice that billing will switch over. You'll also have the option to turn off or opt out of the service. Downgrading from HCX Enterprise to HCX Advanced is possible without redeploying, but you'll have to log a support ticket for that action to take place. If you plan to downgrade, make sure that no migrations are scheduled and that features such as RAV and MON aren't in use.
First, review [Before you begin](#before-you-begin), [Software version requirements](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html), and the [Prerequisites](#prerequisites).
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Supported clients:
## Get started with PowerShell
+1. Download the [latest](https://github.com/PowerShell/PowerShell/releases) version of PowerShell from GitHub.
+ 1. Run the following command in PowerShell: ```azurepowershell
Supported clients:
1. Get the list of backup items:
- `$BackupItemList = Get-AzRecoveryServicesBackupItem -vaultId $vault.ID -BackupManagementType "AzureVM/AzureWorkload" -WorkloadType "AzureVM/MSSQL"`
+ - For Azure virtual machines:
+
+ `$BackupItemList = Get-AzRecoveryServicesBackupItem -vaultId $vault.ID -BackupManagementType "AzureVM" -WorkloadType "AzureVM"`
+
+ - For SQL Server in Azure virtual machines:
+
+ `$BackupItemList = Get-AzRecoveryServicesBackupItem -vaultId $vault.ID -BackupManagementType "AzureWorkload" -WorkloadType "MSSQL"`
1. Get the backup item.
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md
Title: Backup Azure Database for PostgreSQL description: Learn about Azure Database for PostgreSQL backup with long-term retention (preview) Previously updated : 09/08/2020 Last updated : 04/06/2021
The following instructions are a step-by-step guide to configuring backup on the
1. Define **Retention** settings. You can add one or more retention rules. Each retention rule assumes inputs for specific backups, and data store and retention duration for those backups.
-1. You can choose to store your backups in one of the two data stores (or tiers): **Backup data store** (standard tier) or **Archive data store** (in preview). You can choose between **two tiering options** to define when the backups are tiered across the two datastores:
+1. You can choose to store your backups in one of the two data stores (or tiers): **Backup data store** (standard tier) or **Archive data store** (in preview).
- - Choose to copy **Immediately** if you prefer to have a backup copy in both backup and archive data stores simultaneously.
- - Choose to move **On-expiry** if you prefer to move the backup to archive data store upon its expiry in the backup data store.
+ You can choose **On-expiry** to move the backup to archive data store upon its expiry in the backup data store.
1. The **default retention rule** is applied in the absence of any other retention rule, and has a default value of three months.
Follow this step-by-step guide to trigger a restore:
![Restore as files](./media/backup-azure-database-postgresql/restore-as-files.png)
+1. If the recovery point is in the archive tier, you must rehydrate the recovery point before restoring.
+
+ ![Rehydration settings](./media/backup-azure-database-postgresql/rehydration-settings.png)
+
+ Provide the following additional parameters required for rehydration:
+ - **Rehydration priority:** Default is **Standard**.
+ - **Rehydration duration:** The maximum rehydration duration is 30 days, and the minimum rehydration duration is 10 days. Default value is **15**.
+
+ The recovery point is stored in the **Backup data store** for the specified rehydration duration.
++ 1. Review the information and select **Restore**. This will trigger a corresponding Restore job that can be tracked under **Backup jobs**.
+>[!NOTE]
+>Archive support for Azure Database for PostgreSQL is in limited public preview.
+ ## Prerequisite permissions to configure backup and restore Azure Backup follows strict security guidelines. Even though it's a native Azure service, permissions on the resource aren't assumed, and need to be explicitly given by the user. Similarly, credentials to connect to the database aren't stored. This is important to safeguard your data. Instead, we use Azure Active Directory authentication.
Choose from the list of retention rules that were defined in the associated Back
### Stop protection
-You can stop protection on a backup item. This will also delete the associated recovery points for that backup item. We don't yet provide the option of stop protection while retaining the existing recovery points.
+You can stop protection on a backup item. This will also delete the associated recovery points for that backup item. If recovery points haven't been in the archive tier for a minimum of six months, deleting them incurs an early deletion cost. We don't yet provide the option to stop protection while retaining the existing recovery points.
![Stop protection](./media/backup-azure-database-postgresql/stop-protection.png)
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
Also, ensure that you have the [right machine to execute the ILR script](#step-2
If you run the script on a computer with restricted access, ensure there's access to: -- `download.microsoft.com`
+- `download.microsoft.com` or `AzureFrontDoor.FirstParty` service tag in NSG (an example outbound NSG rule using service tags follows this list)
- Recovery Service URLs (GEO-NAME refers to the region where the Recovery Services vault resides)
- - `https://pod01-rec2.GEO-NAME.backup.windowsazure.com` (For Azure public regions)
- - `https://pod01-rec2.GEO-NAME.backup.windowsazure.cn` (For Azure China 21Vianet)
- - `https://pod01-rec2.GEO-NAME.backup.windowsazure.us` (For Azure US Government)
- - `https://pod01-rec2.GEO-NAME.backup.windowsazure.de` (For Azure Germany)
+ - `https://pod01-rec2.GEO-NAME.backup.windowsazure.com` (For Azure public regions) or `AzureBackup` service tag in NSG
+ - `https://pod01-rec2.GEO-NAME.backup.windowsazure.cn` (For Azure China 21Vianet) or `AzureBackup` service tag in NSG
+ - `https://pod01-rec2.GEO-NAME.backup.windowsazure.us` (For Azure US Government) or `AzureBackup` service tag in NSG
+ - `https://pod01-rec2.GEO-NAME.backup.windowsazure.de` (For Azure Germany) or `AzureBackup` service tag in NSG
- Outbound ports 53 (DNS), 443, 3260 > [!NOTE]
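If outbound access from that machine is governed by a network security group, the service tags called out above can be allowed with rules similar to this sketch. The resource group, NSG name, rule name, and priority value are placeholder assumptions.

```azurecli
# Allow outbound traffic from the VM subnet to Azure Backup endpoints by service tag
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowAzureBackupOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes AzureBackup \
  --destination-port-ranges 443 3260

# Create a similar rule with --destination-address-prefixes AzureFrontDoor.FirstParty
# (port 443) to cover the download.microsoft.com dependency.
```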
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Title: Overview of BareMetal Infrastructure Preview in Azure
-description: Overview of the BareMetal Infrastructure in Azure.
+ Title: Overview of BareMetal Infrastructure on Azure
+description: Provides an overview of the BareMetal Infrastructure on Azure.
Previously updated : 1/4/2021+ Last updated : 04/06/2021
-# What is BareMetal Infrastructure Preview on Azure?
+# What is BareMetal Infrastructure on Azure?
-Azure BareMetal Infrastructure provides a secure solution for migrating enterprise custom workloads. The BareMetal instances are non-shared host/server hardware assigned to you. It unlocks porting your on-prem solution with specialized workloads requiring certified hardware, licensing, and support agreements. Azure handles infrastructure monitoring and maintenance for you, while in-guest operating system (OS) monitoring and application monitoring fall within your ownership.
+Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access, and control over the operating system (OS). To meet such a need, Azure offers BareMetal Infrastructure for several high-value and mission-critical applications.
+
+BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances), high-performance and application-appropriate storage (NFS, dNFS, ISCSI, and Fiber Channel), as well as a set of function-specific virtual LANs (VLANs) in an isolated environment. Storage can be shared across BareMetal instances to enable features like scale-out clusters or for creating high availability pairs with STONITH.
+
+This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription.
+
+BareMetal Infrastructure is offered in over 30 SKUs, from 2-socket to 24-socket servers, with memory ranging from 1.5 TB up to 24 TB. A large set of SKUs is also available with Intel Optane memory. Azure offers the largest range of bare metal instances in a hyperscale cloud.
+
+## Why BareMetal Infrastructure?
+
+Some central workloads in the enterprise are made up of technologies that just aren't designed to run in a typical virtualized cloud setting. They require special architecture, certified hardware, or extraordinarily large sizes. Although those technologies have the most sophisticated data protection and business continuity features, those features aren't built for the virtualized cloud. They're more sensitive to latency and noisy neighbors, and they require much more control over change management and maintenance activity.
+
+BareMetal Infrastructure is built, certified, and tested for a select set of such applications. Azure was the first to offer such solutions, and has since led with the largest portfolio and most sophisticated systems.
+
+BareMetal Infrastructure offers these benefits:
+
+- Dedicated instances
+- Certified hardware for specialized workloads
+ - SAP (Refer to [SAP Note #1928533](https://launchpad.support.sap.com/#/notes/1928533))
+ - Oracle (Refer to [Oracle document ID #948372.1](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=52088246571495&id=948372.1&_adf.ctrl-state=kwnkj1hzm_52))
+- Bare metal (no compute virtualization)
+- Low latency between Azure-hosted application VMs and BareMetal instances (0.35 ms)
+- All Flash SSD and NVMe
+ - Up to 1 PB/tenant
+ - IOPS up to 1.2 million/tenant
+ - 40/100-Gbps network bandwidth
+ - Accessible via NFS, dNFS, ISCSI, and FC
+- Redundant power, power supplies, NICs, TORs, ports, WANs, storage, and management
+- Hot spares for replacement on a failure (without the need for reconfiguring)
+- Customer coordinated maintenance windows
+- Application aware snapshots, archive, mirroring, and cloning
-BareMetal Infrastructure provides a path to modernize your infrastructure landscape while maintaining your existing investments and architecture. With BareMetal Infrastructure, you can bring specialized workloads to Azure, allowing you access and integration with Azure services with low latency.
## SKU availability in Azure regions
-BareMetal Infrastructure for specialized and general-purpose workloads is available, starting with four regions based on Revision 4.2 (Rev 4.2) stamps:
+
+BareMetal Infrastructure offers multiple SKUs certified for specialized workloads. Use the workload-specific SKUs to meet your needs.
+
+- Large instances – Ranging from two-socket to four-socket systems.
+- Very Large instances – Ranging from four-socket to twenty-socket systems.
+
+BareMetal Infrastructure for specialized workloads is available in the following Azure regions:
- West Europe - North Europe-- East US 2
+- Germany West Central *zones support
+- East US 2 *zones support
+- East US *zones support
+- West US *zones support
+- West US 2 *zones support
- South Central US >[!NOTE]
->**Rev 4.2** is the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
+>**Zones support** refers to availability zones within a region where BareMetal instances can be deployed across zones for high resiliency and availability. This capability enables support for multi-site active-active scaling.
+
+## Managing BareMetal instances in Azure
+
+Depending on your needs, the application topologies of BareMetal Infrastructure can be complex. You may deploy multiple instances, in one or more locations, with shared or dedicated storage, and specialized LAN and WAN connections. So for BareMetal Infrastructure, Azure captures that information consultatively through a cloud solution architect (CSA) or Global Black Belt (GBB) in the field, using a provisioning portal.
-## Support
-BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
+By the time your BareMetal Infrastructure is provisioned, the OS, networks, storage volumes, placements in zones and regions, and WAN connections between locations are already preconfigured. You are set to register your OS licenses (BYOL), configure the OS, and install the application layer.
+
+You will be able to see all the BareMetal Infrastructure resources, and their state and attributes, in the Azure portal. You can also operate the instances and open service requests and support tickets from there.
+
+## Operational model
+BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
As soon as you receive root access and full control, you assume responsibility for:-- Designing and implementing backup and recovery solutions, high availability, and disaster recovery-- Licensing, security, and support for OS and third-party software
+- Designing and implementing backup and recovery solutions, high availability, and disaster recovery.
+- Licensing, security, and support for the OS and third-party software.
Microsoft is responsible for:-- Providing the hardware for specialized workloads -- Provisioning the OS
+- Providing the hardware for specialized workloads.
+- Provisioning the OS.
-## Compute
-BareMetal Infrastructure offers multiple SKUs for specialized workloads. Available SKUs available range from the smaller two-socket system to the 24-socket system. Use the workload-specific SKUs for your specialized workload.
+## BareMetal instance stamp
The BareMetal instance stamp itself combines the following components: -- **Computing:** Servers based on a different generation of Intel Xeon processors that provide the necessary computing capability and are certified for the specialized workload.
+- **Computing:** Servers based on the generation of Intel Xeon processors that provide the necessary computing capability and are certified for the specialized workload.
- **Network:** A unified high-speed network fabric interconnects computing, storage, and LAN components. - **Storage:** An infrastructure accessed through a unified network fabric.
-Within the multi-tenant infrastructure of the BareMetal stamp, customers are deployed in isolated tenants. When deploying a tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one that BareMetal instances are billed.
+Within the multi-tenant infrastructure of the BareMetal stamp, customers are deployed in isolated tenants. When deploying a tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one billed for your BareMetal instances.
>[!NOTE]
->A customer deployed in the BareMetal instance gets isolated into a tenant. A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants cannot see each other or communicate with each other on the BareMetal instances.
+>A customer deploying a BareMetal instance is isolated into a tenant. A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to different tenants cannot see each other or communicate with each other on their BareMetal instances.
-## OS
+## Operating system
During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines. >[!NOTE] >Remember, BareMetal Infrastructure is a BYOL model. The available Linux OS versions are:-- Red Hat Enterprise Linux (RHEL) 7.6
+- Red Hat Enterprise Linux (RHEL)
- SUSE Linux Enterprise Server (SLES)
- - SLES 12 SP2
- - SLES 12 SP3
- - SLES 12 SP4
- - SLES 12 SP5
- - SLES 15 SP1
## Storage
-BareMetal instances based on specific SKU type come with predefined NFS storage for the specific workload type. When you provision BareMetal, you can provision more storage based on your estimated growth by submitting a support request. All storage comes with an all-flash disk in Revision 4.2 with support for NFSv3 and NFSv4. The newer Revision 4.5 NVMe SSD will be available. For more information on storage sizing, see the [BareMetal workload type](../virtual-machines/workloads/sap/get-started.md) section.
-
->[!NOTE]
->The storage used for BareMetal meets [Federal Information Processing Standard (FIPS) Publication 140-2](/microsoft-365/compliance/offering-fips-140-2) requirements offering Encryption at Rest by default. The data is stored securely on the disks.
+BareMetal Infrastructure provides highly redundant NFS storage and Fiber Channel storage. The infrastructure offers deep integration for enterprise workloads like SAP, SQL, and more. It also provides application-consistent data protection and data-management capabilities. The self-service management tools offer space-efficient snapshot, cloning, and granular replication capabilities along with single pane of glass monitoring. The infrastructure enables zero RPO and RTO capabilities for data availability and business continuity needs.
+
+The storage infrastructure offers:
+- Up to 4 x 100-Gbps uplinks.
+- Up to 32-Gbps Fiber Channel uplinks.
+- All-flash SSD and NVMe drives.
+- Ultra-low latency and high throughput.
+- Scales up to 4 PB of raw storage.
+- Up to 11 million IOPS.
+
+These data access protocols are supported:
+- iSCSI
+- NFS (v3 or v4)
+- Fiber Channel
+- NVMe over FC
## Networking
-The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It's likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances is:
+The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It's likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances includes:
-- Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network assets.-- An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher.-- Extended Active directory and DNS in Azure or completely running in Azure.
+- Azure virtual networks connected to the Azure ExpressRoute circuit that connects to your on-premises network assets.
+- The ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher.
+- Extended Active Directory and DNS in Azure, or completely running in Azure.
-Using ExpressRoute lets you extend your on-premises network into Microsoft cloud over a private connection with a connectivity provider's help. You can enable **ExpressRoute Premium** to extend connectivity across geopolitical boundaries or use **ExpressRoute Local** for cost-effective data transfer between the location near the Azure region you want.
+ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can use **ExpressRoute Local** for cost-effective data transfer between your on-premises location and the Azure region you want. To extend connectivity across geopolitical boundaries, you can enable **ExpressRoute Premium**.
-BareMetal instances are provisioned within your Azure VNET server IP address range.
+BareMetal instances are provisioned within your Azure VNet server IP address range.
The architecture shown is divided into three sections:-- **Left:** shows the customer on-premise infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).-- **Center:** shows [ExpressRoute](../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.-- **Right:** shows Azure IaaS, and in this case use of VMs to host your applications, which are provisioned within your Azure virtual network.-- **Bottom:** shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+- **Left:** Shows the customer on-premises infrastructure that runs different applications, connecting through the partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../expressroute/expressroute-locations.md).
+- **Center:** Shows [ExpressRoute](../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.
+- **Right:** Shows Azure IaaS, and in this case, use of VMs to host your applications, which are provisioned within your Azure virtual network.
+- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
>[!TIP]
- >To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
+ >To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
## Next steps
-The next step is to learn how to identify and interact with BareMetal Instance units through the Azure portal.
+The next step is to learn how to identify and interact with BareMetal instances through the Azure portal.
> [!div class="nextstepaction"]
-> [Manage BareMetal Instances through the Azure portal](connect-baremetal-infrastructure.md)
+> [Manage BareMetal instances through the Azure portal](connect-baremetal-infrastructure.md)
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Instance units in Azure
-description: Learn how to identify and interact with BareMetal Instance units the Azure portal or Azure CLI.
+ Title: Connect BareMetal Infrastructure instances in Azure
+description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI.
Previously updated : 03/19/2021+ Last updated : 04/06/2021
-# Connect BareMetal Instance units in Azure
-
-This article shows how the [Azure portal](https://portal.azure.com/) displays [BareMetal Instances](concepts-baremetal-infrastructure-overview.md). This article also shows you the activities you can do in the Azure portal with your deployed BareMetal Instance units.
+# Connect BareMetal Infrastructure instances in Azure
+
+This article shows how the [Azure portal](https://portal.azure.com/) displays [BareMetal instances](concepts-baremetal-infrastructure-overview.md). This article also shows you what you can do in the Azure portal with your deployed BareMetal Infrastructure instances.
## Register the resource provider
-An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal Instance units, you must register the resource provider with your subscription.
+An Azure resource provider for BareMetal instances provides visibility of the instances in the Azure portal. By default, the Azure subscription you use for BareMetal instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal instances, you must register the resource provider with your subscription.
-You can register the BareMetal Instance resource provider by using the Azure portal or Azure CLI.
+You can register the BareMetal instance resource provider by using the Azure portal or Azure CLI.
### [Portal](#tab/azure-portal)
-You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy your BareMetal Instance units.
+You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy your BareMetal instances.
1. Sign in to the [Azure portal](https://portal.azure.com).
You'll need to list your subscription in the Azure portal and then double-click
1. In the **All services** box, enter **subscription**, and then select **Subscriptions**.
-1. Select the subscription from the subscription list to view.
+1. Select the subscription from the subscription list.
1. Select **Resource providers** and enter **BareMetalInfrastructure** into the search. The resource provider should be **Registered**, as the image shows. >[!NOTE] >If the resource provider is not registered, select **Register**. ### [Azure CLI](#tab/azure-cli)
To begin using Azure CLI:
[!INCLUDE [azure-cli-prepare-your-environment-no-header](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-Sign in to the Azure subscription you use for the BareMetal Instance deployment through the Azure CLI. Register the `BareMetalInfrastructure` resource provider with the [az provider register](/cli/azure/provider#az_provider_register) command:
+Sign in to the Azure subscription you use for the BareMetal instance deployment through the Azure CLI. Register the `BareMetalInfrastructure` resource provider with the [az provider register](/cli/azure/provider#az_provider_register) command:
```azurecli az provider register --namespace Microsoft.BareMetalInfrastructure
You can use the [az provider list](/cli/azure/provider#az_provider_list) command
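To confirm the provider's registration state from the command line, you can run a query like this sketch:

```azurecli
# Returns "Registered" once the resource provider registration completes
az provider show \
  --namespace Microsoft.BareMetalInfrastructure \
  --query registrationState \
  --output tsv
```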
For more information about resource providers, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
-## BareMetal Instance units in the Azure portal
+## BareMetal instances in the Azure portal
-When you submit a BareMetal Instance deployment request, you'll specify the Azure subscription that you're connecting to the BareMetal Instances. Use the same subscription you use to deploy the application layer that works against the BareMetal Instance units.
+When you submit a BareMetal instance deployment request, you'll specify the Azure subscription that you're connecting to the BareMetal instances. Use the same subscription you use to deploy the application layer that works against the BareMetal instances.
-During the deployment of your BareMetal Instances, a new [Azure resource group](../azure-resource-manager/management/manage-resources-portal.md) gets created in the Azure subscription you used in the deployment request. This new resource group lists all your BareMetal Instance units you've deployed in the specific subscription.
+During the deployment of your BareMetal instances, a new [Azure resource group](../azure-resource-manager/management/manage-resources-portal.md) gets created in the Azure subscription you used in the deployment request. This new resource group lists all of the BareMetal instances you've deployed in that subscription.
### [Portal](#tab/azure-portal) 1. In the BareMetal subscription, in the Azure portal, select **Resource groups**.
- :::image type="content" source="media/baremetal-infrastructure-portal/view-baremetal-instance-units-azure-portal.png" alt-text="Screenshot that shows the list of Resource Groups":::
+ :::image type="content" source="media/connect-baremetal-infrastructure/view-baremetal-instances-azure-portal.png" alt-text="Screenshot showing the list of Resource groups.":::
1. In the list, locate the new resource group.
- :::image type="content" source="media/baremetal-infrastructure-portal/filter-resource-groups.png" alt-text="Screenshot that shows the BareMetal Instance unit in a filtered Resource groups list" lightbox="media/baremetal-infrastructure-portal/filter-resource-groups.png":::
+ :::image type="content" source="media/connect-baremetal-infrastructure/filter-resource-groups.png" alt-text="Screenshot showing the BareMetal instance in a filtered Resource groups list." lightbox="media/connect-baremetal-infrastructure/filter-resource-groups.png":::
>[!TIP]
- >You can filter on the subscription you used to deploy the BareMetal Instance. After you filter to the proper subscription, you might have a long list of resource groups. Look for one with a post-fix of **-Txxx** where xxx is three digits like **-T250**.
+ >You can filter on the subscription you used to deploy the BareMetal instance. After you filter to the proper subscription, you might have a long list of resource groups. Look for one with a post-fix of **-Txxx** where xxx is three digits like **-T250**.
-1. Select the new resource group to show the details of it. The image shows one BareMetal Instance unit deployed.
+1. Select the new resource group to view its details. The image shows one BareMetal instance deployed.
>[!NOTE]
- >If you deployed several BareMetal Instance tenants under the same Azure subscription, you would see multiple Azure resource groups.
+ >If you deployed several BareMetal instance tenants under the same Azure subscription, you will see multiple Azure resource groups.
### [Azure CLI](#tab/azure-cli)
-To see all your BareMetal Instances, run the [az baremetalinstance list](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_list) command for your resource group:
+To see all your BareMetal instances, run the [az baremetalinstance list](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_list) command for your resource group:
```azurecli az baremetalinstance list --resource-group DSM05A-T550 ΓÇôoutput table
az baremetalinstance list --resource-group DSM05A-T550 ΓÇôoutput table
## View the attributes of a single instance
-You can view the details of a single unit.
+You can view the details of a single instance.
### [Portal](#tab/azure-portal)
-In the list of the BareMetal instance, select the single instance you want to view.
+In the list of BareMetal instances, select the single instance you want to view.
-The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left, you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see them here as well. By default, the BareMetal Instance units don't have tags assigned.
+The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left, you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see them here as well. By default, the BareMetal instances don't have tags assigned.
-On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however, don't indicate whether it's up and running.
+On the right, you'll see the name of the BareMetal instance, operating system (OS), IP address, and SKU that shows the number of CPU threads and memory. You'll also see the power state and hardware version (revision of the BareMetal instance stamp). The power state indicates whether the hardware unit is powered on or off. The operating system details, however, don't indicate whether it's up and running.
The possible hardware revisions are:
The possible hardware revisions are:
* Revision 4.2 (Rev 4.2) >[!NOTE]
->Rev 4.2 is the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal. For more information, see [BareMetal Infrastructure on Azure](concepts-baremetal-infrastructure-overview.md).
+>Rev 4.2 is the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and SAP HANA instances. You can access and manage your BareMetal instances through the Azure portal. For more information, see [BareMetal Infrastructure on Azure](concepts-baremetal-infrastructure-overview.md).
+
-Also, on the right side, you'll find the [Azure Proximity Placement Group's](../virtual-machines/co-location.md) name, which is created automatically for each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs that host the application layer. When you use the Proximity Placement Group associated with the BareMetal Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.
+Also, on the right side, you'll find the [Azure proximity placement group's](../virtual-machines/co-location.md) name, which is created automatically for each deployed BareMetal instance. Reference the proximity placement group when you deploy the Azure VMs that host the application layer. When you use the proximity placement group associated with the BareMetal instance, you ensure that the Azure VMs get deployed close to the BareMetal instance.
>[!TIP] >To locate the application layer in the same Azure datacenter as Revision 4.x, see [Azure proximity placement groups for optimal network latency](/azure/virtual-machines/workloads/sap/sap-proximity-placement-scenarios). ### [Azure CLI](#tab/azure-cli)
-To see details of a BareMetal Instance, run the [az baremetalinstance show](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_show) command:
+To see details of a BareMetal instance, run the [az baremetalinstance show](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_show) command:
```azurecli az baremetalinstance show --resource-group DSM05A-T550 --instance-name orcllabdsm01
If you're uncertain of the instance name, run the `az baremetalinstance list` co
## Check activities of a single instance
-You can check the activities of a single unit. One of the main activities recorded are restarts of the unit. The data listed includes the activity's status, timestamp the activity triggered, subscription ID, and the Azure user who triggered the activity.
+You can check the activities of a single BareMetal instance. One of the main activities recorded is restarts of the instance. The data listed includes the activity's status, the timestamp when the activity was triggered, the subscription ID, and the Azure user who triggered the activity.
-Changes to the unit's metadata in Azure also get recorded in the Activity log. Besides the restart initiated, you can see the activity of **Write BareMetallnstances**. This activity makes no changes on the BareMetal Instance unit itself but documents the changes to the unit's metadata in Azure.
+Changes to the instance's metadata in Azure also get recorded in the Activity log. Besides initiated restarts, you can see the activity **Write BareMetalInstances**. This activity makes no changes on the BareMetal instance itself but documents the changes to the instance's metadata in Azure.
Another activity that gets recorded is when you add or delete a [tag](../azure-resource-manager/management/tag-resources.md) to an instance.
Another activity that gets recorded is when you add or delete a [tag](../azure-r
### [Portal](#tab/azure-portal)
-You can add Azure tags to a BareMetal Instance unit or delete them. The way tags get assigned doesn't differ from assigning tags to VMs. As with VMs, the tags exist in the Azure metadata, and for BareMetal Instances, they have the same restrictions as the tags for VMs.
+You can add Azure tags to a BareMetal instance or delete them. Tags get assigned just as they do when assigning tags to VMs. As with VMs, the tags exist in the Azure metadata. Tags have the same restrictions for BareMetal instances as for VMs.
-Deleting tags work the same way as with VMs. Applying and deleting a tag are listed in the BareMetal Instance unit's Activity log.
+Deleting tags also works the same way as for VMs. Applying and deleting a tag is listed in the BareMetal instance's Activity log.
### [Azure CLI](#tab/azure-cli)
-Assigning tags to BareMetal Instances works the same as for virtual machines. The tags exist in the Azure metadata, and for BareMetal Instances, they have the same restrictions as the tags for VMs.
+Assigning tags to BareMetal instances works the same as assigning tags for virtual machines. As with VMs, the tags exist in the Azure metadata. Tags have the same restrictions for BareMetal instances as for VMs.
-To add tags to a BareMetal Instance unit, run the [az baremetalinstance update](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_update) command:
+To add tags to a BareMetal instance, run the [az baremetalinstance update](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_update) command:
```azurecli az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllabdsm01 --set tags.Dept=Finance tags.Status=Normal
az baremetalinstance update --resource-group DSM05a-T550 --instance-name orcllab
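Removing a tag follows the generic update pattern. This sketch assumes the standard `--remove` syntax applies to `az baremetalinstance update`, reusing the example resource group and instance name from above:

```azurecli
# Remove the Dept tag from the BareMetal instance
az baremetalinstance update \
  --resource-group DSM05a-T550 \
  --instance-name orcllabdsm01 \
  --remove tags.Dept
```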
## Check properties of an instance
-When you acquire the instances, you can go to the Properties section to view the data collected about the instances. The data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage snapshot configuration.
+When you acquire the instances, you can go to the Properties section to view the data collected about the instances. Data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage snapshot configuration.
-Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal Instance stack. You'll use this IP address when you edit the [configuration file for storage snapshot backups](../virtual-machines/workloads/sap/hana-backup-restore.md#set-up-storage-snapshots).
+Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal instance stack. You'll use this IP address when you edit the [configuration file for storage snapshot backups](../virtual-machines/workloads/sap/hana-backup-restore.md#set-up-storage-snapshots).
-## Restart a unit through the Azure portal
+## Restart a BareMetal instance through the Azure portal
-There are various situations where the OS won't finish a restart, which requires a power restart of the BareMetal Instance unit.
+There are various situations where the OS won't finish a restart, which requires a power restart of the BareMetal instance.
### [Portal](#tab/azure-portal)
-You can do a power restart of the unit directly from the Azure portal:
+You can do a power restart of the instance directly from the Azure portal:
-Select **Restart** and then **Yes** to confirm the restart of the unit.
+Select **Restart** and then **Yes** to confirm the restart.
-When you restart a BareMetal Instance unit, you'll experience a delay. During this delay, the power state moves from **Starting** to **Started**, which means the OS has started up completely. As a result, after a restart, you can't log into the unit as soon as the state switches to **Started**.
+When you restart a BareMetal instance, you'll experience a delay. During this delay, the power state moves from **Starting** to **Started**, which means the OS has started up completely. As a result, after a restart, you can only log into the instance once the state switches to **Started**.
### [Azure CLI](#tab/azure-cli)
-To restart a BareMetal Instance unit, use the [az baremetalinstance restart](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_restart) command:
+To restart a BareMetal instance, use the [az baremetalinstance restart](/cli/azure/ext/baremetal-infrastructure/baremetalinstance#ext_baremetal_infrastructure_az_baremetalinstance_restart) command:
```azurecli az baremetalinstance restart --resource-group DSM05a-T550 --instance-name orcllabdsm01
az baremetalinstance restart --resource-group DSM05a-T550 --instance-name orclla
>[!IMPORTANT]
->Depending on the amount of memory in your BareMetal Instance unit, a restart and a reboot of the hardware and the operating system can take up to one hour.
+>Depending on the amount of memory in your BareMetal instance, a restart and a reboot of the hardware and operating system can take up to one hour.
-## Open a support request for BareMetal Instances
+## Open a support request for BareMetal instances
-You can submit support requests specifically for a BareMetal Instance unit.
+You can submit support requests specifically for BareMetal instances.
1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
- - **Issue type:** Select an issue type
+ - **Issue type:** Select an issue type.
- - **Subscription:** Select your subscription
+ - **Subscription:** Select your subscription.
- **Service:** BareMetal Infrastructure
- - **Resource:** Provide the name of the instance
+ - **Resource:** Provide the name of the instance.
- - **Summary:** Provide a summary of your request
+ - **Summary:** Provide a summary of your request.
- - **Problem type:** Select a problem type
+ - **Problem type:** Select a problem type.
- - **Problem subtype:** Select a subtype for the problem
+ - **Problem subtype:** Select a subtype for the problem.
1. Select the **Solutions** tab to find a solution to your problem. If you can't find a solution, go to the next step.
-1. Select the **Details** tab and select whether the issue is with VMs or the BareMetal Instance units. This information helps direct the support request to the correct specialists.
+1. Select the **Details** tab and select whether the issue is with VMs or BareMetal instances. This information helps direct the support request to the correct specialists.
1. Indicate when the problem began and select the instance region.
It takes up to five business days for a support representative to confirm your r
## Next steps
-If you want to learn more about the workloads, see [BareMetal workload types](../virtual-machines/workloads/sap/get-started.md).
+Learn more about workloads:
+
+- [What is SAP HANA on Azure (Large Instances)?](../virtual-machines/workloads/sap/hana-overview-architecture.md)
baremetal-infrastructure Know Baremetal Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/know-baremetal-terms.md
Title: Know the terms of Azure BareMetal Infrastructure description: Know the terms of Azure BareMetal Infrastructure. Previously updated : 1/4/2021+ Last updated : 04/06/2021 # Know the terms for BareMetal Infrastructure
-In this article, we'll cover some important BareMetal terms.
+In this article, we'll cover some important terms related to the BareMetal Infrastructure.
-- **Revision**: There's an original stamp revision known as Revision 3 (Rev 3), and two different stamp revisions for BareMetal Instance stamps. Each stamp differs in architecture and proximity to Azure virtual machine hosts:
- - **Revision 4** (Rev 4): a newer design that provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units.
- - **Revision 4.2** (Rev 4.2): the latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
+- **Revision**: There's an original stamp revision known as Revision 3 (Rev 3), and two additional stamp revisions for BareMetal instance stamps. Each stamp differs in architecture and proximity to Azure virtual machine hosts:
+ - **Revision 4** (Rev 4): A newer design that provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and SAP HANA instances.
+ - **Revision 4.2** (Rev 4.2): The latest rebranded BareMetal Infrastructure using the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts. It has significant improvements in network latency between Azure VMs and BareMetal instances deployed in Rev 4 stamps or rows. You can access and manage your BareMetal instances through the Azure portal.
-- **Stamp**: Defines the Microsoft internal deployment size of BareMetal Instances. Before instance units can get deployed, a BareMetal Instance stamp consisting of compute, network, and storage racks must be deployed in a datacenter location. Such a deployment is called a BareMetal Instance stamp or from Revision 4.2.
+- **Stamp**: Defines the Microsoft internal deployment size of BareMetal instances. Before instances can be deployed, a BareMetal instance stamp consisting of compute, network, and storage racks must be deployed in a datacenter location. Such a deployment is called a BareMetal instance stamp.
-- **Tenant**: A customer deployed in BareMetal Instance stamp gets isolated into a *tenant.* A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants can't see each other or communicate with each other on the BareMetal Instance stamp level. A customer can choose to have deployments into different tenants. Even then, there's no communication between tenants on the BareMetal Instance stamp level.
+- **Tenant**: A customer deploying a BareMetal instance stamp gets isolated as a *tenant.* A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants can't see each other or communicate with each other on the BareMetal instance stamp level. A customer can choose to have deployments into different tenants. Even then, there's no communication between tenants on the BareMetal instance stamp level.
## Next steps
-Learn more about the [BareMetal Infrastructure](concepts-baremetal-infrastructure-overview.md) or how to [identify and interact with BareMetal Instance units](connect-baremetal-infrastructure.md).
+Now that you've been introduced to important terminology of the BareMetal Infrastructure, you may want to learn about:
+- More details of the [BareMetal Infrastructure](concepts-baremetal-infrastructure-overview.md).
+- How to [Connect BareMetal Infrastructure instances in Azure](connect-baremetal-infrastructure.md).
+
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-create-host-powershell.md
Verify that you have an Azure subscription. If you don't already have an Azure s
[!INCLUDE [PowerShell](../../includes/vpn-gateway-cloud-shell-powershell-about.md)]
+ >[!NOTE]
+ >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ >
+ ## <a name="createhost"></a>Create a bastion host This section helps you create a new Azure Bastion resource using Azure PowerShell.
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/create-host-cli.md
Verify that you have an Azure subscription. If you don't already have an Azure s
[!INCLUDE [Cloud Shell CLI](../../includes/vpn-gateway-cloud-shell-cli.md)]
+ >[!NOTE]
+ >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ >
+ ## <a name="createhost"></a>Create a bastion host This section helps you create a new Azure Bastion resource using Azure CLI.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
If you don't have an Azure subscription, create a [free account](https://azure
* Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM: * Inbound ports: RDP (3389)
+ >[!NOTE]
+ >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ >
+ ## Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com).
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
Title: Call the Image Analysis API
-description: Learn how to call the Image Analysis API by using the REST API in Azure Cognitive Services.
+description: Learn how to call the Image Analysis API and configure its behavior.
-+
# Call the Image Analysis API
-This article demonstrates how to call the Image Analysis API by using the REST API. The samples are written both in C# by using the Image Analysis API client library and as HTTP POST or GET calls. The article focuses on:
+This article demonstrates how to call the Image Analysis API to return information about an image's visual features.
-- Getting tags, a description, and categories-- Getting domain-specific information, or "celebrities"-
-The examples in this article demonstrate the following features:
-
-* Analyzing an image to return an array of tags and a description
-* Analyzing an image with a domain-specific model (specifically, the "celebrities" model) to return the corresponding result in JSON
-
-The features offer the following options:
--- **Option 1**: Scoped Analysis - Analyze only a specified model-- **Option 2**: Enhanced Analysis - Analyze to provide additional details by using [86-categories taxonomy](../Category-Taxonomy.md)-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-* An image URL or a path to a locally stored image
-* Supported input methods: a raw image binary in the form of an application/octet-stream, or an image URL
-* Supported image file formats: JPEG, PNG, GIF, and BMP
-* Image file size: 4 MB or less
-* Image dimensions: 50 &times; 50 pixels or greater
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">created a Computer Vision resource</a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
-## Authorize the API call
-
-Every call to the Image Analysis API requires a subscription key. This key must be either passed through a query string parameter or specified in the request header.
-
-You can pass the subscription key by doing any of the following:
-
-* Pass it through a query string, as in this example:
-
- ```
- https://westus.api.cognitive.microsoft.com/vision/v2.1/analyze?visualFeatures=Description,Tags&subscription-key=<Your subscription key>
- ```
-
-* Specify it in the HTTP request header:
-
- ```
- ocp-apim-subscription-key: <Your subscription key>
- ```
-
-* When you use the client library, pass the key through the constructor of ComputerVisionClient, and specify the region in a property of the client:
-
- ```
- var visionClient = new ComputerVisionClient(new ApiKeyServiceClientCredentials("Your subscriptionKey"))
- {
- Endpoint = "https://westus.api.cognitive.microsoft.com"
- }
- ```
-
-## Upload an image to the Image Analysis service
-
-The basic way to perform the Image Analysis API call is by uploading an image directly to return tags, a description, and celebrities. You do this by sending a "POST" request with the binary image in the HTTP body together with the data read from the image. The upload method is the same for all Image Analysis API calls. The only difference is the query parameters that you specify.
+## Submit data to the service
-For a specified image, get tags and a description by using either of the following options:
+You submit either a local image or a remote image to the Analyze API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
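As a rough sketch (not part of the original article), the two request body formats might be built like this in C# with `HttpClient` content types; the file path and image URL are placeholders:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Remote image: a small JSON body that carries the image URL.
var remoteBody = new StringContent(
    "{\"url\":\"http://example.com/images/test.jpg\"}",
    Encoding.UTF8,
    "application/json");

// Local image: the raw binary data read from disk, sent as an octet stream.
var localBody = new ByteArrayContent(File.ReadAllBytes(@"C:\images\test.jpg"));
localBody.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
```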
-### Option 1: Get a list of tags and a description
+## Determine how to process the data
-```
-POST https://westus.api.cognitive.microsoft.com/vision/v2.1/analyze?visualFeatures=Description,Tags&subscription-key=<Your subscription key>
-```
-
-```csharp
-using System.IO;
-using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
-using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
+### Select visual features
-ImageAnalysis imageAnalysis;
-var features = new VisualFeatureTypes[] { VisualFeatureTypes.Tags, VisualFeatureTypes.Description };
+The [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b) gives you access to all of the service's image analysis features. You need to specify which features you want to use by setting the URL query parameters. A parameter can have multiple values, separated by commas. Each feature you specify will require additional computation time, so only specify what you need.
-using (var fs = new FileStream(@"C:\Vision\Sample.jpg", FileMode.Open))
-{
- imageAnalysis = await visionClient.AnalyzeImageInStreamAsync(fs, features);
-}
-```
+|URL parameter | Value | Description|
+|||--|
+|`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected.|
+||`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
+||`Categories` | categorizes image content according to a taxonomy defined in documentation. This is the default value of `visualFeatures`.|
+||`Color` | determines the accent color, dominant color, and whether an image is black&white.|
+||`Description` | describes the image content with a complete sentence in supported languages.|
+||`Faces` | detects if faces are present. If present, generates coordinates, gender, and age.|
+||`ImageType` | detects if image is clip art or a line drawing.|
+||`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+||`Tags` | tags the image with a detailed list of words related to the image content.|
+|`details`| `Celebrities` | identifies celebrities if detected in the image.|
+||`Landmarks` |identifies landmarks if detected in the image.|
-### Option 2: Get a list of tags only or a description only
-
-For tags only, run:
-
-```
-POST https://westus.api.cognitive.microsoft.com/vision/v2.1/tag?subscription-key=<Your subscription key>
-var tagResults = await visionClient.TagImageAsync("http://contoso.com/example.jpg");
-```
-
-For a description only, run:
-
-```
-POST https://westus.api.cognitive.microsoft.com/vision/v2.1/describe?subscription-key=<Your subscription key>
-using (var fs = new FileStream(@"C:\Vision\Sample.jpg", FileMode.Open))
-{
- imageDescription = await visionClient.DescribeImageInStreamAsync(fs);
-}
-```
-
-## Get domain-specific analysis (celebrities)
-
-### Option 1: Scoped analysis - Analyze only a specified model
-```
-POST https://westus.api.cognitive.microsoft.com/vision/v2.1/models/celebrities/analyze
-var celebritiesResult = await visionClient.AnalyzeImageInDomainAsync(url, "celebrities");
-```
+A populated URL might look like the following:
-For this option, all other query parameters {visualFeatures, details} are not valid. If you want to see all supported models, use:
+`https://{endpoint}/vision/v2.1/analyze?visualFeatures=Description,Tags&details=Celebrities`
-```
-GET https://westus.api.cognitive.microsoft.com/vision/v2.1/models
-var models = await visionClient.ListModelsAsync();
-```
+### Specify languages
-### Option 2: Enhanced analysis - Analyze to provide additional details by using 86-categories taxonomy
+You can also specify the language of the returned data. The following URL query parameter specifies the language. The default value is `en`.
-For applications where you want to get a generic image analysis in addition to details from one or more domain-specific models, extend the v1 API by using the models query parameter.
+|URL parameter | Value | Description|
+|||--|
+|`language`|`en` | English|
+||`es` | Spanish|
+||`ja` | Japanese|
+||`pt` | Portuguese|
+||`zh` | Simplified Chinese|
-```
-POST https://westus.api.cognitive.microsoft.com/vision/v2.1/analyze?details=celebrities
-```
+A populated URL might look like the following:
-When you invoke this method, you first call the [86-category](../Category-Taxonomy.md) classifier. If any of the categories matches that of a known or matching model, a second pass of classifier invocations occurs. For example, if "details=all" or "details" includes "celebrities," you call the celebrities model after you call the 86-category classifier. The result includes the category person. In contrast with Option 1, this method increases latency for users who are interested in celebrities.
+`https://{endpoint}/vision/v2.1/analyze?visualFeatures=Description,Tags&details=Celebrities&language=en`
-In this case, all v1 query parameters behave in the same way. If you don't specify visualFeatures=categories, it's implicitly enabled.
+> [!NOTE]
+> **Scoped API calls**
+>
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://{endpoint}/vision/v3.2-preview.3/tag`. See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
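Putting the submission format, visual features, and language options together, a hedged C# sketch of a full Analyze request might look like the following; the endpoint, key, and image URL are placeholder values, and the API version matches the URLs shown above:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AnalyzeImageSample
{
    static async Task Main()
    {
        // Placeholder values -- substitute your own resource endpoint and key.
        string endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
        string key = "<your-subscription-key>";

        string requestUrl =
            $"{endpoint}/vision/v2.1/analyze?visualFeatures=Description,Tags&details=Celebrities&language=en";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Remote image: pass its URL in the JSON request body.
        var body = new StringContent(
            "{\"url\":\"http://example.com/images/test.jpg\"}",
            Encoding.UTF8,
            "application/json");

        HttpResponseMessage response = await client.PostAsync(requestUrl, body);

        // The JSON payload contains the tags, description, and celebrity details
        // explained in the next section.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```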
-## Retrieve and understand the JSON output for analysis
+## Get results from the service
-Here's an example:
+The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following is an example of a JSON response.
```json {
Here's an example:
} ```
+See the following table for explanations of the fields in this example:
+ Field | Type | Content ||| Tags | `object` | The top-level object for an array of tags. tags[].Name | `string` | The keyword from the tags classifier. tags[].Score | `number` | The confidence score, between 0 and 1.
-description | `object` | The top-level object for a description.
-description.tags[] | `string` | The list of tags. If there is insufficient confidence in the ability to produce a caption, the tags might be the only information available to the caller.
+description | `object` | The top-level object for an image description.
+description.tags[] | `string` | The list of tags. If there is insufficient confidence in the ability to produce a caption, the tags might be the only information available to the caller.
description.captions[].text | `string` | A phrase describing the image. description.captions[].confidence | `number` | The confidence score for the phrase.
-## Retrieve and understand the JSON output of domain-specific models
-
-### Option 1: Scoped analysis - Analyze only a specified model
-
-The output is an array of tags, as shown in the following example:
-
-```json
-{
- "result":[
- {
- "name":"golden retriever",
- "score":0.98
- },
- {
- "name":"Labrador retriever",
- "score":0.78
- }
- ]
-}
-```
-
-### Option 2: Enhanced analysis - Analyze to provide additional details by using the "86-categories" taxonomy
-
-For domain-specific models using Option 2 (enhanced analysis), the categories return type is extended, as shown in the following example:
-
-```json
-{
- "requestId":"87e44580-925a-49c8-b661-d1c54d1b83b5",
- "metadata":{
- "width":640,
- "height":430,
- "format":"Jpeg"
- },
- "result":{
- "celebrities":[
- {
- "name":"Richard Nixon",
- "faceRectangle":{
- "left":107,
- "top":98,
- "width":165,
- "height":165
- },
- "confidence":0.9999827
- }
- ]
- }
-}
-```
-
-The categories field is a list of one or more of the [86 categories](../Category-Taxonomy.md) in the original taxonomy. Categories that end in an underscore match that category and its children (for example, "people_" or "people_group," for the celebrities model).
-
-Field | Type | Content
-|||
-categories | `object` | The top-level object.
-categories[].name | `string` | The name from the 86-category taxonomy list.
-categories[].score | `number` | The confidence score, between 0 and 1.
-categories[].detail | `object?` | (Optional) The detail object.
-
-If multiple categories match (for example, the 86-category classifier returns a score for both "people_" and "people_young," when model=celebrities), the details are attached to the most general level match ("people_," in that example).
-
-## Error responses
-
-These errors are identical to those in vision.analyze, with the additional NotSupportedModel error (HTTP 400), which might be returned in both the Option 1 and Option 2 scenarios. For Option 2 (enhanced analysis), if any of the models that are specified in the details isn't recognized, the API returns a NotSupportedModel, even if one or more of them are valid. To find out what models are supported, you can call listModels.
+### Error codes
+
+See the following list of possible errors and their causes:
+
+* 400
+ * InvalidImageUrl - Image URL is badly formatted or not accessible.
+ * InvalidImageFormat - Input data is not a valid image.
+ * InvalidImageSize - Input image is too large.
+ * NotSupportedVisualFeature - Specified feature type is not valid.
+ * NotSupportedImage - Unsupported image, e.g. child pornography.
+ * InvalidDetails - Unsupported `detail` parameter value.
+ * NotSupportedLanguage - The requested operation is not supported in the language specified.
+ * BadArgument - Additional details are provided in the error message.
+* 415 - Unsupported media type error. The Content-Type is not in the allowed types:
+ * For an image URL: Content-Type should be application/json
+ * For a binary image data: Content-Type should be application/octet-stream or multipart/form-data
+* 500
+ * FailedToProcess
+ * Timeout - Image processing timed out.
+ * InternalServerError
## Next steps
-To use the REST API, go to the [Image Analysis API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b).
+To try out the REST API, go to the [Image Analysis API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b).
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">created a Computer Vision resource</a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+ ## Submit data to the service The Read API's [Read call](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
The [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/servi
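Because the Read call is asynchronous, a client submits the document, captures the URL from the `Operation-Location` response header, and polls it until the operation finishes. The following C# sketch illustrates that pattern, assuming the v3.2 preview path referenced above; the endpoint, key, and document URL are placeholders:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReadApiSample
{
    static async Task Main()
    {
        // Placeholder values -- substitute your own resource endpoint and key.
        string endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
        string key = "<your-subscription-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Submit a remote image or PDF for asynchronous text extraction.
        var body = new StringContent(
            "{\"url\":\"https://example.com/sample-document.pdf\"}",
            Encoding.UTF8,
            "application/json");
        HttpResponseMessage submit =
            await client.PostAsync($"{endpoint}/vision/v3.2-preview.3/read/analyze", body);

        // The response carries an Operation-Location header that points at the result.
        string operationLocation = submit.Headers.GetValues("Operation-Location").First();

        // Poll until the operation reports a terminal status (simplified for brevity).
        string resultJson;
        do
        {
            await Task.Delay(1000);
            resultJson = await client.GetStringAsync(operationLocation);
        } while (!resultJson.Contains("succeeded") && !resultJson.Contains("failed"));

        Console.WriteLine(resultJson);
    }
}
```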
## Next steps
-To use the REST API, go to the [Read API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005).
+To try out the REST API, go to the [Read API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005).
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
If your computer has a slow connection to the Face service, that will impact the
Mitigations: - When you create your Face subscription, make sure to choose the region closest to where your application is hosted. - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.-- If longer latencies impact the user experience, choose a timeout threshold (e.g. maximum 5s) before retrying the API call
+- If longer latencies impact the user experience, choose a timeout threshold (for example, a maximum of 5 seconds) before retrying the API call, as sketched below.
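A minimal sketch of that timeout-and-retry pattern, assuming a generic `HttpClient` call (the request URL is a placeholder, and the single retry is illustrative rather than prescriptive):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TimeoutRetrySample
{
    static async Task<string> GetWithRetryAsync(string requestUrl)
    {
        // Give up on any single attempt after 5 seconds.
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await client.GetStringAsync(requestUrl);
            }
            catch (TaskCanceledException) when (attempt < 2)
            {
                // The call exceeded the timeout threshold; retry once.
            }
        }
    }
}
```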
## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Croatian (Croatia) | `hr-HR` | Text | | | Czech (Czech Republic) | `cs-CZ` | Text | | | Danish (Denmark) | `da-DK` | Text | Yes |
-| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text | Yes |
+| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes | | English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes | | English (Ghana) | `en-GH` | Text | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Estonian(Estonia) | `et-EE` | Text | | | Filipino (Philippines) | `fil-PH`| Text | | | Finnish (Finland) | `fi-FI` | Text | Yes |
-| French (Canada) | `fr-CA` | Audio (20201015)<br>Text | Yes |
+| French (Canada) | `fr-CA` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| French (Switzerland) | `fr-CH` | Text | |
-| German (Austria) | `de-AT` | Text | |
+| French (Switzerland) | `fr-CH` | Text<br>Pronunciation | |
+| German (Austria) | `de-AT` | Text<br>Pronunciation | |
| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes | | Greek (Greece) | `el-GR` | Text | | | Gujarati (Indian) | `gu-IN` | Text | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes | | Polish (Poland) | `pl-PL` | Text | Yes | | Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes |
-| Portuguese (Portugal) | `pt-PT` | Text | Yes |
+| Portuguese (Portugal) | `pt-PT` | Text<br>Pronunciation | Yes |
| Romanian (Romania) | `ro-RO` | Text | | | Russian (Russia) | `ru-RU` | Audio (20200907)<br>Text | Yes | | Slovak (Slovakia) | `sk-SK` | Text | | | Slovenian (Slovenia) | `sl-SI` | Text | |
-| Spanish (Argentina) | `es-AR` | Text | |
-| Spanish (Bolivia) | `es-BO` | Text | |
-| Spanish (Chile) | `es-CL` | Text | |
-| Spanish (Colombia) | `es-CO` | Text | |
-| Spanish (Costa Rica) | `es-CR` | Text | |
-| Spanish (Cuba) | `es-CU` | Text | |
-| Spanish (Dominican Republic) | `es-DO` | Text | |
-| Spanish (Ecuador) | `es-EC` | Text | |
-| Spanish (El Salvador) | `es-SV` | Text | |
+| Spanish (Argentina) | `es-AR` | Text<br>Pronunciation | |
+| Spanish (Bolivia) | `es-BO` | Text<br>Pronunciation | |
+| Spanish (Chile) | `es-CL` | Text<br>Pronunciation | |
+| Spanish (Colombia) | `es-CO` | Text<br>Pronunciation | |
+| Spanish (Costa Rica) | `es-CR` | Text<br>Pronunciation | |
+| Spanish (Cuba) | `es-CU` | Text<br>Pronunciation | |
+| Spanish (Dominican Republic) | `es-DO` | Text<br>Pronunciation | |
+| Spanish (Ecuador) | `es-EC` | Text<br>Pronunciation | |
+| Spanish (El Salvador) | `es-SV` | Text<br>Pronunciation | |
| Spanish (Equatorial Guinea) | `es-GQ` | Text | |
-| Spanish (Guatemala) | `es-GT` | Text | |
-| Spanish (Honduras) | `es-HN` | Text | |
-| Spanish (Mexico) | `es-MX` | Audio (20200907)<br>Text | Yes |
-| Spanish (Nicaragua) | `es-NI` | Text | |
-| Spanish (Panama) | `es-PA` | Text | |
-| Spanish (Paraguay) | `es-PY` | Text | |
-| Spanish (Peru) | `es-PE` | Text | |
-| Spanish (Puerto Rico) | `es-PR` | Text | |
-| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Text | Yes |
-| Spanish (Uruguay) | `es-UY` | Text | |
-| Spanish (USA) | `es-US` | Text | |
-| Spanish (Venezuela) | `es-VE` | Text | |
+| Spanish (Guatemala) | `es-GT` | Text<br>Pronunciation | |
+| Spanish (Honduras) | `es-HN` | Text<br>Pronunciation | |
+| Spanish (Mexico) | `es-MX` | Audio (20200907)<br>Text<br>Pronunciation| Yes |
+| Spanish (Nicaragua) | `es-NI` | Text<br>Pronunciation | |
+| Spanish (Panama) | `es-PA` | Text<br>Pronunciation | |
+| Spanish (Paraguay) | `es-PY` | Text<br>Pronunciation | |
+| Spanish (Peru) | `es-PE` | Text<br>Pronunciation | |
+| Spanish (Puerto Rico) | `es-PR` | Text<br>Pronunciation | |
+| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
+| Spanish (Uruguay) | `es-UY` | Text<br>Pronunciation | |
+| Spanish (USA) | `es-US` | Text<br>Pronunciation | |
+| Spanish (Venezuela) | `es-VE` | Text<br>Pronunciation | |
| Swedish (Sweden) | `sv-SE` | Text | Yes | | Tamil (India) | `ta-IN` | Text | | | Telugu (India) | `te-IN` | Text | |
cognitive-services Speech Service Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-service-vnet-service-endpoint.md
+
+ Title: How to use VNet service endpoints with Speech service
+
+description: Learn how to use Speech service with Virtual Network service endpoints
++++++ Last updated : 03/19/2021+++
+# Use Speech service through a Virtual Network service endpoint
+
+[Virtual Network](../../virtual-network/virtual-networks-overview.md) (VNet) [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
+
+This article explains how to set up and use VNet service endpoints with Speech service in Azure Cognitive Services.
+
+> [!NOTE]
+> Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
+
+This article also describes [how to remove VNet service endpoints later, but still use the Speech resource](#use-a-speech-resource-with-a-custom-domain-name-and-without-allowed-vnets).
+
+Setting up a Speech resource for the VNet service endpoint scenarios requires performing the following tasks:
+1. [Create Speech resource custom domain name](#create-a-custom-domain-name)
+1. [Configure VNet(s) and the Speech resource networking settings](#configure-vnets-and-the-speech-resource-networking-settings)
+1. [Adjust existing applications and solutions](#adjust-existing-applications-and-solutions)
+
+> [!NOTE]
+> Setting up and using VNet service endpoints for Speech service is very similar to setting up and using private endpoints. In this article, we reference the corresponding sections of the [article on using private endpoints](speech-services-private-link.md) when the content is equivalent.
++
+This article describes the usage of the VNet service endpoints with Speech service. Usage of the private endpoints is described [here](speech-services-private-link.md).
+
+## Create a custom domain name
+
+VNet service endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Create a custom domain by following [this section](speech-services-private-link.md#create-a-custom-domain-name) of the private endpoint article. Note that all warnings in that section also apply to the VNet service endpoint scenario.
+
+## Configure VNet(s) and the Speech resource networking settings
+
+You need to add all virtual networks that are allowed access via the service endpoint to the Speech resource's networking properties.
+
+> [!NOTE]
+> To access a Speech resource via the VNet service endpoint, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnet(s) of your VNet. This in effect routes **all** Cognitive Services related traffic from the subnet via the private backbone network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are configured to allow your VNet. See the next note for details.
+
+> [!NOTE]
+> If a VNet is not added as allowed in the Speech resource's networking properties, it will **not** have access to this Speech resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the VNet. Moreover, if the service endpoint is enabled but the VNet is not allowed, the Speech resource will also be inaccessible to this VNet through a public IP address, regardless of the Speech resource's other network security settings. The reason is that enabling the `Microsoft.CognitiveServices` endpoint routes **all** Cognitive Services related traffic through the private backbone network, and in this case the VNet must be explicitly allowed to access the resource. This is true not only for Speech but for all other Cognitive Services resources (see the previous note).
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the required Speech resource.
+1. In the **Resource Management** group on the left pane, select **Networking**.
+1. On the **Firewalls and virtual networks** tab, select **Selected Networks and Private Endpoints**.
+
+> [!NOTE]
+> To use VNet service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported. If your scenario requires the **All networks** option, consider using [private endpoints](speech-services-private-link.md), which support all three network security options.
+
+5. Select **Add existing virtual network** or **Add new virtual network**, fill in the required parameters, and select **Add** for an existing virtual network or **Create** for a new one. Note that if you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint is automatically enabled for the selected subnet(s). This operation can take up to 15 minutes. Also keep in mind the notes at the beginning of this section.
+
+### Enabling service endpoint for an existing VNet
+
+As described in the previous section, when you add a VNet as allowed for the Speech resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. However, if you later disable it, you need to re-enable it manually to restore service endpoint access to the Speech resource (as well as other Cognitive Services resources):
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the required VNet.
+1. In the **Settings** group on the left pane, select **Subnets**.
+1. Select the required subnet.
+1. A panel appears on the right. In the **Service Endpoints** section of this panel, select `Microsoft.CognitiveServices` from the **Services** drop-down list.
+1. Select **Save**.
+
+## Adjust existing applications and solutions
+
+A Speech resource with a custom domain enabled uses a different way to interact with the Speech service. This is true for a custom-domain-enabled Speech resource both with and without service endpoints configured. Information in this section applies to both scenarios.
+
+### Use a Speech resource with a custom domain name and allowed VNet(s) configured
+
+This is the case when **Selected Networks and Private Endpoints** option is selected in networking settings of the Speech resource **AND** at least one VNet is allowed. The usage is equivalent to [using a Speech resource with a custom domain name and a private endpoint enabled](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint).
++
+### Use a Speech resource with a custom domain name and without allowed VNet(s)
+
+This is the case when private endpoints are **not** enabled, and any of the following is true:
+
+- **Selected Networks and Private Endpoints** option is selected in networking settings of the Speech resource, but **no** allowed VNet(s) are configured
+- **All networks** option is selected in networking settings of the Speech resource
+
+The usage is equivalent to [using a Speech resource with a custom domain name and without private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
++++
+## Learn more
+
+* [Use Speech service through a private endpoint](speech-services-private-link.md)
+* [Azure VNet service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md)
+* [Azure Private Link](../../private-link/private-link-overview.md)
+* [Speech SDK](speech-sdk.md)
+* [Speech-to-text REST API](rest-speech-to-text.md)
+* [Text-to-speech REST API](rest-text-to-speech.md)
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-private-link.md
Title: How to use private endpoints with Speech Services
+ Title: How to use private endpoints with Speech service
-description: Learn how to use Speech Services with private endpoints provided by Azure Private Link
+description: Learn how to use Speech service with private endpoints provided by Azure Private Link
Previously updated : 02/04/2021 Last updated : 04/07/2021
-# Use Speech Services through a private endpoint
+# Use Speech service through a private endpoint
[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
This article then describes how to remove private endpoints later, but still use
+Setting up a Speech resource for the private endpoint scenarios requires performing the following tasks:
+1. [Create a custom domain name](#create-a-custom-domain-name)
+1. [Turn on private endpoints](#turn-on-private-endpoints)
+1. [Adjust existing applications and solutions](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint)
++
+This article describes the usage of the private endpoints with Speech service. Usage of the VNet service endpoints is described [here](speech-service-vnet-service-endpoint.md).
++ ## Create a custom domain name Private endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Use the following instructions to create one for your Speech resource. > [!WARNING]
-> A Speech resource that uses a custom domain name interacts with Speech Services in a different way.
-> You might have to adjust your application code to use a Speech resource with a private endpoint, and also to use a Speech resource with _no_ private endpoint.
-> Both scenarios may be needed because the switch to custom domain name is _not_ reversible.
+> A Speech resource with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
> > When you turn on a custom domain name, the operation is [not reversible](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource. >
A Speech resource with a custom domain name and a private endpoint turned on use
We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-Speech Services has REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario.
+Speech service has REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario.
Speech-to-text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario.
An example DNS name is:
`westeurope.stt.speech.microsoft.com`
-All possible values for the region (first element of the DNS name) are listed in [Speech service supported regions](regions.md). (See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.) The following table presents the possible values for the Speech Services offering (second element of the DNS name):
+All possible values for the region (first element of the DNS name) are listed in [Speech service supported regions](regions.md). (See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.) The following table presents the possible values for the Speech service offering (second element of the DNS name):
| DNS name value | Speech service offering | |-|-|
All possible values for the region (first element of the DNS name) are listed in
So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech-to-text endpoint in West Europe.
-Private-endpoint-enabled endpoints communicate with Speech Services via a special proxy. Because of that, *you must change the endpoint connection URLs*.
+Private-endpoint-enabled endpoints communicate with Speech service via a special proxy. Because of that, *you must change the endpoint connection URLs*.
A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`
After this modification, your application should work with the private-endpoint-
## Adjust an application to use a Speech resource without private endpoints
-In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech Services, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
+In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech service, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech Services REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
You need to roll back your application to the standard instantiation of `SpeechC
var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion); ``` + ## Pricing For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link). ## Learn more
+* [Use Speech service through a Virtual Network service endpoint](speech-service-vnet-service-endpoint.md)
* [Azure Private Link](../../private-link/private-link-overview.md)
+* [Azure VNet service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md)
* [Speech SDK](speech-sdk.md) * [Speech-to-text REST API](rest-speech-to-text.md) * [Text-to-speech REST API](rest-text-to-speech.md)
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Title: Speech Services Quotas and Limits
+ Title: Speech service Quotas and Limits
-description: Quick reference, detailed description, and best practices on Azure Cognitive Speech Services Quotas and Limits
+description: Quick reference, detailed description, and best practices on Azure Cognitive Speech service Quotas and Limits
Previously updated : 03/27/2021 Last updated : 04/07/2021
-# Speech Services Quotas and Limits
+# Speech service Quotas and Limits
-This article contains a quick reference and the **detailed description** of Azure Cognitive Speech Services Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). It also contains some best practices to avoid request throttling.
+This article contains a quick reference and the **detailed description** of Azure Cognitive Speech service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). It also contains some best practices to avoid request throttling.
## Quotas and Limits quick reference Jump to [Text-to-Speech Quotas and limits](#text-to-speech-quotas-and-limits-per-speech-resource)
The next sections describe specific cases of adjusting quotas.<br/>
Jump to [Text-to-Speech. Increasing Transcription Concurrent Request limit for Custom voice](#text-to-speech-increasing-transcription-concurrent-request-limit-for-custom-voice) ### Speech-to-text: increasing online transcription concurrent request limit
-By default the number of concurrent requests is limited to 20 per Speech resource (Base model) or per Custom endpoint (Custom model). For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+By default, the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 20 per Custom endpoint (Custom model). For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and are aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
-Increasing the Concurrent Request limit does **not** directly affect your costs. Speech Services uses "Pay only for what you use" model. The limit defines how high the Service may scale before it starts throttle your requests.
+>[!NOTE]
+> If you use custom models, be aware that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has a default concurrent request limit of 20, set at creation time. If you need to adjust it, you must make the adjustment for each custom endpoint **separately**. Also note that the concurrent request limit for the base model of a Speech resource has **no** effect on the custom endpoints associated with this resource.
++
+Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests.
Concurrent Request limits for **Base** and **Custom** models need to be adjusted **separately**.
Generally, it is highly recommended to test the workload and the workload patter
### Text-to-speech: increasing transcription concurrent request limit for Custom Voice By default the number of concurrent requests for a Custom Voice endpoint is limited to 10. For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
-Increasing the Concurrent Request limit does **not** directly affect your costs. Speech Services uses "Pay only for what you use" model. The limit defines how high the Service may scale before it starts throttle your requests.
+Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests.
Existing value of Concurrent Request limit parameter is **not** visible via Azure portal, Command-Line tools, or API requests. To verify the existing value, create an Azure Support Request.
cognitive-services Text Analytics How To Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
Before you use the Text Analytics API, you will need to create a Azure resource
3. Create the Text Analytics resource and go to the "keys and endpoint blade" on the left of the page. Copy the key to be used later when you call the APIs. You'll add this later as a value for the `Ocp-Apim-Subscription-Key` header.
+4. To check the number of text records that have been sent using your Text Analytics resource:
+
+ 1. Navigate to your Text Analytics resource in the Azure portal.
+ 2. Click **Metrics**, located under **Monitoring** in the left navigation menu.
+ 3. Select *Processed text records* in the dropdown box for **Metric**.
+
+A text record is 1000 characters.
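Once the key and endpoint are in hand, a request might look like the following hedged C# sketch; the endpoint, key, and v3.0 language-detection path are placeholder assumptions, and each submitted document counts toward the processed text records metric described above:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TextAnalyticsSample
{
    static async Task Main()
    {
        // Placeholder values -- substitute your own resource endpoint and key.
        string endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
        string key = "<your-subscription-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // A single short document -- one text record for billing and metrics purposes.
        var body = new StringContent(
            "{\"documents\":[{\"id\":\"1\",\"text\":\"Hello world. This is a test.\"}]}",
            Encoding.UTF8,
            "application/json");

        HttpResponseMessage response =
            await client.PostAsync($"{endpoint}/text/analytics/v3.0/languages", body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```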
+ ## Change your pricing tier If you have an existing Text Analytics resource using the S0 through S4 pricing tier, you should update it to use the Standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). The S0 through S4 pricing tiers will be retired. To update your resource's pricing:
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
The Azure Identity SDK reads values from three environment variables at runtime
> [!IMPORTANT] > After you set the environment variables, close and re-open your console window. If you are using Visual Studio or another development environment, you may need to restart it in order for it to register the new environment variables.
+Once these variables have been set, you should be able to use the DefaultAzureCredential object in your code to authenticate to the service client of your choice.
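For example, a minimal C# sketch, assuming the `Azure.Identity` and `Azure.Communication.Identity` packages and a placeholder resource endpoint:

```csharp
using System;
using Azure.Communication.Identity;
using Azure.Identity;

// DefaultAzureCredential reads the service principal values from the
// environment variables described above.
var credential = new DefaultAzureCredential();

// Placeholder endpoint -- use the endpoint of your Communication Services resource.
var client = new CommunicationIdentityClient(
    new Uri("https://<your-resource-name>.communication.azure.com"),
    credential);
```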
+ ## Next steps
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
Previously updated : 03/22/2021 Last updated : 04/07/2021 # Azure Cosmos DB service quotas
You can provision throughput at a container-level or a database-level in terms o
| | | | Maximum RUs per container ([dedicated throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-containers)) | 1,000,000 by default. You can increase it by [filing an Azure support ticket](create-support-request-quota-increase.md) | | Maximum RUs per database ([shared throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-containers)) | 1,000,000 by default. You can increase it by [filing an Azure support ticket](create-support-request-quota-increase.md) |
-| Maximum RUs per (logical) partition | 10,000 |
+| Maximum RUs per partition (logical & physical) | 10,000 |
| Maximum storage across all items per (logical) partition | 20 GB | | Maximum number of distinct (logical) partition keys | Unlimited | | Maximum storage per container | Unlimited |
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partitioning-overview.md
Previously updated : 03/19/2021 Last updated : 04/07/2021
This article explains the relationship between logical and physical partitions.
## Logical partitions
-A logical partition consists of a set of items that have the same partition key. For example, in a container that contains data about food nutrition, all items contain a `foodGroup` property. You can use `foodGroup` as the partition key for the container. Groups of items that have specific values for `foodGroup`, such as `Beef Products`, `Baked Products`, and `Sausages and Luncheon Meats`, form distinct logical partitions. You don't have to worry about deleting a logical partition when the underlying data is deleted.
+A logical partition consists of a set of items that have the same partition key. For example, in a container that contains data about food nutrition, all items contain a `foodGroup` property. You can use `foodGroup` as the partition key for the container. Groups of items that have specific values for `foodGroup`, such as `Beef Products`, `Baked Products`, and `Sausages and Luncheon Meats`, form distinct logical partitions.
-A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a [transaction with snapshot isolation](database-transactions-optimistic-concurrency.md). When new items are added to a container, new logical partitions are transparently created by the system.
+A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a [transaction with snapshot isolation](database-transactions-optimistic-concurrency.md). When new items are added to a container, new logical partitions are transparently created by the system. You don't have to worry about deleting a logical partition when the underlying data is deleted.
There is no limit to the number of logical partitions in your container. Each logical partition can store up to 20GB of data. Good partition key choices have a wide range of possible values. For example, in a container where all items contain a `foodGroup` property, the data within the `Beef Products` logical partition can grow up to 20 GB. [Selecting a partition key](#choose-partitionkey) with a wide range of possible values ensures that the container is able to scale.
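As a hedged illustration of how the partition key is declared when a container is created (using the .NET SDK; the account values, names, and throughput are placeholders):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class PartitionKeySample
{
    static async Task Main()
    {
        // Placeholder connection values.
        using var client = new CosmosClient("<account-endpoint>", "<account-key>");

        Database database = await client.CreateDatabaseIfNotExistsAsync("nutrition");

        // "/foodGroup" becomes the partition key; each distinct foodGroup value
        // forms its own logical partition (up to 20 GB of data each).
        Container container = await database.CreateContainerIfNotExistsAsync(
            id: "foods",
            partitionKeyPath: "/foodGroup",
            throughput: 400);
    }
}
```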
A container is scaled by distributing data and throughput across physical partit
The number of physical partitions in your container depends on the following:
-* The number of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second).
+* The amount of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second). The 10,000 RU/s limit for physical partitions implies that logical partitions also have a 10,000 RU/s limit, as each logical partition is only mapped to one physical partition.
+ * The total data storage (each individual physical partition can store up to 50GB data). > [!NOTE]
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance.md
This full utilization means you can estimate the overall throughput by measuring
* Destination data store * Network bandwidth in between the source and destination data stores
-The table below calculates the copy duration. The duration is based on data size and the network/data store bandwidth limit for your environment.
+The table below shows the calculation of data movement duration. The duration in each cell is calculated based on a given network and data store bandwidth and a given data payload size.
+
+> [!NOTE]
+> The durations provided below are meant to represent achievable performance in an end-to-end data integration solution implemented using ADF, by using one or more performance optimization techniques described in [Copy performance optimization features](#copy-performance-optimization-features), including using ForEach to partition and spawn off multiple concurrent copy activities. We recommend that you follow the steps laid out in [Performance tuning steps](#performance-tuning-steps) to optimize copy performance for your specific dataset and system configuration. You should use the numbers obtained in your performance tuning tests for production deployment planning, capacity planning, and billing projection.
&nbsp;
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-schema-and-type-mapping.md
This article describes how the Azure Data Factory copy activity perform schema m
### Default mapping
-By default, copy activity maps source data to sink **by column names** in case-sensitive manner. If sink doesn't exist, for example, writing to file(s), the source field names will be persisted as sink names. Such default mapping supports flexible schemas and schema drift from source to sink from execution to execution - all the data returned by source data store can be copied to sink.
+By default, copy activity maps source data to sink **by column names** in case-sensitive manner. If sink doesn't exist, for example, writing to file(s), the source field names will be persisted as sink names. If the sink already exists, it must contain all columns being copied from the source. Such default mapping supports flexible schemas and schema drift from source to sink from execution to execution - all the data returned by source data store can be copied to sink.
If your source is text file without header line, [explicit mapping](#explicit-mapping) is required as the source doesn't contain column names.
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-tutorials.md
Below is a list of tutorials to help explain and walk through a series of Data F
[Data flows inside managed VNet](tutorial-data-flow-private.md)
+[Best practices for lake data in ADLS Gen2](tutorial-data-flow-write-to-lake.md)
+ ## External data services [Azure Databricks notebook activity](transform-data-using-databricks-notebook.md)
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 03/26/2021 Last updated : 04/01/2021 # Data transformation expressions in mapping data flow
Concatenates a variable number of strings together with a separator. The first p
* ``isNull(concatWS(null, 'dataflow', 'is', 'awesome')) -> true`` * ``concatWS(' is ', 'dataflow', 'awesome') -> 'dataflow is awesome'`` ___
-### <code>contains</code>
-<code><b>contains(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => boolean</b></code><br/><br/>
-Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item.
-* ``contains([1, 2, 3, 4], #item == 3) -> true``
-* ``contains([1, 2, 3, 4], #item > 5) -> false``
-___
### <code>cos</code> <code><b>cos(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/> Calculates a cosine value.
Checks if the first parameter is null. If not null, the first parameter is retur
* ``iifNull('azure', 'data', 'factory') -> 'factory'`` * ``iifNull(null, 'data', 'factory') -> 'data'`` ___
-### <code>in</code>
-<code><b>in(<i>&lt;array of items&gt;</i> : array, <i>&lt;item to find&gt;</i> : any) => boolean</b></code><br/><br/>
-Checks if an item is in the array.
-* ``in([10, 20, 30], 10) -> true``
-* ``in(['good', 'kid'], 'bad') -> false``
-___
### <code>initCap</code> <code><b>initCap(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/> Converts the first letter of every word to uppercase. Words are identified as separated by whitespace.
Creates an array of items. All items should be of the same type. If no items are
* ``['Seattle', 'Washington'][1]`` * ``'Washington'`` ___
+### <code>contains</code>
+<code><b>contains(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => boolean</b></code><br/><br/>
+Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item.
+* ``contains([1, 2, 3, 4], #item == 3) -> true``
+* ``contains([1, 2, 3, 4], #item > 5) -> false``
+___
### <code>filter</code> <code><b>filter(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => array</b></code><br/><br/> Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item.
Find the first item from an array that match the condition. It takes a filter fu
) `` ___
+### <code>in</code>
+<code><b>in(<i>&lt;array of items&gt;</i> : array, <i>&lt;item to find&gt;</i> : any) => boolean</b></code><br/><br/>
+Checks if an item is in the array.
+* ``in([10, 20, 30], 10) -> true``
+* ``in(['good', 'kid'], 'bad') -> false``
+___
### <code>map</code> <code><b>map(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/> Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item.
data-factory Tutorial Data Flow Write To Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-write-to-lake.md
+
+ Title: Best practices for writing files to a data lake with data flows
+description: This tutorial provides best practices for writing files to a data lake with data flows
+++++ Last updated : 04/01/2021++
+# Best practices for writing files to a data lake with data flows
++
+If you're new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md).
+
+In this tutorial, you'll learn best practices that can be applied when writing files to ADLS Gen2 or Azure Blob Storage using data flows. You'll need access to an Azure Blob Storage account or Azure Data Lake Storage Gen2 account to read a Parquet file and then store the results in folders.
+
+## Prerequisites
+* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+* **Azure storage account**. You use ADLS Gen2 storage as the *source* and *sink* data stores. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md) for steps to create one.
+
+The steps in this tutorial assume that you have these prerequisites in place.
+
+## Create a data factory
+
+In this step, you create a data factory and open the Data Factory UX to create a pipeline in the data factory.
+
+1. Open **Microsoft Edge** or **Google Chrome**. Currently, Data Factory UI is supported only in the Microsoft Edge and Google Chrome web browsers.
+1. On the left menu, select **Create a resource** > **Integration** > **Data Factory**
+1. On the **New data factory** page, under **Name**, enter **ADFTutorialDataFactory**
+1. Select the Azure **subscription** in which you want to create the data factory.
+1. For **Resource Group**, take one of the following steps:
+
+ a. Select **Use existing**, and select an existing resource group from the drop-down list.
+
+ b. Select **Create new**, and enter the name of a resource group. To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+
+1. Under **Version**, select **V2**.
+1. Under **Location**, select a location for the data factory. Only locations that are supported are displayed in the drop-down list. Data stores (for example, Azure Storage and SQL Database) and computes (for example, Azure HDInsight) used by the data factory can be in other regions.
+1. Select **Create**.
+1. After the creation is finished, you see the notice in Notifications center. Select **Go to resource** to navigate to the Data factory page.
+1. Select **Author & Monitor** to launch the Data Factory UI in a separate tab.
+
+## Create a pipeline with a data flow activity
+
+In this step, you'll create a pipeline that contains a data flow activity.
+
+1. On the **Let's get started** page, select **Create pipeline**.
+
+ ![Create pipeline](./media/doc-common-process/get-started-page.png)
+
+1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline.
+1. In the factory top bar, slide the **Data Flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up, so we recommend turning on debug first if you plan to do data flow development. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
+
+ ![Data Flow Activity](media/tutorial-data-flow/dataflow1.png)
+1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas.
+
+ ![Screenshot that shows the pipeline canvas where you can drop the Data Flow activity.](media/tutorial-data-flow/activity1.png)
+1. In the **Adding Data Flow** pop-up, select **Create new Data Flow** and then name your data flow **DeltaLake**. Click Finish when done.
+
+ ![Screenshot that shows where you name your data flow when you create a new data flow.](media/tutorial-data-flow/activity2.png)
+
+## Build transformation logic in the data flow canvas
+
+You will take any source data (in this tutorial, we'll use a Parquet file source) and use a sink transformation to land the data in Parquet format using the most effective mechanisms for data lake ETL.
+
+![Final flow](media/data-flow/parts-final.png "Final flow")
+
+### Tutorial objectives
+
+1. Choose any of your source datasets in a new data flow
+1. Use data flows to effectively partition your sink dataset
+1. Land your partitioned data in ADLS Gen2 lake folders
+
+### Start from a blank data flow canvas
+
+First, let's set up the data flow environment for each of the mechanisms described below for landing data in ADLS Gen2.
+
+1. Click on the source transformation.
+1. Click the new button next to dataset in the bottom panel.
+1. Choose a dataset or create a new one. For this demo, we'll use a Parquet dataset called User Data.
+1. Add a Derived Column transformation. We'll use this as a way to set your desired folder names dynamically.
+1. Add a sink transformation.
+
+### Hierarchical folder output
+
+It is very common to use unique values in your data to create folder hierarchies that partition your data in the lake. This is an optimal way to organize and process data in the lake and in Spark (the compute engine behind data flows). However, organizing your output this way incurs a small performance cost; expect to see a small decrease in overall pipeline performance when using this mechanism in the sink.
+
+1. Go back to the data flow designer and edit the data flow created above. Click on the sink transformation.
+1. Click Optimize > Set partitioning > Key
+1. Pick the column(s) you wish to use to set your hierarchical folder structure.
+1. Note that the example below uses year and month as the columns for folder naming. The results will be folders of the form ```releaseyear=1990/month=8```.
+1. When accessing the data partitions in a data flow source, point to just the top-level folder above ```releaseyear``` and use a wildcard pattern for each subsequent folder, for example: ```**/**/*.parquet```.
+1. To manipulate the data values, or if you need to generate synthetic values for folder names, use the Derived Column transformation to create the values you wish to use in your folder names.
+
+![Key partitioning](media/data-flow/key-parts.png "Key partitioning")
+
+### Name folder as data values
+
+A slightly better-performing sink technique for lake data in ADLS Gen2, though it does not offer the same benefits as key/value partitioning, is ```Name folder as column data```. Whereas the hierarchical structure of key partitioning lets you process data slices more easily, this technique produces a flattened folder structure that can write data more quickly.
+
+1. Go back to the data flow designer and edit the data flow created above. Click on the sink transformation.
+1. Click Optimize > Set partitioning > Use current partitioning.
+1. Click Settings > Name folder as column data.
+1. Pick the column that you wish to use for generating folder names.
+1. To manipulate the data values, or if you need to generate synthetic values for folder names, use the Derived Column transformation to create the values you wish to use in your folder names.
+
+![Folder option](media/data-flow/folders.png "Folders")
+
+### Name file as data values
+
+The techniques described above are good approaches for creating folder categories in your data lake. The default file naming scheme used by those techniques is the Spark executor job ID. Sometimes you may wish to set the name of the output file in a data flow text sink. This technique is only suggested for use with small files, because merging partition files into a single output file is a long-running process.
+
+1. Go back to the data flow designer and edit the data flow created above. Click on the sink transformation.
+1. Click Optimize > Set partitioning > Single partition. It is this single partition requirement that creates a bottleneck in the execution process as files are merged. This option is only recommended for small files.
+1. Click Settings > Name file as column data.
+1. Pick the column that you wish to use for generating file names.
+1. To manipulate the data values, or if you need to generate synthetic values for file names, use the Derived Column transformation to create the values you wish to use in your file names.
+
+## Next steps
+
+Learn more about [data flow sinks](data-flow-sink.md).
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/troubleshoot-defender-micro-agent.md
Title: Defender IoT micro agent troubleshooting (Preview) description: Learn how to handle unexpected or unexplained errors. Previously updated : 1/24/2021 Last updated : 4/5/2021 # Defender IoT micro agent troubleshooting (Preview)
-In the event you have unexpected or unexplained errors, use the following troubleshooting methods to attempt to resolve your issues. You can also reach out to the Azure Defender for IoT product team for assistance as needed.  
+If an unexpected error occurs, you can use these troubleshooting methods in an attempt to resolve the issue. You can also reach out to the Azure Defender for IoT product team for assistance as needed.  
## Service status
If the service is listed as `inactive`, use the following command to start the s
systemctl start defender-iot-micro-agent.service ```
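To check the service state (for example, to confirm it's listed as `inactive` before starting it, or `active` afterwards), you can typically query systemd directly. This is a minimal sketch that assumes the standard unit name used above:

```bash
# Show the current state (active, inactive, or failed) and the most recent log lines
systemctl status defender-iot-micro-agent.service
```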
-You will know that the service is crashing if the process uptime is too short. To resolve this issue, you must review the logs.
+You will know that the service is crashing if the process uptime is less than 2 minutes. To resolve this issue, you must [review the logs](#review-the-logs).
-## Review logs
+## Validate micro agent root privileges
Use the following command to verify that the Defender IoT micro agent service is running with root privileges.
ps -aux | grep " defender-iot-micro-agent"
``` :::image type="content" source="media/troubleshooting/root-privileges.png" alt-text="Verify the Defender for IoT micro agent service is running with root privileges.":::
+## Review the logs
-To view the logs, use the following command: 
+To review the logs, use the following command: 
```azurecli sudo journalctl -u defender-iot-micro-agent | tail -n 200  ```
+### Quick log review
+
+If an issue occurs when the micro agent runs, you can run the micro agent in a temporary state to view the logs, using the following commands:
+
+```azurecli
+sudo systemctl stop defender-iot-micro-agent
+cd /var/defender_iot_micro_agent/
+sudo ./defender_iot_micro_agent
+```
+
+## Restart the service
+ To restart the service, use the following command: ```azurecli
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
To complete this tutorial, you need to:
- Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions. - Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases.
+ > [!IMPORTANT]
+ > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at the subscription level. To create the required roles, which you can assign as needed, run the following script:
+ >
+ > ```
+ >
+ > $readerActions = `
+ > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
+ > "Microsoft.DataMigration/*/read", `
+ > "Microsoft.Resources/subscriptions/resourceGroups/read"
+ >
+ > $writerActions = `
+ > "Microsoft.DataMigration/services/*/write", `
+ > "Microsoft.DataMigration/services/*/delete", `
+ > "Microsoft.DataMigration/services/*/action", `
+ > "Microsoft.Network/virtualNetworks/subnets/join/action", `
+ > "Microsoft.Network/virtualNetworks/write", `
+ > "Microsoft.Network/virtualNetworks/read", `
+ > "Microsoft.Resources/deployments/validate/action", `
+ > "Microsoft.Resources/deployments/*/read", `
+ > "Microsoft.Resources/deployments/*/write"
+ >
+ > $writerActions += $readerActions
+ >
+ > # TODO: replace with actual subscription IDs
+ > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
+ >
+ > function New-DmsReaderRole() {
+ > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
+ > $aRole.Name = "Azure Database Migration Reader"
+ > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
+ > $aRole.IsCustom = $true
+ > $aRole.Actions = $readerActions
+ > $aRole.NotActions = @()
+ >
+ > $aRole.AssignableScopes = $subScopes
+ > #Create the role
+ > New-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function New-DmsContributorRole() {
+ > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
+ > $aRole.Name = "Azure Database Migration Contributor"
+ > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
+ > $aRole.IsCustom = $true
+ > $aRole.Actions = $writerActions
+ > $aRole.NotActions = @()
+ >
+ > $aRole.AssignableScopes = $subScopes
+ > #Create the role
+ > New-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function Update-DmsReaderRole() {
+ > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
+ > $aRole.Actions = $readerActions
+ > $aRole.NotActions = @()
+ > Set-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function Update-DmsContributorRole() {
+ > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
+ > $aRole.Actions = $writerActions
+ > $aRole.NotActions = @()
+ > Set-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > # Invoke above functions
+ > New-DmsReaderRole
+ > New-DmsContributorRole
+ > Update-DmsReaderRole
+ > Update-DmsContributorRole
+ > ```
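Once the custom roles exist, they can be assigned to the account that will create the DMS instance. As a minimal sketch using the Azure CLI (the user principal name and subscription ID below are placeholders), an assignment at subscription scope might look like this:

```bash
# Assign the custom contributor role at subscription scope.
# Replace the assignee and subscription ID with your own values.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Database Migration Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```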
+ ## Assess your on-premises database Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Title: About ExpressRoute virtual network gateways - Azure| Microsoft Docs
description: Learn about virtual network gateways for ExpressRoute. This article includes information about gateway SKUs and types. + Previously updated : 04/05/2021 Last updated : 10/14/2019
The following table shows the gateway types and the estimated performances. This
[!INCLUDE [expressroute-table-aggthroughput](../../includes/expressroute-table-aggtput-include.md)] > [!IMPORTANT]
-> * Number of VMs in the virtual network also includes VMs in peered virtual networks that uses remote ExpressRoute gateway.
-> * Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
+> Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
+>
> ## <a name="gwsub"></a>Gateway subnet
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-introduction.md
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethern
![ExpressRoute connection overview](./media/expressroute-introduction/expressroute-connection-overview.png)
+> [!NOTE]
+> In the context of ExpressRoute, the term *Microsoft Edge* refers to the edge routers on the Microsoft side of the ExpressRoute circuit. This is the ExpressRoute circuit's point of entry into Microsoft's network.
+>
+ ## Key benefits * Layer 3 connectivity between your on-premises network and the Microsoft Cloud through a connectivity provider. Connectivity can be from an any-to-any (IPVPN) network, a point-to-point Ethernet connection, or through a virtual cross-connection via an Ethernet exchange.
Subscribe to the RSS feed and view the latest ExpressRoute feature updates on th
## Next steps * Ensure that all prerequisites are met. See [ExpressRoute prerequisites](expressroute-prerequisites.md). * Learn about [ExpressRoute connectivity models](expressroute-connectivity-models.md).
-* Find a service provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).
+* Find a service provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).
iot-dps Concepts Device Oem Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/concepts-device-oem-security-practices.md
At this point in the process, install the DPS client along with the ID scope and
> If you're using a software TPM, you can install it now. Extract the EK_pub at the same time. #### Step 4: Device is packaged and sent to the warehouse
-A device can sit in a warehouse for 6-12 months before being deployed.
+A device can sometimes sit in a warehouse for up to a year before being deployed and provisioned with DPS. If a device sits in a warehouse for a long time before deployment, customers who deploy the device might need to update the firmware, software, or expired credentials.
#### Step 5: Device is installed into the location After the device arrives at its final location, it goes through automated provisioning with DPS.
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/development-environment.md
For more information, guidance, and examples, see the following pages:
* [Continuous integration and continuous deployment to Azure IoT Edge](how-to-continuous-integration-continuous-deployment.md) * [Create a CI/CD pipeline for IoT Edge with Azure DevOps Starter](how-to-devops-starter.md)
-* [Azure IoT Edge Jenkins plugin](https://plugins.jenkins.io/azure-iot-edge)
* [IoT Edge DevOps GitHub repo](https://github.com/toolboc/IoTEdge-DevOps)
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Make sure that the user **iotedge** has read permissions for the directory holdi
1. Install the **root CA certificate** on this IoT Edge device. ```bash
- sudo cp <path>/<root ca certificate>.pem /usr/local/share/ca-certificates/<root ca certificate>.pem
+ sudo cp <path>/<root ca certificate>.pem /usr/local/share/ca-certificates/<root ca certificate>.pem.crt
``` 1. Update the certificate store.
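On Ubuntu and Debian-based systems, updating the certificate store is typically done with the following command (a sketch; your distribution may use a different tool):

```bash
# Rebuild the system CA bundle so the copied root CA certificate is trusted
sudo update-ca-certificates
```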
iot-edge How To Install Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge.md
If you want to install the most recent version of the security daemon, use the f
sudo apt-get install iotedge ```
-Or, if you want to install a specific version of the security daemon, specify the version from the apt list output. Also specify the same version for the **libiothsm-std** package, which otherwise would install its latest version. For example, the following command installs the most recent version of the 1.0.10 release:
+Or, if you want to install a specific version of the security daemon, specify the version from the apt list output. Also specify the same version for the **libiothsm-std** package, which otherwise would install its latest version. For example, the following command installs the most recent version of the 1.1 release:
```bash
- sudo apt-get install iotedge=1.0.10* libiothsm-std=1.0.10*
+ sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
``` If the version that you want to install isn't listed, follow the [Offline or specific version installation](#offline-or-specific-version-installation-optional) steps later in this article. That section shows you how to target any previous version of the IoT Edge security daemon, or release candidate versions.
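To see which daemon versions the configured apt feed offers before pinning one, a quick check (assuming the apt repository is already set up on the device) is:

```bash
# List all available versions of the IoT Edge security daemon package
apt list -a iotedge
```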
Using curl commands, you can target the component files directly from the IoT Ed
2. Use the copied link in the following command to install that version of the hsmlib: ```bash
- curl -L <libiothsm-std link> -o libiothsm-std.deb && sudo dpkg -i ./libiothsm-std.deb
+ curl -L <libiothsm-std link> -o libiothsm-std.deb && sudo apt-get install ./libiothsm-std.deb
``` 3. Find the **iotedge** file that matches your IoT Edge device's architecture. Right-click on the file link and copy the link address.
Using curl commands, you can target the component files directly from the IoT Ed
4. Use the copied link in the following command to install that version of the IoT Edge security daemon. ```bash
- curl -L <iotedge link> -o iotedge.deb && sudo dpkg -i ./iotedge.deb
+ curl -L <iotedge link> -o iotedge.deb && sudo apt-get install ./iotedge.deb
``` <!-- end 1.1 -->
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
If you want to update to the most recent version of the security daemon, use the
sudo apt-get install iotedge ```
-If you want to update to a specific version of the security daemon, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.0.9 release:
+If you want to update to a specific version of the security daemon, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.1 release:
```bash
- sudo apt-get install iotedge=1.0.9-1 libiothsm-std=1.0.9-1
+ sudo apt-get install iotedge=1.1.1 libiothsm-std=1.1.1
``` If the version that you want to install is not available through apt-get, you can use curl to target any version from the [IoT Edge releases](https://github.com/Azure/azure-iotedge/releases) repository. For whichever version you want to install, locate the appropriate **libiothsm-std** and **iotedge** files for your device. For each file, right-click the file link and copy the link address. Use the link address to install the specific versions of those components: ```bash
-curl -L <libiothsm-std link> -o libiothsm-std.deb && sudo dpkg -i ./libiothsm-std.deb
-curl -L <iotedge link> -o iotedge.deb && sudo dpkg -i ./iotedge.deb
+curl -L <libiothsm-std link> -o libiothsm-std.deb && sudo apt-get install ./libiothsm-std.deb
+curl -L <iotedge link> -o iotedge.deb && sudo apt-get install ./iotedge.deb
``` <!-- end 1.1 --> :::moniker-end
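After installing or updating, you can confirm which daemon version ended up on the device; a typical check is:

```bash
# Print the installed IoT Edge security daemon version
iotedge version
```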
Currently, there is not support for IoT Edge version 1.2 running on Windows devi
## Update the runtime containers
-The way that you update the IoT Edge agent and IoT Edge hub containers depends on whether you use rolling tags (like 1.0) or specific tags (like 1.0.7) in your deployment.
+The way that you update the IoT Edge agent and IoT Edge hub containers depends on whether you use rolling tags (like 1.1) or specific tags (like 1.1.1) in your deployment.
Check the version of the IoT Edge agent and IoT Edge hub modules currently on your device using the commands `iotedge logs edgeAgent` or `iotedge logs edgeHub`.
The IoT Edge agent and IoT Edge hub images are tagged with the IoT Edge version
### Update a rolling tag image
-If you use rolling tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.0**) then you need to force the container runtime on your device to pull the latest version of the image.
+If you use rolling tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.1**) then you need to force the container runtime on your device to pull the latest version of the image.
Delete the local version of the image from your IoT Edge device. On Windows machines, uninstalling the security daemon also removes the runtime images, so you don't need to take this step again. ```bash
-docker rmi mcr.microsoft.com/azureiotedge-hub:1.0
-docker rmi mcr.microsoft.com/azureiotedge-agent:1.0
+docker rmi mcr.microsoft.com/azureiotedge-hub:1.1
+docker rmi mcr.microsoft.com/azureiotedge-agent:1.1
``` You may need to use the force `-f` flag to remove the images.
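For example, a forced removal of the rolling-tag images might look like this (a sketch; adjust the tags to match your deployment):

```bash
# Force-remove the cached runtime images so the next pull fetches the latest 1.1 build
docker rmi -f mcr.microsoft.com/azureiotedge-hub:1.1
docker rmi -f mcr.microsoft.com/azureiotedge-agent:1.1
```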
The IoT Edge service will pull the latest versions of the runtime images and aut
### Update a specific tag image
-If you use specific tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.0.8**) then all you need to do is update the tag in your deployment manifest and apply the changes to your device.
+If you use specific tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.1.1**) then all you need to do is update the tag in your deployment manifest and apply the changes to your device.
1. In the IoT Hub in the Azure portal, select your IoT Edge device, and select **Set Modules**.
Now that the IoT Edge service running on your devices has been updated, follow t
Azure IoT Edge regularly releases new versions of the IoT Edge service. Before each stable release, there is one or more release candidate (RC) versions. RC versions include all the planned features for the release, but are still going through testing and validation. If you want to test a new feature early, you can install an RC version and provide feedback through GitHub.
-Release candidate versions follow the same numbering convention of releases, but have **-rc** plus an incremental number appended to the end. You can see the release candidates in the same list of [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases) as the stable versions. For example, find **1.0.9-rc5** and **1.0.9-rc6**, two of the release candidates that came before **1.0.9**. You can also see that RC versions are marked with **pre-release** labels.
+Release candidate versions follow the same numbering convention of releases, but have **-rc** plus an incremental number appended to the end. You can see the release candidates in the same list of [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases) as the stable versions. For example, find **1.2.0-rc4**, one of the release candidates released before **1.2.0**. You can also see that RC versions are marked with **pre-release** labels.
-The IoT Edge agent and hub modules have RC versions that are tagged with the same convention. For example, **mcr.microsoft.com/azureiotedge-hub:1.0.9-rc6**.
+The IoT Edge agent and hub modules have RC versions that are tagged with the same convention. For example, **mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4**.
As previews, release candidate versions aren't included as the latest version that the regular installers target. Instead, you need to manually target the assets for the RC version that you want to test. For the most part, installing or updating to an RC version is the same as targeting any other specific version of IoT Edge.
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-certs.md
IoT Edge certificates are used by the modules and downstream IoT devices to veri
>[!NOTE] >This article talks about the certificates that are used to secure connections between the different components on an IoT Edge device or between an IoT Edge device and any leaf devices. You may also use certificates to authenticate your IoT Edge device to IoT Hub. Those authentication certificates are different, and are not discussed in this article. For more information about authenticating your device with certificates, see [Create and provision an IoT Edge device using X.509 certificates](how-to-auto-provision-x509-certs.md).
-This article explains how IoT Edge certificates can work in production, development, and test scenarios. While the scripts are different (PowerShell vs. bash), the concepts are the same between Linux and Windows.
+This article explains how IoT Edge certificates can work in production, development, and test scenarios.
## IoT Edge certificates
iot-edge Iot Edge Security Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-security-manager.md
Secure silicon is necessary to anchor trust inside the IoT Edge device hardware.
## IoT Edge security manager integration and maintenance
-The IoT Edge security manager aims to identify and isolate the components that defend the security and integrity of the Azure IoT Edge platform for custom hardening. Third parties, like device makers, should make use of custom security features available with their device hardware. See next steps section for links that demonstrate how to harden the Azure IoT security manager with the Trusted Platform Module (TPM) on Linux and Windows platforms. These examples use software or virtual TPMs but directly apply to using discrete TPM devices.
+The IoT Edge security manager aims to identify and isolate the components that defend the security and integrity of the Azure IoT Edge platform for custom hardening. Third parties, like device makers, should make use of custom security features available with their device hardware.
-## Next steps
-
-Read the blog on [Securing the intelligent edge](https://azure.microsoft.com/blog/securing-the-intelligent-edge/).
+Learn how to harden the Azure IoT security manager with the Trusted Platform Module (TPM) using software or virtual TPMs:
Create and provision an [IoT Edge device with a virtual TPM on a Linux virtual machine](how-to-auto-provision-simulated-device-linux.md).
-Create and provision an [IoT Edge device with a simulated TPM on Windows](how-to-auto-provision-simulated-device-windows.md).
+<!-- 1.1 -->
+Create and provision an [IoT Edge device with a simulated TPM on Windows](how-to-auto-provision-simulated-device-windows.md).
+
+## Next steps
+
+Read the blog on [Securing the intelligent edge](https://azure.microsoft.com/blog/securing-the-intelligent-edge/).
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-development.md
IoT Edge supports multiple operating systems, device architectures, and developm
### Linux
-For all languages in the following table, IoT Edge supports development for AMD64 and ARM32 Linux devices.
+For all languages in the following table, IoT Edge supports development for AMD64 and ARM32 Linux containers.
| Development language | Development tools | | -- | -- |
For all languages in the following table, IoT Edge supports development for AMD6
| Python | Visual Studio Code | >[!NOTE]
->Develop and debugging support for ARM64 Linux devices is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Develop and debug ARM64 IoT Edge modules in Visual Studio Code (preview)](https://devblogs.microsoft.com/iotdev/develop-and-debug-arm64-iot-edge-modules-in-visual-studio-code-preview).
+>Development and debugging support for ARM64 Linux containers is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Develop and debug ARM64 IoT Edge modules in Visual Studio Code (preview)](https://devblogs.microsoft.com/iotdev/develop-and-debug-arm64-iot-edge-modules-in-visual-studio-code-preview).
### Windows
-For all languages in the following table, IoT Edge supports development for AMD64 Windows devices.
+<!-- 1.1 -->
+For all languages in the following table, IoT Edge supports development for AMD64 Windows containers.
| Development language | Development tools | | -- | -- | | C | Visual Studio 2017/2019 | | C# | Visual Studio Code (no debugging capabilities)<br>Visual Studio 2017/2019 |
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported.
+
+For information about developing with Windows containers, refer to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+
+<!-- end 1.2 -->
## Next steps
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/production-checklist.md
To authenticate using a service principal, provide the service principal ID and
### Use tags to manage versions
-A tag is a docker concept that you can use to distinguish between versions of docker containers. Tags are suffixes like **1.0** that go on the end of a container repository. For example, **mcr.microsoft.com/azureiotedge-agent:1.0**. Tags are mutable and can be changed to point to another container at any time, so your team should agree on a convention to follow as you update your module images moving forward.
+A tag is a docker concept that you can use to distinguish between versions of docker containers. Tags are suffixes like **1.1** that go on the end of a container repository. For example, **mcr.microsoft.com/azureiotedge-agent:1.1**. Tags are mutable and can be changed to point to another container at any time, so your team should agree on a convention to follow as you update your module images moving forward.
Tags also help you to enforce updates on your IoT Edge devices. When you push an updated version of a module to your container registry, increment the tag. Then, push a new deployment to your devices with the tag incremented. The container engine will recognize the incremented tag as a new version and will pull the latest module version down to your device.
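As a sketch of that workflow (the registry and module names below are placeholders), incrementing the tag and pushing the updated image might look like this:

```bash
# Build the updated module image with an incremented tag, then push it to your registry
docker build -t myregistry.azurecr.io/filtermodule:1.1.1 .
docker push myregistry.azurecr.io/filtermodule:1.1.1
```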
If your devices are going to be deployed on a network that uses a proxy server,
On Linux, the IoT Edge daemon uses journals as the default logging driver. You can use the command-line tool `journalctl` to query the daemon logs.
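For example, a common way to pull the full daemon log (assuming the version 1.1 service name `iotedge`) is:

```bash
# Query the IoT Edge daemon logs without truncating or paging the output
sudo journalctl -u iotedge --no-pager --no-full
```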
+<!-- 1.1 -->
+On Windows, the IoT Edge daemon uses PowerShell diagnostics. Use `Get-IoTEdgeLog` to query logs from the daemon. IoT Edge modules use the JSON driver for logging, which is the default.
+
+```powershell
+. {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Get-IoTEdgeLog
+```
+
+<!-- end 1.1 -->
+ <!--1.2--> :::moniker range=">=iotedge-2020-11"
Starting with version 1.2, IoT Edge relies on multiple daemons. While each daemo
:::moniker-end
-On Windows, the IoT Edge daemon uses PowerShell diagnostics. Use `Get-IoTEdgeLog` to query logs from the daemon. IoT Edge modules use the JSON driver for logging, which is the default.
-
-```powershell
-. {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Get-IoTEdgeLog
-```
- When you're testing an IoT Edge deployment, you can usually access your devices to retrieve logs and troubleshoot. In a deployment scenario, you may not have that option. Consider how you're going to gather information about your devices in production. One option is to use a logging module that collects information from the other modules and sends it to the cloud. One example of a logging module is [logspout-loganalytics](https://github.com/veyalla/logspout-loganalytics), or you can design your own. ### Place limits on log size
You can limit the size of all container logfiles in the container engine log opt
} ```
-Add (or append) this information to a file named `daemon.json` and place it the right location for your device platform.
+Add (or append) this information to a file named `daemon.json` and place it in the following location:
+<!-- 1.1 -->
| Platform | Location | | -- | -- | | Linux | `/etc/docker/` | | Windows | `C:\ProgramData\iotedge-moby\config\` |
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+* `/etc/docker/`
+
+<!-- end 1.2 -->
The container engine must be restarted for the changes to take effect.
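On most Linux installations the Moby-based container engine runs as a systemd service, so the restart is typically the following (a sketch; the service name may differ on your device):

```bash
# Restart the container engine so the new daemon.json log options take effect
sudo systemctl restart docker
```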
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
# Azure IoT Edge supported systems This article provides details about which systems and components are supported by IoT Edge, whether officially or in preview.
Azure IoT Edge runs on most operating systems that can run containers; however,
* Microsoft has done informal testing on the platforms or knows of a partner successfully running Azure IoT Edge on the platform * Installation packages for other platforms may work on these platforms
-The family of the host OS must always match the family of the guest OS used inside a module's container. In other words, you can only use Linux containers on Linux and Windows containers on Windows. When using Windows, only process isolated containers are supported, not Hyper-V isolated containers.
+The family of the host OS must always match the family of the guest OS used inside a module's container.
+
+<!-- 1.1 -->
+In other words, you can only use Linux containers on Linux and Windows containers on Windows. When using Windows containers, only process isolated containers are supported, not Hyper-V isolated containers.
IoT Edge for Linux on Windows uses IoT Edge in a Linux virtual machine running on a Windows host. In this way, you can run Linux modules on a Windows device.
+<!-- end 1.1 -->
### Tier 1 The systems listed in the following tables are supported by Microsoft, either generally available or in public preview, and are tested with each new release.
+<!-- 1.1 -->
Azure IoT Edge supports modules built as either Linux or Windows containers. Linux containers can be deployed to Linux devices or deployed to Windows devices using IoT Edge for Linux on Windows. Windows containers can only be deployed to Windows devices.
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+Azure IoT Edge version 1.2 only supports modules built as Linux containers.
+
+Currently, there is no supported way to run IoT Edge version 1.2 on Windows devices. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices, but currently only runs IoT Edge 1.1. For more information, refer to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+
+<!-- end 1.2 -->
#### Linux containers
+<!-- 1.1 -->
Modules built as Linux containers can be deployed to either Linux or Windows devices. For Linux devices, the IoT Edge runtime is installed directly on the host device. For Windows devices, a Linux virtual machine prebuilt with the IoT Edge runtime runs on the host device. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is currently in public preview, but is the recommended way to run IoT Edge on Windows devices.
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Windows Server 2019 | Public preview | | | All Windows operating systems must be version 1809 (build 17763) or later.
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+| Operating System | AMD64 | ARM32v7 | ARM64 |
+| - | -- | - | -- |
+| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/tutorial-c-module/green-check.png) | |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/tutorial-c-module/green-check.png) | | Public preview |
+
+<!-- end 1.2 -->
>[!NOTE] >Ubuntu Server 16.04 support ended with the release of IoT Edge version 1.1. #### Windows containers
+<!-- 1.1 -->
>[!IMPORTANT] >IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers will not be supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
All Windows operating systems must be version 1809 (build 17763). The specific b
>[!NOTE] >Windows 10 IoT Core support ended with the release of IoT Edge version 1.1.
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported.
+
+For information about supported operating systems for Windows containers, refer to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+
+<!-- end 1.2 -->
### Tier 2
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see
Azure IoT Edge can be run in virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. The family of the host VM OS must match the family of the guest OS used inside a module's container. This requirement is the same as when Azure IoT Edge is run directly on a device. Azure IoT Edge is agnostic of the underlying virtualization technology and works in VMs powered by platforms like Hyper-V and vSphere. <br>+
+<!-- 1.1 -->
+
+<center>
+
+![Azure IoT Edge in a VM](./media/support/edge-on-vm-with-windows.png)
+
+</center>
++
+<!-- 1.2 -->
+ <center> ![Azure IoT Edge in a VM](./media/support/edge-on-vm.png)+ </center> + ## Minimum system requirements Azure IoT Edge runs great on devices as small as a Raspberry Pi3 to server grade hardware. Choosing the right hardware for your scenario depends on the workloads that you want to run. Making the final device decision can be complicated; however, you can easily start prototyping a solution on traditional laptops or desktops.
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
On Linux:
```bash [Service]
- Environment=IOTEDGE_LOG=edgelet=debug
+ Environment=IOTEDGE_LOG=debug
``` 3. Restart the IoT Edge security daemon:
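A typical restart sequence on Linux, assuming the systemd service name `iotedge`, is:

```bash
# Reload systemd so the edited unit file is picked up, then restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart iotedge
```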
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
Once you are satisfied your configurations are correct on each device, you are r
## Deploy modules to the top layer device
-Modules serve to complete the deployment and the IoT Edge runtime to your devices and further define the structure of your hierarchy. The IoT Edge API Proxy module securely routs HTTP traffic over a single port from your lower layer devices. The Docker Registry module allows for a repository of Docker images that your lower layer devices can access by routing image pulls to the top layer device.
+Modules complete the deployment of the IoT Edge runtime to your devices and further define the structure of your hierarchy. The IoT Edge API Proxy module securely routes HTTP traffic over a single port from your lower layer devices. The Docker Registry module allows for a repository of Docker images that your lower layer devices can access by routing image pulls to the top layer device.
To deploy modules to your top layer device, you can use the Azure portal or Azure CLI.
In the [Azure portal](https://ms.portal.azure.com/):
-If you completed the above steps correctly, your **top layer device** should report the four modules: the IoT Edge API Proxy Module, the Docker Container Registry module, and the system modules, as **Specified in Deployment**. It may take a few minutes for the device to receive its new deployment and start the modules. Refresh the page until you see the temperature sensor module listed as **Reported by Device**. Once the modules are reported by the device, you are ready to continue.
+If you completed the above steps correctly, your **top layer device** should report the four modules: the IoT Edge API Proxy Module, the Docker Container Registry module, and the system modules, as **Specified in Deployment**. It may take a few minutes for the device to receive its new deployment and start the modules. Refresh the page until you see the IoTEdgeAPIProxy and registry modules listed as **Reported by Device**. Once the modules are reported by the device, you are ready to continue.
## Deploy modules to the lower layer device
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/create-certificate.md
Note that when an order is placed with the issuer provider, it may honor or over
## See Also
+ - How-to guides to create certificates in Key Vault using the [Azure portal](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal), [Azure CLI](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-cli), or [Azure PowerShell](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-powershell)
+ - [Monitor and manage certificate creation](create-certificate-scenarios.md)
key-vault Tutorial Rotate Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/tutorial-rotate-certificates.md
Create a certificate or import a certificate into the key vault (see [Steps to c
## Update certificate lifecycle attributes
-In Azure Key Vault, you can update a certificate's lifecycle attributes both before and after the time of certificate creation.
+In Azure Key Vault, you can update a certificate's lifecycle attributes either at the time of certificate creation or afterward.
A certificate created in Key Vault can be:
Key Vault auto-rotates certificates through established partnerships with CAs. B
| Automatically renew at a given time| Email all contacts at a given time | |--|| |Selecting this option will *turn on* autorotation. | Selecting this option will *not* auto-rotate but will only alert the contacts.|-
+ You can learn about [setting up an email contact here](https://docs.microsoft.com/azure/key-vault/certificates/overview-renew-certificate#get-notified-about-certificate-expiration).
1. Select **Create**. ![Certificate lifecycle](../media/certificates/tutorial-rotate-cert/create-cert-lifecycle.png)
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/network-security.md
This section will cover the different ways that the Azure Key Vault firewall can
### Key Vault Firewall Disabled (Default)
-By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts to secrets, keys, and certificates stored in key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail see the key vault authentication fundamentals document [here](./authentication-fundamentals.md).
+By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts access to secrets, keys, and certificates stored in the key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail, see the key vault authentication fundamentals document [here](./authentication-fundamentals.md). For more information, see [Access Azure Key Vault behind a firewall](./access-behind-firewall.md).
### Key Vault Firewall Enabled (Trusted Services Only)
-When you enable the Key Vault Firewall, you will be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps is not on the trusted services list. **This does not imply that services that do not appear on the trusted services list not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
+When you enable the Key Vault Firewall, you will be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps is not on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted services list doesn't mean it is allowed for all scenarios.
To determine if a service you are trying to use is on the trusted service list, please see the following document [here](./overview-vnet-service-endpoints.md#trusted-services).
+For a how-to guide, follow the instructions for the [Azure portal, Azure CLI, and PowerShell](https://docs.microsoft.com/azure/key-vault/general/network-security#use-the-azure-portal).
### Key Vault Firewall Enabled (IPv4 Addresses and Ranges - Static IPs)
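For instance, a minimal Azure CLI sketch for allowing a single static public IPv4 address through the firewall (the vault name and IP address below are placeholders) is:

```bash
# Add a firewall rule that allows one public IPv4 address to reach the vault
az keyvault network-rule add --name contoso-vault --ip-address 203.0.113.10/32
```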
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-train-model-git-integration.md
Previously updated : 11/16/2020 Last updated : 04/08/2021 # Git integration for Azure Machine Learning
When submitting a job to Azure Machine Learning, if source files are stored in a
Since Azure Machine Learning tracks information from a local git repo, it isn't tied to any specific central repository. Your repository can be cloned from GitHub, GitLab, Bitbucket, Azure DevOps, or any other git-compatible service.
+> [!TIP]
+> Use Visual Studio Code to interact with Git through a graphical user interface. To connect to an Azure Machine Learning remote compute instance using Visual Studio Code, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code (preview)](how-to-set-up-vs-code-remote.md)
+>
+> For more information on Visual Studio Code version control features, see [Using Version Control in VS Code](https://code.visualstudio.com/docs/editor/versioncontrol) and [Working with GitHub in VS Code](https://code.visualstudio.com/docs/editor/github).
+ ## Clone Git repositories into your workspace file system Azure Machine Learning provides a shared file system for all users in the workspace. To clone a Git repository into this file share, we recommend that you create a compute instance & [open a terminal](how-to-access-terminal.md).
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-environment.md
Previously updated : 11/16/2020 Last updated : 03/22/2021
To use Visual Studio Code for development:
1. Install [Visual Studio Code](https://code.visualstudio.com/Download). 1. Install the [Azure Machine Learning Visual Studio Code extension](tutorial-setup-vscode-extension.md) (preview).
-Once you have the Visual Studio Code extension installed, you can manage your [Azure Machine Learning resources](how-to-manage-resources-vscode.md), [run and debug experiments](how-to-debug-visual-studio-code.md), and [deploy trained models](tutorial-train-deploy-image-classification-model-vscode.md).
+Once you have the Visual Studio Code extension installed, use it to:
+
+* [Manage your Azure Machine Learning resources](how-to-manage-resources-vscode.md)
+* [Connect to an Azure Machine Learning compute instance](how-to-set-up-vs-code-remote.md)
+* [Run and debug experiments](how-to-debug-visual-studio-code.md)
+* [Deploy trained models](tutorial-train-deploy-image-classification-model-vscode.md).
## <a id="compute-instance"></a>Azure Machine Learning compute instance
To learn more about compute instances, including how to install packages, see [C
In addition to a Jupyter Notebook server and JupyterLab, you can use compute instances in the [integrated notebook feature inside of Azure Machine Learning studio](how-to-run-jupyter-notebooks.md).
-You can also use the Azure Machine Learning Visual Studio Code extension to [configure an Azure Machine Learning compute instance as a remote Jupyter Notebook server](how-to-set-up-vs-code-remote.md#configure-compute-instance-as-remote-notebook-server).
+You can also use the Azure Machine Learning Visual Studio Code extension to [connect to a remote compute instance using VS Code](how-to-set-up-vs-code-remote.md).
## <a id="dsvm"></a>Data Science Virtual Machine
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
After you create a datastore, create a dataset to interact with your data. Datas
There are two types of datasets, FileDataset and TabularDataset. [FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas,
-[TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format.
+[TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
The following steps and animation show how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
Use your datasets in your machine learning experiments for training ML models. [
* [Train a model](how-to-set-up-training-targets.md).
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure
. ### TabularDataset
-A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a `TabularDataset` object from .csv, .tsv, .parquet, .jsonl files, and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-).
+A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a `TabularDataset` object from .csv, .tsv, [.parquet](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-), [.jsonl files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none--invalid-lines--errorencoding--utf8--), and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-).
With TabularDatasets, you can specify a time stamp from a column in the data or from wherever the path pattern data is stored to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
To reuse and share datasets across experiment in your workspace, [register your
### Create a TabularDataset
-Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. If you're reading from multiple files, results will be aggregated into one tabular representation.
+Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. To read in files from .parquet format, use the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) method. If you're reading from multiple files, results will be aggregated into one tabular representation.
+
+See the [TabularDatasetFactory reference documentation](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) for information about supported file formats, as well as syntax and design patterns.
If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_delimited_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to use [datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#secure-datastores-and-datasets).
machine-learning How To Set Up Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-vs-code-remote.md
Previously updated : 11/16/2020
-#Customer intent: As a data scientist, I want to connect to an Azure Machine Learning compute instance in Visual Studio Code to access my resources and run my code..
Last updated : 04/08/2021
+# As a data scientist, I want to connect to an Azure Machine Learning compute instance in Visual Studio Code to access my resources and run my code.
# Connect to an Azure Machine Learning compute instance in Visual Studio Code (preview)
An [Azure Machine Learning Compute Instance](concept-compute-instance.md) is a f
There are two ways you can connect to a compute instance from Visual Studio Code:
+* Remote compute instance. This option provides you with a full-featured development environment for building your machine learning projects.
* Remote Jupyter Notebook server. This option allows you to set a compute instance as a remote Jupyter Notebook server.
-* [Visual Studio Code remote development](https://code.visualstudio.com/docs/remote/remote-overview). Visual Studio Code remote development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.
-## Configure compute instance as remote notebook server
+## Configure a remote compute instance
-In order to configure a compute instance as a remote Jupyter Notebook server you'll need a few prerequisites:
+To configure a remote compute instance for development, you'll need a few prerequisites.
* Azure Machine Learning Visual Studio Code extension. For more information, see the [Azure Machine Learning Visual Studio Code Extension setup guide](tutorial-setup-vscode-extension.md). * Azure Machine Learning workspace. [Use the Azure Machine Learning Visual Studio Code extension to create a new workspace](how-to-manage-resources-vscode.md#create-a-workspace) if you don't already have one.
+* Azure Machine Learning compute instance. [Use the Azure Machine Learning Visual Studio Code extension to create a new compute instance](how-to-manage-resources-vscode.md#create-compute-instance) if you don't have one.
-To connect to a compute instance:
-
-1. Open a Jupyter Notebook in Visual Studio Code.
-1. When the integrated notebook experience loads, select **Jupyter Server**.
-
- > [!div class="mx-imgBorder"]
- > ![Launch Azure Machine Learning remote Jupyter Notebook server dropdown](media/how-to-set-up-vs-code-remote/launch-server-selection-dropdown.png)
+To connect to your remote compute instance:
- Alternatively, you also use the command palette:
+# [VS Code](#tab/extension)
- 1. Open the command palette by selecting **View > Command Palette** from the menu bar.
- 1. Enter into the text box `Azure ML: Connect to Compute instance Jupyter server`.
-
-1. Choose `Azure ML Compute Instances` from the list of Jupyter server options.
-1. Select your subscription from the list of subscriptions. If you have have previously configured your default Azure Machine Learning workspace, this step is skipped.
-1. Select your workspace.
-1. Select your compute instance from the list. If you don't have one, select **Create new Azure ML Compute Instance** and follow the prompts to create one.
-1. For the changes to take effect, you have to reload Visual Studio Code.
-1. Open a Jupyter Notebook and run a cell.
-
-> [!IMPORTANT]
-> You **MUST** run a cell in order to establish the connection.
-
-At this point, you can continue to run cells in your Jupyter Notebook.
-
-> [!TIP]
-> You can also work with Python script files (.py) containing Jupyter-like code cells. For more information, see the [Visual Studio Code Python interactive documentation](https://code.visualstudio.com/docs/python/jupyter-support-py).
+### Azure Machine Learning Extension
-## Configure compute instance remote development
+1. In VS Code, launch the Azure Machine Learning extension.
+1. Expand the **Compute instances** node in your extension.
+1. Right-click the compute instance you want to connect to and select **Connect to Compute Instance**.
-For a full-featured remote development experience, you'll need a few prerequisites:
-* [Visual Studio Code Remote SSH extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh).
-* SSH-enabled compute instance. For more information, [see the Create a compute instance guide](how-to-create-manage-compute-instance.md).
+### Command Palette
-> [!NOTE]
-> On Windows platforms, you must [install an OpenSSH compatible SSH client](https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client) if one is not already present. PuTTY is not supported on Windows since the ssh command must be in the path.
+1. In VS Code, open the command palette by selecting **View > Command Palette**.
+1. Enter into the text box **Azure ML: Connect to Compute Instance**.
+1. Select your subscription.
+1. Select your workspace.
+1. Select your compute instance or create a new one.
-### Get the IP and SSH port for your compute instance
+# [Studio](#tab/studio)
-1. Go to the Azure Machine Learning studio at https://ml.azure.com/.
-2. Select your [workspace](concept-workspace.md).
-1. Click the **Compute Instances** tab.
-1. In the **Application URI** column, click the **SSH** link of the compute instance you want to use as a remote compute.
-1. In the dialog, take note of the IP Address and SSH port.
-1. Save your private key to the ~/.ssh/ directory on your local computer; for instance, open an editor for a new file and paste the key in:
+Navigate to [ml.azure.com](https://ml.azure.com).
- **Linux**:
+> [!IMPORTANT]
+> In order to connect to your remote compute instance from Visual Studio Code, make sure that the account you're logged into in Azure Machine Learning studio is the same one you use in Visual Studio Code.
- ```sh
- vi ~/.ssh/id_azmlcitest_rsa
- ```
+### Compute
- **Windows**:
+1. Select the **Compute** tab
+1. In the *Application URI* column, select **VS Code** for the compute instance you want to connect to.
- ```cmd
- notepad C:\Users\<username>\.ssh\id_azmlcitest_rsa
- ```
- The private key will look somewhat like this:
+### Notebook
- ```text
- --BEGIN RSA PRIVATE KEY--
+1. Select the **Notebook** tab
+1. In the *Notebook* tab, select the file you want to edit.
+1. Select **Editors > Edit in VS Code (preview)**.
- MIIEpAIBAAKCAQEAr99EPm0P4CaTPT2KtBt+kpN3rmsNNE5dS0vmGWxIXq4vAWXD
- .....
- ewMtLnDgXWYJo0IyQ91ynOdxbFoVOuuGNdDoBykUZPQfeHDONy2Raw==
- --END RSA PRIVATE KEY--
- ```
+
-1. Change permissions on file to make sure only you can read the file.
+A new window launches for your remote compute instance. While the connection to the remote compute instance is being established, the following tasks take place:
- ```sh
- chmod 600 ~/.ssh/id_azmlcitest_rsa
- ```
+1. Authorization. Some checks are performed to make sure the user attempting to make a connection is authorized to use the compute instance.
+1. VS Code Remote Server is installed on the compute instance.
+1. A WebSocket connection is established for real-time interaction.
-### Add instance as a host
+Once the connection is established, it's persisted. A token is issued at the start of the session and is refreshed automatically to maintain the connection with your compute instance.
-Open the file `~/.ssh/config` (Linux) or `C:\Users<username>.ssh\config` (Windows) in an editor and add a new entry similar to the content below:
+After you connect to your remote compute instance, use the editor to:
-```
-Host azmlci1
+* [Author and manage files on your remote compute instance or file share](https://code.visualstudio.com/docs/editor/codebasics).
+* Use the [VS Code integrated terminal](https://code.visualstudio.com/docs/editor/integrated-terminal) to [run commands and applications on your remote compute instance](how-to-access-terminal.md).
+* [Debug your scripts and applications](https://code.visualstudio.com/Docs/editor/debugging).
+* [Use VS Code to manage your Git repositories](concept-train-model-git-integration.md).
- HostName 13.69.56.51
+## Configure compute instance as remote notebook server
- Port 50000
+To configure a compute instance as a remote Jupyter Notebook server, you'll need a few prerequisites:
- User azureuser
+* Azure Machine Learning Visual Studio Code extension. For more information, see the [Azure Machine Learning Visual Studio Code Extension setup guide](tutorial-setup-vscode-extension.md).
+* Azure Machine Learning workspace. [Use the Azure Machine Learning Visual Studio Code extension to create a new workspace](how-to-manage-resources-vscode.md#create-a-workspace) if you don't already have one.
- IdentityFile ~/.ssh/id_azmlcitest_rsa
-```
+To connect to a compute instance:
-Here some details on the fields:
+1. Open a Jupyter Notebook in Visual Studio Code.
+1. When the integrated notebook experience loads, select **Jupyter Server**.
-|Field|Description|
-|-||
-|Host|Use whatever shorthand you like for the compute instance |
-|HostName|This is the IP address of the compute instance |
-|Port|This is the port shown on the SSH dialog above |
-|User|This needs to be `azureuser` |
-|IdentityFile|Should point to the file where you saved the private key |
+ > [!div class="mx-imgBorder"]
+ > ![Launch Azure Machine Learning remote Jupyter Notebook server dropdown](media/how-to-set-up-vs-code-remote/launch-server-selection-dropdown.png)
-Now, you should be able to ssh to your compute instance using the shorthand you used above, `ssh azmlci1`.
+ Alternatively, you also use the command palette:
-### Connect VS Code to the instance
+ 1. Open the command palette by selecting **View > Command Palette** from the menu bar.
+ 1. Enter into the text box `Azure ML: Connect to Compute instance Jupyter server`.
-1. Click the Remote-SSH icon from the Visual Studio Code activity bar to show your SSH configurations.
+1. Choose `Azure ML Compute Instances` from the list of Jupyter server options.
+1. Select your subscription from the list of subscriptions. If you have previously configured your default Azure Machine Learning workspace, this step is skipped.
+1. Select your workspace.
+1. Select your compute instance from the list. If you don't have one, select **Create new Azure ML Compute Instance** and follow the prompts to create one.
+1. For the changes to take effect, you have to reload Visual Studio Code.
+1. Open a Jupyter Notebook and run a cell.
-1. Right-click the SSH host configuration you just created.
+> [!IMPORTANT]
+> You **MUST** run a cell in order to establish the connection.
-1. Select **Connect to Host in Current Window**.
+At this point, you can continue to run cells in your Jupyter Notebook.
-From here on, you are entirely working on the compute instance and you can now edit, debug, use git, use extensions, etc. -- just like you can with your local Visual Studio Code.
+> [!TIP]
+> You can also work with Python script files (.py) containing Jupyter-like code cells. For more information, see the [Visual Studio Code Python interactive documentation](https://code.visualstudio.com/docs/python/jupyter-support-py).
## Next steps
machine-learning Monitor Resource Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/monitor-resource-reference.md
Previously updated : 10/02/2020 Last updated : 04/07/2021 # Monitoring Azure machine learning data reference
This section lists all the automatically collected platform metrics collected fo
**Model** | Metric | Unit | Description |
-| -- | -- | -- |
-| Model deploy failed | Count | The number of model deployments that failed. |
-| Model deploy started | Count | The number of model deployments started. |
-| Model deploy succeeded | Count | The number of model deployments that succeeded. |
-| Model register failed | Count | The number of model registrations that failed. |
-| Model register succeeded | Count | The number of model registrations that succeeded. |
+|--|--|--|
+| Model Register Succeeded | Count | Number of model registrations that succeeded in this workspace |
+| Model Register Failed | Count | Number of model registrations that failed in this workspace |
+| Model Deploy Started | Count | Number of model deployments started in this workspace |
+| Model Deploy Succeeded | Count | Number of model deployments that succeeded in this workspace |
+| Model Deploy Failed | Count | Number of model deployments that failed in this workspace |
**Quota** Quota information is for Azure Machine Learning compute only. | Metric | Unit | Description |
-| -- | -- | -- |
-| Active cores | Count | The number of active compute cores. |
-| Active nodes | Count | The number of active nodes. |
-| Idle cores | Count | The number of idle compute cores. |
-| Idle nodes | Count | The number of idle compute nodes. |
-| Leaving cores | Count | The number of leaving cores. |
-| Leaving nodes | Count | The number of leaving nodes. |
-| Preempted cores | Count | The number of preempted cores. |
-| Preempted nodes | Count | The number of preempted nodes. |
-| Quota utilization percentage | Percent | The percentage of quota used. |
-| Total cores | Count | The total cores. |
-| Total nodes | Count | The total nodes. |
-| Unusable cores | Count | The number of unusable cores. |
-| Unusable nodes | Count | The number of unusable nodes. |
+|--|--|--|
+| Total Nodes | Count | Number of total nodes. This total includes Active Nodes, Idle Nodes, Unusable Nodes, Preempted Nodes, and Leaving Nodes |
+| Active Nodes | Count | Number of Active nodes. The nodes that are actively running a job. |
+| Idle Nodes | Count | Number of idle nodes. Idle nodes are the nodes that aren't running any jobs but can accept new jobs when available. |
+| Unusable Nodes | Count | Number of unusable nodes. Unusable nodes are not functional due to some unresolvable issue. Azure will recycle these nodes. |
+| Preempted Nodes | Count | Number of preempted nodes. These nodes are the low-priority nodes that are taken away from the available node pool. |
+| Leaving Nodes | Count | Number of leaving nodes. Leaving nodes are the nodes that just finished processing a job and will go to Idle state. |
+| Total Cores | Count | Number of total cores |
+| Active Cores | Count | Number of active cores |
+| Idle Cores | Count | Number of idle cores |
+| Unusable Cores | Count | Number of unusable cores |
+| Preempted Cores | Count | Number of preempted cores |
+| Leaving Cores | Count | Number of leaving cores |
+| Quota Utilization Percentage | Count | Percent of quota utilized |
**Resource**
-| Metric | Unit | Description |
-| -- | -- | -- |
-| CpuUtilization | Percent | How much percent of CPU was utilized for a given node during a run/job. This metric is published only when a job is running on a node. One job may use one or more nodes. This metric is published per node. |
-| GpuUtilization | Percent | How much percentage of GPU was utilized for a given node during a run/job. One node can have one or more GPUs. This metric is published per GPU per node. |
+| Metric| Unit | Description |
+|--|--|--|
+| CpuUtilization | Count | Percentage of utilization on a CPU node. Utilization is reported at one-minute intervals. |
+| GpuUtilization | Count | Percentage of utilization on a GPU node. Utilization is reported at one-minute intervals. |
+| GpuMemoryUtilization | Count | Percentage of memory utilization on a GPU node. Utilization is reported at one-minute intervals. |
+| GpuEnergyJoules | Count | Interval energy in Joules on a GPU node. Energy is reported at one-minute intervals. |
**Run**
-Information on training runs.
+Information on training runs for the workspace.
| Metric | Unit | Description |
-| -- | -- | -- |
-| Completed runs | Count | The number of completed runs. |
-| Failed runs | Count | The number of failed runs. |
-| Started runs | Count | The number of started runs. |
+|--|--|--|
+| Cancelled Runs | Count | Number of runs canceled for this workspace. Count is updated when a run is successfully canceled. |
+| Cancel Requested Runs | Count | Number of runs where cancel was requested for this workspace. Count is updated when a cancellation request has been received for a run. |
+| Completed Runs | Count | Number of runs completed successfully for this workspace. Count is updated when a run has completed and output has been collected. |
+| Failed Runs | Count | Number of runs failed for this workspace. Count is updated when a run fails. |
+| Finalizing Runs | Count | Number of runs that entered the finalizing state for this workspace. Count is updated when a run has completed but output collection is still in progress. |
+| Not Responding Runs | Count | Number of runs not responding for this workspace. Count is updated when a run enters Not Responding state. |
+| Not Started Runs | Count | Number of runs in Not Started state for this workspace. Count is updated when a request is received to create a run but run information has not yet been populated. |
+| Preparing Runs | Count | Number of runs that are preparing for this workspace. Count is updated when a run enters Preparing state while the run environment is being prepared. |
+| Provisioning Runs | Count | Number of runs that are provisioning for this workspace. Count is updated when a run is waiting on compute target creation or provisioning. |
+| Queued Runs | Count | Number of runs that are queued for this workspace. Count is updated when a run is queued in compute target. Can occur when waiting for required compute nodes to be ready. |
+| Started Runs | Count | Number of runs running for this workspace. Count is updated when run starts running on required resources. |
+| Starting Runs | Count | Number of runs started for this workspace. Count is updated after the request to create a run has been received and run information, such as the Run ID, has been populated. |
+| Errors | Count | Number of run errors in this workspace. Count is updated whenever run encounters an error. |
+| Warnings | Count | Number of run warnings in this workspace. Count is updated whenever a run encounters a warning. |
## Metric dimensions
machine-learning Overview What Is Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-azure-ml.md
Previously updated : 11/04/2020 Last updated : 04/08/2021 adobe-target: true
Forecasts or predictions from machine learning can make apps and devices smarter
## Machine learning tools to fit each task Azure Machine Learning provides all the tools developers and data scientists need for their machine learning workflows, including:
-+ The [Azure Machine Learning designer](tutorial-designer-automobile-price-train-score.md): drag-n-drop modules to build your experiments and then deploy pipelines.
++ The [Azure Machine Learning designer](tutorial-designer-automobile-price-train-score.md): drag-n-drop modules to build your experiments and then deploy pipelines in a low-code environment. + Jupyter notebooks: use our [example notebooks](https://github.com/Azure/MachineLearningNotebooks) or create your own notebooks to leverage our <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK for Python</a> samples for your machine learning.
Azure Machine Learning provides all the tools developers and data scientists nee
+ The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models.
-+ [Machine learning extension for Visual Studio Code users](tutorial-setup-vscode-extension.md)
++ [Machine learning extension for Visual Studio Code (preview)](how-to-set-up-vs-code-remote.md) provides you with a full-featured development environment for building and managing your machine learning projects.
-+ [Machine learning CLI](reference-azure-machine-learning-cli.md)
++ [Machine learning CLI](reference-azure-machine-learning-cli.md) is an Azure CLI extension that provides commands for working with Azure Machine Learning resources from the command line.
-+ Open-source frameworks such as PyTorch, TensorFlow, and scikit-learn and many more
++ [Integration with open-source frameworks](concept-open-source.md) such as PyTorch, TensorFlow, and scikit-learn and many more for training, deploying, and managing the end-to-end machine learning process. + [Reinforcement learning](how-to-use-reinforcement-learning.md) with Ray RLlib
Your Azure Storage account, compute targets, and other resources can be used sec
- + [Get started in your own development environment](tutorial-1st-experiment-sdk-setup-local.md) + [Use Jupyter notebooks on a compute instance to train & deploy ML models](tutorial-1st-experiment-sdk-setup.md) + [Use automated machine learning to train & deploy ML models](tutorial-first-experiment-automated-ml.md)
+ + [Manage resources in Visual Studio Code](how-to-manage-resources-vscode.md)
+ + [Use Visual Studio Code to train and deploy an image classification model](tutorial-train-deploy-image-classification-model-vscode.md)
+ [Use the designer's drag & drop capabilities to train & deploy](tutorial-designer-automobile-price-train-score.md) + [Use the machine learning CLI to train and deploy a model](tutorial-train-deploy-model-cli.md)
media-services Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/video-indexer-embed-widgets.md
You can use the Editor widget to create new projects and manage a video's insigh
<sup>*</sup>The owner should provide `accessToken` with caution.
-## Embedding videos
+## Embed videos
-This section discusses embedding the public and private content into apps.
+This section discusses how to embed videos into your apps, either by [using the portal](#the-portal-experience) or by [assembling the URL manually](#assemble-the-url-manually).
The `location` parameter must be included in the embedded links; see [how to get the name of your region](regions.md). If your account is in preview, use `trial` for the location value. `trial` is the default value for the `location` parameter. For example: `https://www.videoindexer.ai/accounts/00000000-0000-0000-0000-000000000000/videos/b2b2c74b8e/?location=trial`.
-> [!IMPORTANT]
-> Sharing a link for the **Player** or **Insights** widget will include the access token and grant the read-only permissions to your account.
+### The portal experience
-### Public content
+To embed a video, use the portal as described below:
1. Sign in to the [Video Indexer](https://www.videoindexer.ai/) website. 1. Select the video that you want to work with and press **Play**.
The `location` parameter must be included in the embedded links, see [how to get
5. Copy the embed code (appears in **Copy the embedded code** in the **Share & Embed** dialog). 6. Add the code to your app.
-### Private content
+> [!NOTE]
+> Sharing a link for the **Player** or **Insights** widget includes the access token and grants read-only permissions to your account.
-To embed a private video, you must pass an access token in the `src` attribute of the iframe:
+### Assemble the URL manually
-`https://www.videoindexer.ai/embed/[insights | player]/<accountId>/<videoId>/?accessToken=<accessToken>`
-
-To get the Cognitive Insights widget content, use one of the following methods:
+#### Public videos
-- The [Get Insights Widget](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Video-Insights-Widget?&pattern=widget) API.<br/>-- The [Get Video Access Token](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Video-Access-Token?). Add it as a query parameter to the URL. Specify this URL as the `src` value for the iframe, as shown earlier.
+You can embed public videos by assembling the URL as follows:
+
+`https://www.videoindexer.ai/embed/[insights | player]/<accountId>/<videoId>`
+
+
+#### Private videos
+
+To embed a private video, you must pass an access token (use [Get Video Access Token](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Video-Access-Token?)) in the `src` attribute of the iframe:
+
+`https://www.videoindexer.ai/embed/[insights | player]/<accountId>/<videoId>/?accessToken=<accessToken>`
+
+### Provide editing insights capabilities
-To provide editing insights capabilities in your embedded widget, you must pass an access token that includes editing permissions. Use [Get Insights Widget](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Video-Insights-Widget?&pattern=widget) or [Get Video Access Token](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Video-Access-Token?) with `&allowEdit=true`.
+To provide editing insights capabilities in your embedded widget, you must pass an access token that includes editing permissions. Use [Get Video Access Token](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Video-Access-Token?) with `&allowEdit=true`.
## Widgets interaction
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-dbforge-studio-for-mysql.md
As a result of our database migration efforts, the *world_x* database has succes
dbForge Studio for MySQL incorporates a few tools for migrating MySQL databases, MySQL schemas, and/or data to Azure. The choice of functionality depends on your needs and the requirements of your project. If you need to move a database selectively, that is, migrate only certain MySQL tables to Azure, it's best to use the Schema and Data Compare functionality. In this example, we migrate the *world* database that resides on a MySQL server to Azure Database for MySQL. The logic behind the migration process with the Schema and Data Compare functionality of dbForge Studio for MySQL is to create an empty database in Azure Database for MySQL, then synchronize it with the required MySQL database, first by using the Schema Compare tool and then the Data Compare tool. This way, MySQL schemas and data are accurately moved to Azure.
-### Connect to Azure Database for MySQL and create an empty database
-
-Connect to an Azure Database for MySQL and create an empty database.
+### Step 1. Connect to Azure Database for MySQL and create an empty database
### Step 2. Schema synchronization
postgresql Concepts Hyperscale Columnar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-columnar.md
+
+ Title: Columnar table storage preview - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Compressing data using columnar storage (preview)
+++++ Last updated : 04/07/2021++
+# Columnar table storage (preview)
+
+> [!IMPORTANT]
+> Columnar table storage in Hyperscale (Citus) is currently in preview. This
+> preview version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only
+columnar table storage for analytic and data warehousing workloads. When
+columns (rather than rows) are stored contiguously on disk, data becomes more
+compressible, and queries can request a subset of columns more quickly.
+
+To use columnar storage, specify `USING columnar` when creating a table:
+
+```postgresql
+CREATE TABLE contestant (
+ handle TEXT,
+ birthdate DATE,
+ rating INT,
+ percentile FLOAT,
+ country CHAR(3),
+ achievements TEXT[]
+) USING columnar;
+```
+
+Hyperscale (Citus) converts rows to columnar storage in "stripes" during
+insertion. Each stripe holds one transaction's worth of data, or 150000 rows,
+whichever is less. (The stripe size and other parameters of a columnar table
+can be changed with the
+[alter_columnar_table_set](reference-hyperscale-functions.md#alter_columnar_table_set)
+function.)
+
+For example, the following statement puts all five rows into the same stripe,
+because all values are inserted in a single transaction:
+
+```postgresql
+-- insert these values into a single columnar stripe
+
+INSERT INTO contestant VALUES
+ ('a','1990-01-10',2090,97.1,'XA','{a}'),
+ ('b','1990-11-01',2203,98.1,'XA','{a,b}'),
+ ('c','1988-11-01',2907,99.4,'XB','{w,y}'),
+ ('d','1985-05-05',2314,98.3,'XB','{}'),
+ ('e','1995-05-05',2236,98.2,'XC','{a}');
+```
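+
+Once rows are loaded, queries that read only a subset of columns benefit the
+most, because only the requested columns are fetched from storage. A minimal
+query sketch against the sample table above:
+
+```postgresql
+-- reads only the handle and rating columns of the columnar table
+SELECT handle, rating
+FROM contestant
+ORDER BY rating DESC
+LIMIT 3;
+```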
+
+It's best to make large stripes when possible, because Hyperscale (Citus)
+compresses columnar data separately per stripe. We can see facts about our
+columnar table like compression rate, number of stripes, and average rows per
+stripe by using `VACUUM VERBOSE`:
+
+```postgresql
+VACUUM VERBOSE contestant;
+```
+```
+INFO: statistics for "contestant":
+storage id: 10000000000
+total file size: 24576, total data size: 248
+compression rate: 1.31x
+total row count: 5, stripe count: 1, average rows per stripe: 5
+chunk count: 6, containing data for dropped columns: 0, zstd compressed: 6
+```
+
+The output shows that Hyperscale (Citus) used the zstd compression algorithm to
+obtain 1.31x data compression. The compression rate compares a) the size of
+inserted data as it was staged in memory against b) the size of that data
+compressed in its eventual stripe.
+
+Because of how it's measured, the compression rate may or may not match the
+size difference between row and columnar storage for a table. The only way
+to truly find that difference is to construct a row and columnar table that
+contain the same data, and compare:
+
+```postgresql
+CREATE TABLE contestant_row AS
+ SELECT * FROM contestant;
+
+SELECT pg_total_relation_size('contestant_row') as row_size,
+ pg_total_relation_size('contestant') as columnar_size;
+```
+```
+ row_size | columnar_size
+----------+---------------
+ 16384 | 24576
+```
+
+For our tiny table, the columnar storage actually uses more space, but as the
+data grows, compression will win.
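+
+The compression algorithm and stripe settings reported by `VACUUM VERBOSE` can
+be adjusted per table with the `alter_columnar_table_set` function mentioned
+earlier. A minimal sketch, assuming the `compression` parameter as described in
+the linked function reference:
+
+```postgresql
+-- write future stripes of this table without compression
+SELECT alter_columnar_table_set('contestant', compression => 'none');
+```
+
+Settings changed this way apply to newly written stripes; existing stripes keep
+the format they were written with.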
+
+## Example
+
+Columnar storage works well with table partitioning. For an example, see the
+Citus Engine community documentation, [archiving with columnar
+storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage).
+
+## Gotchas
+
+* Columnar storage compresses per stripe. Stripes are created per transaction,
+ so inserting one row per transaction will put single rows into their own
+ stripes. Compression and performance of single row stripes will be worse than
+ a row table. Always insert in bulk to a columnar table.
+* Existing stripes can't be compacted or merged after the fact. If a table ends
+  up with a bunch of tiny stripes, the only fix is to create a new columnar
+  table and copy data from the original in one transaction:
+ ```postgresql
+ BEGIN;
+ CREATE TABLE foo_compacted (LIKE foo) USING columnar;
+ INSERT INTO foo_compacted SELECT * FROM foo;
+ DROP TABLE foo;
+ ALTER TABLE foo_compacted RENAME TO foo;
+ COMMIT;
+ ```
+* Fundamentally non-compressible data can be a problem, although columnar
+ storage is still useful when selecting specific columns. It doesn't need
+ to load the other columns into memory.
+* On a partitioned table with a mix of row and columnar partitions, updates must
+  be carefully targeted. Filter them to hit only the row partitions (see the
+  sketch after this list).
+  * If the operation is targeted at a specific row partition (for example,
+    `UPDATE p2 SET i = i + 1`), it will succeed; if targeted at a specific columnar
+    partition (for example, `UPDATE p1 SET i = i + 1`), it will fail.
+ * If the operation is targeted at the partitioned table and has a WHERE
+ clause that excludes all columnar partitions (for example
+ `UPDATE parent SET i = i + 1 WHERE timestamp = '2020-03-15'`),
+ it will succeed.
+ * If the operation is targeted at the partitioned table, but does not
+ filter on the partition key columns, it will fail. Even if there are
+ WHERE clauses that match rows in only columnar partitions, it's not
+ enough--the partition key must also be filtered.
+
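+A minimal sketch of these targeting rules, using hypothetical table and
+partition names (`events`, `events_feb`, `events_mar`):
+
+```postgresql
+-- parent table partitioned on the partition key "tstamp"
+CREATE TABLE events (tstamp date, i int) PARTITION BY RANGE (tstamp);
+
+-- one columnar partition and one regular row (heap) partition
+CREATE TABLE events_feb PARTITION OF events
+  FOR VALUES FROM ('2020-02-01') TO ('2020-03-01') USING columnar;
+CREATE TABLE events_mar PARTITION OF events
+  FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
+
+-- succeeds: the filter on the partition key excludes the columnar partition
+UPDATE events SET i = i + 1 WHERE tstamp = '2020-03-15';
+
+-- fails: no filter on the partition key, so the columnar partition is targeted too
+-- UPDATE events SET i = i + 1;
+```
+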
+## Limitations
+
+This feature still has a number of significant limitations. See [Hyperscale
+(Citus) limits and limitations](concepts-hyperscale-limits.md#columnar-storage).
+
+## Next steps
+
+* See an example of columnar storage in a Citus [timeseries
+ tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage)
+ (external link).
postgresql Concepts Hyperscale Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-configuration-options.md
Previously updated : 1/12/2021+ Last updated : 04/07/2021 # Azure Database for PostgreSQL – Hyperscale (Citus) configuration options
provisioning refers to the capacity available to the coordinator
and worker nodes in your Hyperscale (Citus) server group. The storage includes database files, temporary files, transaction logs, and the Postgres server logs.+
+### Standard tier
| Resource | Worker node | Coordinator node | |--|--|--|
following values:
| 19 | 29,184 | 58,368 | 116,812 | | 20 | 30,720 | 61,440 | 122,960 |
+### Basic tier (preview)
+
+> [!IMPORTANT]
+> The Hyperscale (Citus) basic tier is currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+The Hyperscale (Citus) [basic tier](concepts-hyperscale-tiers.md) is a server
+group with just one node. Because there isn't a distinction between
+coordinator and worker nodes, it's less complicated to choose compute and
+storage resources.
+
+| Resource | Available options |
+|--|--|
+| Compute, vCores | 2, 4, 8 |
+| Memory per vCore, GiB | 4 |
+| Storage size, GiB | 128, 256, 512 |
+| Storage type | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | GiB RAM |
+|--||
+| 2 | 8 |
+| 4 | 16 |
+| 8 | 32 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to the basic tier node.
+
+| Storage size, GiB | Maximum IOPS |
+|-|--|
+| 128 | 384 |
+| 256 | 768 |
+| 512 | 1,536 |
+ ## Regions Hyperscale (Citus) server groups are available in the following Azure regions: * Americas: * Canada Central * Central US
- * East US
+ * East US *
* East US 2 * North Central US * West US 2
Hyperscale (Citus) server groups are available in the following Azure regions:
* UK South * West Europe
+(\* = supports [preview features](hyperscale-preview-features.md))
+ Some of these regions may not be initially activated on all Azure subscriptions. If you want to use a region from the list above and don't see it in your subscription, or if you want to use a region not on this list, open a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-## Limits and limitations
-
-The following section describes capacity and functional limits in the
-Hyperscale (Citus) service.
-
-### Maximum connections
-
-Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
-it's important to limit simultaneous connections. Here are the limits we chose
-to keep nodes healthy:
-
-* Coordinator node
- * Maximum connections: 300
- * Maximum user connections: 297
-* Worker node
- * Maximum connections: 600
- * Maximum user connections: 597
-
-Attempts to connect beyond these limits will fail with an error. The system
-reserves three connections for monitoring nodes, which is why there are three
-fewer connections available for user queries than connections total.
-
-Establishing new connections takes time. That works against most applications,
-which request many short-lived connections. We recommend using a connection
-pooler, both to reduce idle transactions and reuse existing connections. To
-learn more, visit our [blog
-post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
-
-### Storage scaling
-
-Storage on coordinator and worker nodes can be scaled up (increased) but can't
-be scaled down (decreased).
-
-### Storage size
-
-Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
-available storage options and IOPS calculation [above](#compute-and-storage)
-for node and cluster sizes.
-
-### Database creation
-
-The Azure portal provides credentials to connect to exactly one database per
-Hyperscale (Citus) server group, the `citus` database. Creating another
-database is currently not allowed, and the CREATE DATABASE command will fail
-with an error.
- ## Pricing For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
postgresql Concepts Hyperscale Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-extensions.md
Previously updated : 07/09/2020 Last updated : 04/07/2021 # PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus)
Azure Database for PostgreSQL - Hyperscale (Citus) currently supports a subset o
The following tables list the standard PostgreSQL extensions that are currently supported by Azure Database for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
+The versions of each extension installed in a server group sometimes differ based on the version of PostgreSQL (11, 12, or 13). The tables list extension versions per database version.
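+
+A minimal sketch for checking extension versions from a database session, using
+a few extensions from the tables below as examples:
+
+```postgresql
+-- available extensions, with their default and currently installed versions
+SELECT name, default_version, installed_version
+FROM pg_available_extensions
+WHERE name IN ('citus', 'postgis', 'pg_cron')
+ORDER BY name;
+```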
+
+### Citus extension
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5-1 | 9.5-1 | 10.0-2 |
+ ### Data types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.14 | 2.15 | 2.15 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.0 | 1.0 | 1.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.2.2 | 2.3.1 | 2.3.1 |
### Full-text search extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 |
### Functions extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). |
-> | session\_analytics | Functions for querying hstore arrays. |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. |
-> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). |
-
-### Hyperscale (Citus) extensions
-
-> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.1 | 4.4.1 | 4.4.1 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 |
+> | session\_analytics | Functions for querying hstore arrays. | | | |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 |
### Index types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 |
### Language extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 |
### Miscellaneous extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [adminpack](https://www.postgresql.org/docs/current/adminpack.html) | Administrative functions for PostgreSQL. |
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. |
-> | [file\_fdw](https://www.postgresql.org/docs/current/file-fdw.html) | Foreign-data wrapper for flat file access. |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.|
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [adminpack](https://www.postgresql.org/docs/current/adminpack.html) | Administrative functions for PostgreSQL. | 2.0 | 2.0 | 2.1 |
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 |
+> | [file\_fdw](https://www.postgresql.org/docs/current/file-fdw.html) | Foreign-data wrapper for flat file access. | 1.0 | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.1 | 1.3 | 1.3 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 |
### PostGIS extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** |
-> |||
-> | [PostGIS](https://www.postgis.net/), postgis\_topology, postgis\_tiger\_geocoder, postgis\_sfcgal | Spatial and geographic objects for PostgreSQL. |
-> | address\_standardizer, address\_standardizer\_data\_us | Used to parse an address into constituent elements. Used to support geocoding address normalization step. |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. |
-> | postgis\_tiger\_geocoder | PostGIS tiger geocoder and reverse geocoder. |
-> | postgis\_topology | PostGIS topology spatial types and functions. |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** |
+> ||||||
+> | [PostGIS](https://www.postgis.net/), postgis\_topology, postgis\_tiger\_geocoder, postgis\_sfcgal | Spatial and geographic objects for PostgreSQL. | 2.5.1 | 3.0.3 | 3.0.3 |
+> | address\_standardizer, address\_standardizer\_data\_us | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.1 | 3.0.3 | 3.0.3 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.1 | 3.0.3 | 3.0.3 |
+> | postgis\_tiger\_geocoder | PostGIS tiger geocoder and reverse geocoder. | 2.5.1 | 3.0.3 | 3.0.3 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.1 | 3.0.3 | 3.0.3 |
## pg_stat_statements
can be found in the Azure portal page for the Hyperscale (Citus) server group
under **Networking**. Currently, outbound connections from Azure Database for PostgreSQL Single server and Hyperscale (Citus) aren't supported, except for connections to other Azure Database for PostgreSQL servers and Hyperscale
-(Citus) server groups.
+(Citus) server groups.
postgresql Concepts Hyperscale Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-limits.md
+
+ Title: Limits and limitations – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Current limits for Hyperscale (Citus) server groups
+++++ Last updated : 04/07/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
+
+The following section describes capacity and functional limits in the
+Hyperscale (Citus) service.
+
+## Maximum connections
+
+Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
+it's important to limit simultaneous connections. Here are the limits we chose
+to keep nodes healthy:
+
+* Coordinator node
+ * Maximum connections: 300
+ * Maximum user connections: 297
+* Worker node
+ * Maximum connections: 600
+ * Maximum user connections: 597
+
+> [!NOTE]
+> In a server group with [preview features](hyperscale-preview-features.md)
+> enabled, the connection limits to the coordinator are slightly different:
+>
+> * Coordinator node max connections
+> * 300 for 0-3 vCores
+> * 500 for 4-15 vCores
+> * 1000 for 16+ vCores
+
+Attempts to connect beyond these limits will fail with an error. The system
+reserves three connections for monitoring nodes, which is why there are three
+fewer connections available for user queries than connections total.
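+
+To check how close a server group is to these limits, you can compare the current connection count against `max_connections` on the node you're connected to. This is a minimal sketch using standard PostgreSQL views, not a Hyperscale-specific API:
+
+```sql
+-- Current connections, grouped by state (active, idle, and so on)
+SELECT state, count(*)
+FROM pg_stat_activity
+GROUP BY state;
+
+-- Configured connection limit on this node
+SHOW max_connections;
+```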
+
+### Connection pooling
+
+Establishing new connections takes time, which works against applications that
+open many short-lived connections. We recommend using a connection
+pooler, both to reduce idle transactions and to reuse existing connections. To
+learn more, visit our [blog
+post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+You can run your own connection pooler, or use PgBouncer managed by Azure.
+
+#### Managed PgBouncer (preview)
+
+> [!IMPORTANT]
+> The managed PgBouncer connection pooler in Hyperscale (Citus) is currently in
+> preview. This preview version is provided without a service level agreement,
+> and it's not recommended for production workloads. Certain features might not
+> be supported or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+Connection poolers such as PgBouncer allow more clients to connect to the
+coordinator node at once. Applications connect to the pooler, and the pooler
+relays commands to the destination database.
+
+When clients connect through PgBouncer, the number of connections that can
+actively run in the database doesn't change. Instead, PgBouncer queues excess
+connections and runs them when the database is ready.
+
+Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
+groups (in preview). It supports up to 2,000 simultaneous client connections.
+To connect through PgBouncer, follow these steps:
+
+1. Go to the **Connection strings** page for your server group in the Azure
+ portal.
+2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
+ strings will change.)
+3. Update client applications to connect with the new string.
+
+## Storage scaling
+
+Storage on coordinator and worker nodes can be scaled up (increased) but can't
+be scaled down (decreased).
+
+## Storage size
+
+Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
+available storage options and IOPS calculation
+[above](concepts-hyperscale-configuration-options.md#compute-and-storage) for
+node and cluster sizes.
+
+## Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+Hyperscale (Citus) server group, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
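+
+For example, a session connected to the provided `citus` database behaves like this (a sketch of the expected behavior; the exact error text may differ):
+
+```sql
+-- Supported: create objects inside the citus database
+CREATE SCHEMA analytics;
+
+-- Not supported on Hyperscale (Citus): creating additional databases
+CREATE DATABASE reporting;  -- fails with an error
+```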
+
+## Columnar storage
+
+Hyperscale (Citus) currently has these limitations with [columnar
+tables](concepts-hyperscale-columnar.md):
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No tidscans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
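+
+As a concrete illustration of the append-only limitation, here's a minimal sketch (assuming the Citus 10 `columnar` access method; the table name is hypothetical):
+
+```sql
+-- Create a columnar table (PostgreSQL 12+ with Citus 10)
+CREATE TABLE events_columnar (
+    event_id bigint,
+    payload  text
+) USING columnar;
+
+-- Inserts and reads work as usual
+INSERT INTO events_columnar VALUES (1, 'created');
+SELECT * FROM events_columnar;
+
+-- UPDATE and DELETE aren't supported on columnar tables and will error:
+-- UPDATE events_columnar SET payload = 'changed' WHERE event_id = 1;
+```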
+
+## Next steps
+
+Learn how to [create a Hyperscale (Citus) server group in the
+portal](quickstart-create-hyperscale-portal.md).
postgresql Concepts Hyperscale Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-read-replicas.md
+
+ Title: Read replicas - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: This article describes the read replica feature in Azure Database for PostgreSQL - Hyperscale (Citus).
++++ Last updated : 04/07/2021++
+# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+> [!IMPORTANT]
+> Read replicas in Hyperscale (Citus) are currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+The read replica feature allows you to replicate data from a Hyperscale (Citus)
+server group to a read-only server group. Replicas are updated
+**asynchronously** with PostgreSQL physical replication technology. You can
+replicate from the primary server to an unlimited number of replicas.
+
+Replicas are new server groups that you manage similar to regular Hyperscale
+(Citus) server groups. For each read replica, you're billed for the provisioned
+compute in vCores and storage in GB/month.
+
+Learn how to [create and manage
+replicas](howto-hyperscale-read-replicas-portal.md).
+
+## When to use a read replica
+
+The read replica feature helps to improve the performance and scale of
+read-intensive workloads. Read workloads can be isolated to the replicas, while
+write workloads can be directed to the primary.
+
+A common scenario is to have BI and analytical workloads use the read replica
+as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity
+burdens on the primary.
+
+### Considerations
+
+The feature is meant for scenarios where replication lag is acceptable, and for
+offloading queries. It isn't meant for synchronous replication
+scenarios where replica data is expected to be up to date. There will be a
+measurable delay between the primary and the replica. The delay can be minutes
+or even hours depending on the workload and the latency between the primary and
+the replica. The data on the replica eventually becomes consistent with the
+data on the primary. Use this feature for workloads that can accommodate this
+delay.
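+
+To get a sense of the current lag, you can query the replica directly. This is a minimal sketch that uses standard PostgreSQL functions rather than a Hyperscale-specific view; run it on the replica's coordinator node:
+
+```sql
+-- Approximate replication delay as seen from the replica
+SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
+```
+
+Note that the reported value can grow while the primary is idle, because it measures the time since the last replayed transaction.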
+
+## Create a replica
+
+When you start the create replica workflow, a blank Hyperscale (Citus) server
+group is created. The new group is filled with the data that was on the primary
+server group. The creation time depends on the amount of data on the primary
+and the time since the last weekly full backup. The time can range from a few
+minutes to several hours.
+
+The read replica feature uses PostgreSQL physical replication, not logical
+replication. The default mode is streaming replication using replication slots.
+When necessary, log shipping is used to catch up.
+
+Learn how to [create a read replica in the Azure
+portal](howto-hyperscale-read-replicas-portal.md).
+
+## Connect to a replica
+
+When you create a replica, it doesn't inherit firewall rules from the primary
+server group. These rules must be set up independently for the replica.
+
+The replica inherits the admin ("citus") account from the primary server group.
+All user accounts are replicated to the read replicas. You can only connect to
+a read replica by using the user accounts that are available on the primary
+server.
+
+You can connect to the replica's coordinator node by using its hostname and a
+valid user account, as you would on a regular Hyperscale (Citus) server group.
+For a server group named **myreplica** with the admin username **citus**, you can
+connect to the coordinator node of the replica by using psql:
+
+```bash
+psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
+```
+
+At the prompt, enter the password for the user account.
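+
+Once connected, you can confirm that the node is a read-only replica rather than the primary. A minimal check using a standard PostgreSQL function:
+
+```sql
+-- Returns true on a read replica, false on the primary
+SELECT pg_is_in_recovery();
+```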
+
+## Considerations
+
+This section summarizes considerations about the read replica feature.
+
+### New replicas
+
+A read replica is created as a new Hyperscale (Citus) server group. An existing
+server group can't be made into a replica. You can't create a replica of
+another read replica.
+
+### Replica configuration
+
+A replica is created by using the same compute, storage, and worker node
+settings as the primary. After a replica is created, several settings can be
+changed, including storage and backup retention period. Other settings can't be
+changed in replicas, such as storage size and number of worker nodes.
+
+Remember to keep replicas strong enough to keep up with changes arriving from the
+primary. For instance, be sure to upscale compute power in replicas if you
+upscale it on the primary.
+
+Firewall rules and parameter settings aren't inherited from the primary server
+group to the replica, either when the replica is created or afterwards.
+
+### Regions
+
+Hyperscale (Citus) server groups support only same-region replication.
+
+## Next steps
+
+* Learn how to [create and manage read replicas in the Azure
+ portal](howto-hyperscale-read-replicas-portal.md).
postgresql Concepts Hyperscale Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-tiers.md
+
+ Title: Basic tier preview - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: The single node basic tier for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 04/07/2021++
+# Basic tier (preview)
+
+> [!IMPORTANT]
+> The Hyperscale (Citus) basic tier is currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
+simple way to create a small server group that you can scale later. While
+server groups in the standard tier have a coordinator node and at least two
+worker nodes, the basic tier runs everything in a single database node.
+
+Other than using fewer nodes, the basic tier has all the features of the
+standard tier. Like the standard tier, it supports high availability, read
+replicas, and columnar table storage, among other features.
+
+## Choosing basic vs standard tier
+
+The basic tier can be an economical and convenient deployment option for
+initial development, testing, and continuous integration. It uses a single
+database node and presents the same SQL API as the standard tier. You can test
+applications with the basic tier and later [graduate to the standard
+tier](howto-hyperscale-scale-grow.md#add-worker-nodes) with confidence that the
+interface remains the same.
+
+The basic tier is also appropriate for smaller workloads in production (once it
+emerges from preview into general availability). There is room to scale
+vertically *within* the basic tier by increasing the number of server vCores.
+
+When greater scale is required right away, use the standard tier. Its smallest
+allowed server group has one coordinator node and two workers. You can choose
+to use more nodes based on your use-case, as described in our [initial
+sizing](howto-hyperscale-scale-initial.md) how-to.
+
+## Next steps
+
+* Learn to [provision the basic tier](quickstart-create-hyperscale-basic-tier.md)
+* When you're ready, see [how to graduate](howto-hyperscale-scale-grow.md#add-worker-nodes) from the basic tier to the standard tier
+* The [columnar storage](concepts-hyperscale-columnar.md) option is available in both the basic and standard tier
postgresql Concepts Hyperscale Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-versions.md
+
+ Title: Supported versions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: PostgreSQL versions available in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 04/07/2021++
+# Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+## PostgreSQL versions
+
+> [!IMPORTANT]
+> Customizable PostgreSQL versions in Hyperscale (Citus) is currently in
+> preview. This preview is provided without a service level agreement, and
+> it's not recommended for production workloads. Certain features might not be
+> supported or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+The version of PostgreSQL running in a Hyperscale (Citus) server group is
+customizable during creation. Choosing anything other than version 11 is
+currently a preview feature.
+
+Hyperscale (Citus) currently supports the following major versions:
+
+### PostgreSQL version 13 (preview)
+
+The current minor release is 13.2. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/13/static/release-13-2.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 12 (preview)
+
+The current minor release is 12.6. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/12/static/release-12-6.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 11
+
+The current minor release is 11.11. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/11/static/release-11-11.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 10 and older
+
+We do not support PostgreSQL version 10 and older for Azure Database for
+PostgreSQL - Hyperscale (Citus).
+
+## Citus and other extension versions
+
+Depending on which version of PostgreSQL is running in a server group,
+different [versions of Postgres extensions](concepts-hyperscale-extensions.md)
+will be installed as well. In particular, Postgres 13 comes with Citus 10, and
+earlier Postgres versions come with Citus 9.5.
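+
+To see which PostgreSQL and Citus versions a server group is actually running, you can query the coordinator node. A minimal sketch using standard catalog queries:
+
+```sql
+-- PostgreSQL server version
+SELECT version();
+
+-- Installed Citus extension version
+SELECT extversion FROM pg_extension WHERE extname = 'citus';
+```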
+
+## Next steps
+
+* See which [extensions](concepts-hyperscale-extensions.md) are installed in
+ which versions.
+* Learn to [create a Hyperscale (Citus) server
+ group](quickstart-create-hyperscale-portal.md).
postgresql Howto Hyperscale Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Learn how to manage read replicas Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
++++ Last updated : 04/07/2021++
+# Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
+
+> [!IMPORTANT]
+> Read replicas in Hyperscale (Citus) are currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+In this article, you learn how to create and manage read replicas in Hyperscale
+(Citus) from the Azure portal. To learn more about read replicas, see the
+[overview](concepts-hyperscale-read-replicas.md).
++
+## Prerequisites
+
+A [Hyperscale (Citus) server group](quickstart-create-hyperscale-portal.md) to
+be the primary.
+
+## Create a read replica
+
+To create a read replica, follow these steps:
+
+1. Select an existing Azure Database for PostgreSQL server group to use as the
+ primary.
+
+2. On the server group sidebar, under **Server group management**, select
+ **Replication**.
+
+3. Select **Add Replica**.
+
+4. Enter a name for the read replica.
+
+5. Select **OK** to confirm the creation of the replica.
+
+After the read replica is created, it can be viewed from the **Replication** window.
+
+> [!IMPORTANT]
+>
+> Review the [considerations section of the Read Replica
+> overview](concepts-hyperscale-read-replicas.md#considerations).
+>
+> Before a primary server group setting is updated to a new value, update the
+> replica setting to an equal or greater value. This action helps the replica
+> keep up with any changes made to the primary.
+
+## Delete a primary server group
+
+To delete a primary server group, you use the same steps as to delete a
+standalone Hyperscale (Citus) server group.
+
+> [!IMPORTANT]
+>
+> When you delete a primary server group, replication to all read replicas is
+> stopped. The read replicas become standalone server groups that now support
+> both reads and writes.
+
+To delete a server group from the Azure portal, follow these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL
+ server group.
+
+2. Open the **Overview** page for the server group. Select **Delete**.
+
+3. Enter the name of the primary server group to delete. Select **Delete** to
+ confirm deletion of the primary server group.
+
+
+## Delete a replica
+
+You can delete a read replica similarly to how you delete a primary server
+group.
+
+- In the Azure portal, open the **Overview** page for the read replica. Select
+ **Delete**.
+
+You can also delete the read replica from the **Replication** window by
+following these steps:
+
+1. In the Azure portal, select your primary Hyperscale (Citus) server group.
+
+2. On the server group menu, under **Server group management**, select
+ **Replication**.
+
+3. Select the read replica to delete.
+
+4. Select **Delete replica**.
+
+5. Enter the name of the replica to delete. Select **Delete** to confirm
+ deletion of the replica.
+
+## Next steps
+
+* Learn more about [read replicas in Azure Database for
+ PostgreSQL - Hyperscale (Citus)](concepts-hyperscale-read-replicas.md).
postgresql Howto Hyperscale Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-scale-grow.md
Previously updated : 11/17/2020 Last updated : 04/07/2021 # Scale a Hyperscale (Citus) server group
queries.
To add nodes, go to the **Compute + storage** tab in your Hyperscale (Citus) server group. Dragging the slider for **Worker node count** changes the value.
+> [!NOTE]
+>
+> A Hyperscale (Citus) server group created with the [basic tier
+> (preview)](concepts-hyperscale-tiers.md) has no workers. Increasing the
+> worker count automatically graduates the server group to the standard tier.
+> After graduating a server group to the standard tier, you can't downgrade it
+> back to the basic tier.
+ :::image type="content" source="./media/howto-hyperscale-scaling/01-sliders-workers.png" alt-text="Resource sliders"::: Click the **Save** button to make the changed value take effect.
Click the **Save** button to make the changed value take effect.
In addition to adding new nodes, you can increase the capabilities of existing nodes. Adjusting compute capacity up and down can be useful for performance
-experiments as well as short- or long-term changes to traffic demands.
+experiments, and short- or long-term changes to traffic demands.
To change the vCores for all worker nodes, adjust the **vCores** slider under **Configuration (per worker node)**. The coordinator node's vCores can be
postgresql Howto Hyperscale Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-scale-initial.md
Previously updated : 11/17/2020 Last updated : 04/07/2021 # Pick initial size for Hyperscale (Citus) server group
-The size of a server group, both its number of nodes and their hardware
-capacity, is [easy to change](howto-hyperscale-scale-grow.md)). However you
-still need to choose an initial size for a new server group. Here are some tips for a reasonable choice.
+The size of a server group, both number of nodes and their hardware capacity,
+is [easy to change](howto-hyperscale-scale-grow.md). However, you still need to
+choose an initial size for a new server group. Here are some tips for a
+reasonable choice.
-## Multi-tenant SaaS use-case
+## Use-cases
-When migrating to Hyperscale (Citus) from an existing single-node
-PostgreSQL database instance, choose a cluster where the number
-of worker vCores and RAM in total equals that of the original instance. In such
-scenarios we have seen 2-3x performance improvements because sharding improves
-resource utilization, allowing smaller indices etc.
+Hyperscale (Citus) is frequently used in the following ways.
+
+### Multi-tenant SaaS
+
+When migrating to Hyperscale (Citus) from an existing single-node PostgreSQL
+database instance, choose a cluster where the number of worker vCores and RAM
+in total equals that of the original instance. In such scenarios we have seen
+2-3x performance improvements because sharding improves resource utilization,
+allowing smaller indices etc.
The vCore count is actually the only decision. RAM allocation is currently determined based on vCore count, as described in the [Hyperscale (Citus)
configuration options](concepts-hyperscale-configuration-options.md) page.
The coordinator node doesn't require as much RAM as workers, but there's no way to choose RAM and vCores independently.
-## Real-time analytics use-case
+### Real-time analytics
Total vCores: when working data fits in RAM, you can expect a linear performance improvement on Hyperscale (Citus) proportional to the number of
the current latency for queries in your single-node database and the required
latency in Hyperscale (Citus). Divide current latency by desired latency, and round the result.
-Worker RAM: the best case would be providing enough memory that the majority of
+Worker RAM: the best case would be providing enough memory that most of
the working set fits in memory. The type of queries your application uses affect memory requirements. You can run EXPLAIN ANALYZE on a query to determine how much memory it requires. Remember that vCores and RAM are scaled together as described in the [Hyperscale (Citus) configuration options](concepts-hyperscale-configuration-options.md) article.
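+
+For example, a hedged sketch of what such a measurement can look like (the table and query are hypothetical placeholders):
+
+```sql
+-- EXPLAIN ANALYZE reports actual run time, row counts, and memory-related
+-- details such as sort/hash memory usage; BUFFERS adds buffer-cache statistics
+EXPLAIN (ANALYZE, BUFFERS)
+SELECT customer_id, count(*)
+FROM page_views
+GROUP BY customer_id;
+```
+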
+## Choosing a Hyperscale (Citus) tier
+
+> [!IMPORTANT]
+> The Hyperscale (Citus) basic tier is currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
+
+The sections above give an idea of how many vCores and how much RAM are needed for
+each use case. You can meet these demands through a choice between two
+Hyperscale (Citus) tiers: the basic tier and the standard tier.
+
+The basic tier uses a single database node to perform processing, while the
+standard tier allows more nodes. The tiers are otherwise identical, offering
+the same features. In some cases, a single node's vCores and disk space can be
+scaled to suffice, and in other cases the workload requires the cooperation of multiple
+nodes.
+
+For a comparison of the tiers, see the [basic
+tier](concepts-hyperscale-tiers.md) concepts page.
+ ## Next steps - [Scale a server group](howto-hyperscale-scale-grow.md)
postgresql Howto Hyperscale Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-upgrade.md
+
+ Title: Upgrade server group - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes how you can upgrade PostgreSQL and Citus in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 4/5/2021++
+# Upgrade Hyperscale (Citus) server group
+
+These instructions describe how to upgrade to a new major version of PostgreSQL
+on all server group nodes.
+
+## Test the upgrade first
+
+Upgrading PostgreSQL causes more changes than you might imagine, because
+Hyperscale (Citus) will also upgrade the [database
+extensions](concepts-hyperscale-extensions.md), including the Citus extension.
+We strongly recommend that you test your application with the new PostgreSQL and
+Citus version before you upgrade your production environment.
+
+A convenient way to test is to make a copy of your server group using
+[point-in-time
+restore](concepts-hyperscale-backup.md#point-in-time-restore-pitr). Upgrade the
+copy and test your application against it. Once you've verified everything
+works properly, upgrade the original server group.
+
+## Upgrade a server group in the Azure portal
+
+1. In the **Overview** section of a Hyperscale (Citus) server group, select the
+ **Upgrade** button.
+1. A dialog appears, showing the current version of PostgreSQL and Citus.
+ Choose a new PostgreSQL version in the **Upgrade to** list.
+1. Verify the value in **Citus version after upgrade** is what you expect.
+ This value changes based on the PostgreSQL version you selected.
+1. Select the **Upgrade** button to continue.
+
+## Next steps
+
+* Learn about [supported PostgreSQL versions](concepts-hyperscale-versions.md).
+* See [which extensions](concepts-hyperscale-extensions.md) are packaged with
+ each PostgreSQL version in a Hyperscale (Citus) server group.
postgresql Hyperscale Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale-preview-features.md
+
+ Title: Preview features in Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Updated list of features currently in preview
++++++ Last updated : 04/07/2021++
+# Preview features for PostgreSQL - Hyperscale (Citus)
+
+Azure Database for PostgreSQL - Hyperscale (Citus) offers
+previews for unreleased features. Preview versions are provided
+without a service level agreement, and aren't recommended for
+production workloads. Certain features might not be supported or
+might have constrained capabilities. For more information, see
+[Supplemental Terms of Use for Microsoft Azure
+Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Features currently in preview
+
+Here are the features currently available for preview:
+
+* **[Columnar storage](concepts-hyperscale-columnar.md)**.
+ Store selected tables' columns (rather than rows) contiguously
+ on disk. Supports on-disk compression. Good for analytic and
+ data warehousing workloads.
+* **[PostgreSQL 12 and 13](concepts-hyperscale-versions.md)**.
+ Use the latest database version in your server group.
+* **[Basic tier](concepts-hyperscale-tiers.md)**. Run a server
+ group using only a coordinator node and no worker nodes. An
+ economical way to do initial testing and development, and
+ handle small production workloads.
+* **[Read replicas](howto-hyperscale-read-replicas-portal.md)**
+ (currently same-region only). Any changes that happen to the
+ primary server group get reflected in its replica, and queries
+ against the replica cause no extra load on the original.
+ Replicas are a useful tool to improve performance for
+ read-only workloads.
+* **[Managed
+ PgBouncer](concepts-hyperscale-limits.md#managed-pgbouncer-preview)**.
+ A connection pooler that allows many clients to connect to
+ the server group at once, while limiting the number of active
+ connections. It satisfies connection requests while keeping
+ the coordinator node running smoothly.
+* **[PgAudit](concepts-hyperscale-audit.md)**. Provides detailed
+ session and object audit logging via the standard PostgreSQL
+ logging facility. It produces audit logs required to pass
+ certain government, financial, or ISO certification audits.
+
+### Available regions for preview features
+
+The pgAudit extension is available in all [regions supported by
+Hyperscale
+(Citus)](concepts-hyperscale-configuration-options.md#regions).
+The other preview features are available in **East US** only.
+
+## Does my server group have access to preview features?
+
+To determine if your Hyperscale (Citus) server group has preview features
+enabled, navigate to the server group's **Overview** page in the Azure portal.
+If you see the property **Tier: Basic (preview)** or **Tier: Standard
+(preview)** then your server group has access to preview features.
+
+### How to get access
+
+When creating a new Hyperscale (Citus) server group, check
+the box **Enable preview features.**
+
+## Contact us
+
+Let us know about your experience using preview features by emailing [Ask
+Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
+(This email address isn't a technical support channel. For technical problems,
+open a [support
+request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).)
postgresql Quickstart Create Hyperscale Basic Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/quickstart-create-hyperscale-basic-tier.md
+
+ Title: 'Quickstart: create a basic tier server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Get started with the Azure Database for PostgreSQL Hyperscale (Citus) basic tier.
++++++ Last updated : 04/07/2021
+#Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
++
+# Create a Hyperscale (Citus) basic tier server group in the Azure portal
+
+Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
+you use to run, manage, and scale highly available PostgreSQL databases in the
+cloud. Its [basic tier](concepts-hyperscale-tiers.md) is a convenient
+deployment option for initial development and testing.
+
+This quickstart shows you how to create a Hyperscale (Citus) basic tier
+server group using the Azure portal. You'll provision the server group
+and verify that you can connect to it to run queries.
+
+> [!IMPORTANT]
+> The Hyperscale (Citus) basic tier is currently in preview. This preview
+> version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](hyperscale-preview-features.md).
++
+## Next steps
+
+In this quickstart, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data.
+
+- Follow a tutorial to [build scalable multi-tenant
+ applications](./tutorial-design-database-hyperscale-multi-tenant.md)
+- Determine the best [initial
+ size](howto-hyperscale-scale-initial.md) for your server group
postgresql Reference Hyperscale Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/reference-hyperscale-functions.md
Previously updated : 08/10/2020 Last updated : 04/07/2021 # Functions in the Hyperscale (Citus) SQL API This section contains reference information for the user-defined functions
-provided by Hyperscale (Citus). These functions help in providing additional
-distributed functionality to Hyperscale (Citus) other than the standard SQL
-commands.
+provided by Hyperscale (Citus). These functions help in providing
+distributed functionality to Hyperscale (Citus).
> [!NOTE] >
SELECT create_distributed_function(
); ```
+### alter_columnar_table_set
+
+The alter_columnar_table_set() function changes settings on a [columnar
+table](concepts-hyperscale-columnar.md). Calling this function on a
+non-columnar table gives an error. All arguments except the table name are
+optional.
+
+To view current options for all columnar tables, consult this table:
+
+```postgresql
+SELECT * FROM columnar.options;
+```
+
+The default values for columnar settings for newly created tables can be
+overridden with these GUCs:
+
+* columnar.compression
+* columnar.compression_level
+* columnar.stripe_row_count
+* columnar.chunk_row_count
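+
+For example, a minimal sketch of overriding one of these defaults for the current session before creating new columnar tables (the value shown is illustrative):
+
+```sql
+-- Newly created columnar tables in this session default to pglz compression
+SET columnar.compression TO 'pglz';
+```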
+
+#### Arguments
+
+**table_name:** Name of the columnar table.
+
+**chunk_row_count:** (Optional) The maximum number of rows per chunk for
+newly inserted data. Existing chunks of data will not be changed and may have
+more rows than this maximum value. The default value is 10000.
+
+**stripe_row_count:** (Optional) The maximum number of rows per stripe for
+newly inserted data. Existing stripes of data will not be changed and may have
+more rows than this maximum value. The default value is 150000.
+
+**compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
+for newly inserted data. Existing data will not be recompressed or
+decompressed. The default and suggested value is zstd (if support has
+been compiled in).
+
+**compression_level:** (Optional) Valid settings are from 1 through 19. If the
+compression method does not support the level chosen, the closest level will be
+selected instead.
+
+#### Return value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT alter_columnar_table_set(
+ 'my_columnar_table',
+ compression => 'none',
+ stripe_row_count => 10000);
+```
+ ## Metadata / Configuration Information ### master\_get\_table\_metadata
distribution. In most cases, the precise mapping is a low-level detail that the
database administrator can ignore. However it can be useful to determine a row's shard, either for manual database maintenance tasks or just to satisfy curiosity. The `get_shard_id_for_distribution_column` function provides this
-info for hash- and range-distributed tables as well as reference tables. It
+info for hash-distributed, range-distributed, and reference tables. It
does not work for the append distribution. #### Arguments
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
Here are some key details about private endpoints:
- When creating a private endpoint, a read-only network interface is also created for the lifecycle of the resource. The interface is assigned dynamically private IP addresses from the subnet that maps to the private link resource. The value of the private IP address remains unchanged for the entire lifecycle of the private endpoint. -- The private endpoint must be deployed in the same region as the virtual network.
+- The private endpoint must be deployed in the same region and subscription as the virtual network.
- The private link resource can be deployed in a different region than the virtual network and private endpoint.
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-synapse-analytics.md
Title: 'How to scan Azure Synapse Analytics'
-description: This how to guide describes details of how to scan Azure Synapse Analytics.
+ Title: 'How to scan Dedicated SQL pools'
+description: This how to guide describes details of how to scan Dedicated SQL pools.
Last updated 10/22/2020
-# Register and scan Azure Synapse Analytics
+# Register and scan Dedicated SQL pools (formerly SQL DW)
-This article discusses how to register and scan an instance of Azure Synapse Analytics (formerly SQL DW) in Purview.
+> [!NOTE]
+> If you are looking to register and scan a dedicated SQL database within a Synapse workspace, you must follow instructions [here](register-scan-synapse-workspace.md).
+
+This article discusses how to register and scan an instance of Dedicated SQL pool (formerly SQL DW) in Purview.
## Supported capabilities
When authentication method selected is **SQL Authentication**, you need to get y
1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account) 1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to setup your scan
-## Register an Azure Synapse Analytics instance (formerly SQL DW)
+## Register a SQL dedicated pool (formerly SQL DW)
To register a new Azure Synapse Analytics server in your Data Catalog, do the following: 1. Navigate to your Purview account 1. Select **Sources** on the left navigation 1. Select **Register**
-1. On **Register sources**, select **Azure Synapse Analytics (formerly SQL DW)**
+1. On **Register sources**, select **SQL dedicated pool (formerly SQL DW)**
1. Select **Continue** On the **Register sources (Azure Synapse Analytics)** screen, do the following:
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-synapse-workspace.md
+
+ Title: 'How to scan Azure Synapse Workspaces'
+description: Learn how to scan a Synapse Workspace in your Azure Purview data catalog.
+++++ Last updated : 3/31/2021++
+# Register and scan Azure Synapse workspaces
+
+This article outlines how to register an Azure Synapse Workspace in Purview and set up a scan on it.
+
+## Supported capabilities
+
+Azure Synapse Workspace scans support capturing metadata and schema for dedicated and serverless SQL databases within them. It also classifies the data automatically based on system and custom classification rules.
+
+## Prerequisites
+
+- Before registering data sources, create an Azure Purview account. For more information on creating a Purview account, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
+- You need to be an Azure Purview Data Source Admin
+- Setting up authentication as described in the sections below
+
+### Setting up authentication for enumerating dedicated SQL database resources under a Synapse Workspace
+
+1. Navigate to the **Resource group** or **Subscription** that the Synapse workspace is in, in the Azure portal.
+1. Select **Access Control (IAM)** from the left navigation menu
+1. You must be an owner or user access administrator to add a role on the Resource group or Subscription. Select the *+ Add* button.
+1. Set the **Reader** role and enter your Azure Purview account name (which represents its MSI) in the **Select** input box. Click *Save* to finish the role assignment.
+1. Follow steps 2 to 4 above to also add **Storage blob data reader** Role for the Azure Purview MSI on the resource group or subscription that the Synapse workspace is in.
+
+### Setting up authentication for enumerating serverless SQL database resources under a Synapse Workspace
+
+> [!NOTE]
+> You must be a **Synapse administrator** on the workspace to run these commands. Learn more about Synapse permissions [here](../synapse-analytics/security/how-to-set-up-access-control.md).
+
+1. Navigate to your Synapse workspace
+1. Navigate to the **Data** section and to one of your serverless SQL databases
+1. Click on the ellipses icon and start a New SQL script
+1. Add the Azure Purview account MSI (represented by the account name) as **sysadmin** on the serverless SQL databases by running the command below in your SQL script:
+ ```sql
+ CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;
+ ALTER SERVER ROLE sysadmin ADD MEMBER [PurviewAccountName];
+ ```
+
+### Setting up authentication to scan resources under a Synapse workspace
+
+There are two ways to set up authentication for an Azure Synapse source:
+
+- Managed Identity
+- Service Principal
+
+#### Using Managed identity for Dedicated SQL databases
+
+1. Navigate to your **Synapse workspace**
+1. Navigate to the **Data** section and to one of your dedicated SQL databases
+1. Click on the ellipses icon and start a New SQL script
+1. Add the Azure Purview account MSI (represented by the account name) as **db_owner** on the dedicated SQL database by running the command below in your SQL script:
+
+ ```sql
+ CREATE USER [PurviewAccountName] FROM EXTERNAL PROVIDER
+ GO
+
+ EXEC sp_addrolemember 'db_owner', [PurviewAccountName]
+ GO
+ ```
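+
+    A minimal way to verify the role assignment afterwards (a sketch using the standard SQL catalog views available in dedicated SQL pools; run it in the same dedicated SQL database):
+
+    ```sql
+    -- Lists members of db_owner; the Purview account name should appear
+    SELECT m.name AS member_name
+    FROM sys.database_role_members AS rm
+    JOIN sys.database_principals AS r ON rm.role_principal_id = r.principal_id
+    JOIN sys.database_principals AS m ON rm.member_principal_id = m.principal_id
+    WHERE r.name = 'db_owner';
+    ```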
+#### Using Managed identity for Serverless SQL databases
+
+1. Navigate to your **Synapse workspace**
+1. Navigate to the **Data** section and to one of your serverless SQL databases
+1. Click on the ellipses icon and start a New SQL script
+1. Add the Azure Purview account MSI (represented by the account name) as **sysadmin** on the serverless SQL databases by running the command below in your SQL script:
+ ```sql
+ CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;
+ ALTER SERVER ROLE sysadmin ADD MEMBER [PurviewAccountName];
+ ```
+
+#### Using Service Principal for Dedicated SQL databases
+
+> [!NOTE]
+> You must first set up a new **credential** of type Service Principal by following instructions [here](manage-credentials.md).
+
+1. Navigate to your **Synapse workspace**
+1. Navigate to the **Data** section and to one of your dedicated SQL databases
+1. Click on the ellipses icon and start a New SQL script
+1. Add the **Service Principal ID** as **db_owner** on the dedicated SQL database by running the command below in your SQL script:
+
+ ```sql
+ CREATE USER [ServicePrincipalID] FROM EXTERNAL PROVIDER
+ GO
+
+ EXEC sp_addrolemember 'db_owner', [ServicePrincipalID]
+ GO
+ ```
+
+#### Using Service Principal for Serverless SQL databases
+
+1. Navigate to your **Synapse workspace**
+1. Navigate to the **Data** section and to one of your serverless SQL databases
+1. Click on the ellipses icon and start a New SQL script
+1. Add the **Service Principal ID** as **sysadmin** on the serverless SQL databases by running the command below in your SQL script:
+ ```sql
+ CREATE LOGIN [ServicePrincipalID] FROM EXTERNAL PROVIDER;
+ ALTER SERVER ROLE sysadmin ADD MEMBER [ServicePrincipalID];
+ ```
+
+> [!NOTE]
+> You must set up authentication on each dedicated SQL database within your Synapse workspace that you intend to register and scan. The serverless SQL database permissions mentioned above apply to all serverless SQL databases in your workspace, so you only have to run those commands once.
+
+## Register an Azure Synapse Source
+
+To register a new Azure Synapse Source in your data catalog, do the following:
+
+1. Navigate to your Purview account
+1. Select **Sources** on the left navigation
+1. Select **Register**
+1. On **Register sources**, select **Azure Synapse Analytics (multiple)**
+1. Select **Continue**
+
+ :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source.png" alt-text="Set up Azure Synapse source":::
+
+On the **Register sources (Azure Synapse Analytics)** screen, do the following:
+
+1. Enter a **Name** that the data source will be listed with in the Catalog.
+1. Optionally choose a **subscription** to filter down to.
+1. **Select a Synapse workspace name** from the dropdown. The SQL endpoints get automatically filled based on your workspace selection.
+1. Select a **collection** or create a new one (Optional)
+1. **Finish** to register the data source
+
+ :::image type="content" source="media/register-scan-synapse-workspace/register-synapse-source-details.png" alt-text="Fill details for Azure Synapse source":::
+
+## Creating and running a scan
+
+To create and run a new scan, do the following:
+
+1. Navigate to the **Sources** section.
+
+1. Select the data source that you registered.
+
+1. Click on **view details** and select **+ New scan**, or use the scan quick action icon on the source tile.
+
+1. Fill in the *name* and select all the types of resources you want to scan within this source. **SQL Database** is currently the only type we support within a Synapse workspace.
+
+ :::image type="content" source="media/register-scan-synapse-workspace/synapse-scan-setup.png" alt-text="Azure Synapse Source scan":::
+
+1. Select the **credential** to connect to the resources within your data source.
+
+1. Within each type you can select to either scan all the resources or a subset of them by name.
+1. Click **Continue** to proceed.
+
+1. Select a **scan rule set** of type Azure Synapse SQL. You can also create scan rule sets inline.
+
+1. Choose your scan trigger. You can schedule it to run **weekly/monthly** or **once**.
+
+1. Review your scan and select **Save** to complete setup.
+
+## Viewing your scans and scan runs
+
+1. View source details by clicking on **view details** on the tile under the sources section.
+
+ :::image type="content" source="media/register-scan-synapse-workspace/synapse-source-details.png" alt-text="Azure Synapse Source details":::
+
+1. View scan run details by navigating to the **scan details** page.
+    1. The *status bar* is a brief summary of the running status of the child resources. It's displayed on the workspace-level scan.
+    1. Green means successful, red means failed, and grey means the scan is still in progress.
+    1. You can click into each scan to view more fine-grained details.
+
+ :::image type="content" source="media/register-scan-synapse-workspace/synapse-scan-details.png" alt-text="Azure Synapse scan details" lightbox="media/register-scan-synapse-workspace/synapse-scan-details.png":::
+
+1. View a summary of recent failed scan runs at the bottom of the source details page. You can also click in to view more granular details about these runs.
+
+## Manage your scans - edit, delete, or cancel
+To manage or delete a scan, do the following:
+
+- Navigate to the management center. Select **Data sources** under the **Sources and scanning** section, and then select the desired data source.
+
+- Select the scan you would like to manage. You can edit the scan by selecting Edit.
+
+- You can delete your scan by selecting Delete.
+- If a scan is running, you can cancel it as well.
+
+## Next steps
+
+- [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
role-based-access-control Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-rest.md
Title: Assign Azure roles using the REST API - Azure RBAC description: Learn how to grant access to Azure resources for users, groups, service principals, or managed identities using the REST API and Azure role-based access control (Azure RBAC).
rest-api ms.devlang: na Previously updated : 02/15/2021 Last updated : 04/06/2021
The following shows an example of the output:
} ```
+### New service principal
+
+If you create a new service principal and immediately try to assign a role to that service principal, that role assignment can fail in some cases. For example, if you create a new managed identity and then try to assign a role to that service principal, the role assignment might fail. The reason for this failure is likely a replication delay. The service principal is created in one region; however, the role assignment might occur in a different region that hasn't replicated the service principal yet.
+
+To address this scenario, you should set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later.
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId1}/providers/microsoft.authorization/roleassignments/{roleAssignmentId1}?api-version=2018-09-01-preview
+```
+
+```json
+{
+ "properties": {
+ "roleDefinitionId": "/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}",
+ "principalId": "{principalId}",
+ "principalType": "ServicePrincipal"
+ }
+}
+```
+ ## Next steps - [List Azure role assignments using the REST API](role-assignments-list-rest.md)
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/transfer-subscription.md
ms.devlang: na Previously updated : 12/10/2020 Last updated : 04/06/2021
Several Azure resources have a dependency on a subscription or a directory. Depe
| System-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must disable and re-enable the managed identities. You must re-create the role assignments. | | User-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must delete, re-create, and attach the managed identities to the appropriate resource. You must re-create the role assignments. | | Azure Key Vault | Yes | Yes | [List Key Vault access policies](#list-key-vaults) | You must update the tenant ID associated with the key vaults. You must remove and add new access policies. |
-| Azure SQL databases with Azure AD authentication integration enabled | Yes | No | [Check Azure SQL databases with Azure AD authentication](#list-azure-sql-databases-with-azure-ad-authentication) | |
+| Azure SQL databases with Azure AD authentication integration enabled | Yes | No | [Check Azure SQL databases with Azure AD authentication](#list-azure-sql-databases-with-azure-ad-authentication) | You cannot transfer an Azure SQL database with Azure AD authentication enabled to a different directory. For more information, see [Use Azure Active Directory authentication](../azure-sql/database/authentication-aad-overview.md). |
| Azure Storage and Azure Data Lake Storage Gen2 | Yes | Yes | | You must re-create any ACLs. | | Azure Data Lake Storage Gen1 | Yes | Yes | | You must re-create any ACLs. | | Azure Files | Yes | Yes | | You must re-create any ACLs. |
-| Azure File Sync | Yes | Yes | | |
+| Azure File Sync | Yes | Yes | | The storage sync service and/or storage account can be moved to a different directory. For more information, see [Frequently asked questions (FAQ) about Azure Files](../storage/files/storage-files-faq.md#azure-file-sync) |
| Azure Managed Disks | Yes | Yes | | If you are using Disk Encryption Sets to encrypt Managed Disks with customer-managed keys, you must disable and re-enable the system-assigned identities associated with Disk Encryption Sets. And you must re-create the role assignments i.e. again grant required permissions to Disk Encryption Sets in the Key Vaults. |
-| Azure Kubernetes Service | Yes | Yes | | |
+| Azure Kubernetes Service | Yes | No | | You cannot transfer your AKS cluster and its associated resources to a different directory. For more information, see [Frequently asked questions about Azure Kubernetes Service (AKS)](../aks/faq.md) |
| Azure Policy | Yes | No | All Azure Policy objects, including custom definitions, assignments, exemptions, and compliance data. | You must [export](../governance/policy/how-to/export-resources.md), import, and re-assign definitions. Then, create new policy assignments and any needed [policy exemptions](../governance/policy/concepts/exemption-structure.md). |
-| Azure Active Directory Domain Services | Yes | No | | |
+| Azure Active Directory Domain Services | Yes | No | | You cannot transfer an Azure AD Domain Services managed domain to a different directory. For more information, see [Frequently asked questions (FAQs) about Azure Active Directory (AD) Domain Services](../active-directory-domain-services/faqs.md) |
| App registrations | Yes | Yes | | | > [!WARNING]
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/troubleshooting.md
Title: Troubleshoot Azure RBAC description: Troubleshoot issues with Azure role-based access control (Azure RBAC). ms.assetid: df42cca2-02d6-4f3c-9d56-260e1eb7dc44
na ms.devlang: na Previously updated : 11/10/2020 Last updated : 04/06/2021 - # Troubleshoot Azure RBAC
$ras.Count
```azurecli az role assignment create --assignee-object-id 11111111-1111-1111-1111-111111111111 --role "Contributor" --scope "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}" ```+
+- If you create a new service principal and immediately try to assign a role to that service principal, that role assignment can fail in some cases.
+
+ To address this scenario, you should set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later. For more information, see [Assign Azure roles to a new service principal using the REST API](role-assignments-rest.md#new-service-principal) or [Assign Azure roles to a new service principal using Azure Resource Manager templates](role-assignments-template.md#new-service-principal)
+ - If you attempt to remove the last Owner role assignment for a subscription, you might see the error "Cannot delete the last RBAC admin assignment." Removing the last Owner role assignment for a subscription is not supported to avoid orphaning the subscription. If you want to cancel your subscription, see [Cancel your Azure subscription](../cost-management-billing/manage/cancel-azure-subscription.md). ## Problems with custom roles -- If you need steps for how to create a custom role, see the custom role tutorials using the [Azure portal](custom-roles-portal.md) (currently in preview), [Azure PowerShell](tutorial-custom-role-powershell.md), or [Azure CLI](tutorial-custom-role-cli.md).
+- If you need steps for how to create a custom role, see the custom role tutorials using the [Azure portal](custom-roles-portal.md), [Azure PowerShell](tutorial-custom-role-powershell.md), or [Azure CLI](tutorial-custom-role-cli.md).
- If you are unable to update an existing custom role, check that you are currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinition/write` permission such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator). - If you are unable to delete a custom role and get the error message "There are existing role assignments referencing role (code: RoleDefinitionHasAssignments)", then there are role assignments still using the custom role. Remove those role assignments and try to delete the custom role again. - If you get the error message "Role definition limit exceeded. No more role definitions can be created (code: RoleDefinitionLimitExceeded)" when you try to create a new custom role, delete any custom roles that aren't being used. Azure supports up to **5000** custom roles in a directory. (For Azure Germany and Azure China 21Vianet, the limit is 2000 custom roles.)
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/resource-partners-knowledge-mining.md
Get expert help from Microsoft partners who build end-to-end solutions that incl
| Partner | Description | Product link |
|---------|-------------|--------------|
| ![Agolo](media/resource-partners/agolo-logo.png "Agolo company logo") | [**Agolo**](https://www.agolo.com) is the leading summarization engine for enterprise use. Agolo's AI platform analyzes hundreds of thousands of media articles, research documents and proprietary information to give each customer a summary of key points specific to their areas of interest. </br></br>Our partnership with Microsoft combines the power and adaptability of the Azure Cognitive Search platform, integrated with Agolo summarization. Rather than typical search engine snippets, the results page displays contextually relevant Agolo summaries, instantly enabling the user to determine the relevance of that document to their specific needs. The impact of summarization-powered search is that users find more relevant content faster, enabling them to do their job more effectively and gaining a competitive advantage. | [Product page](https://www.agolo.com/microsoft-azure-cognitive-search) |
+| ![BA Insight](media/resource-partners/ba-insight-logo.png "BA Insights company logo") | [**BA Insight Search for Workplace**](https://www.bainsight.com/azure-search/) is a complete enterprise search solution powered by Azure Cognitive Search. It is the first of its kind solution, bringing the internet to enterprises for secure, "askable", powerful search to help organizations get a return on information. It delivers a web-like search experience, connects to 80+ enterprise systems and provides automated and intelligent meta tagging. | [Product page](https://www.bainsight.com/azure-search/) |
| ![BlueGranite](media/resource-partners/blue-granite-full-color.png "Blue Granite company logo") | [**BlueGranite**](https://www.bluegranite.com/) offers 25 years of experience in Modern Business Intelligence, Data Platforms, and AI solutions across multiple industries. Their Knowledge Mining services enable organizations to obtain unique insights from structured and unstructured data sources. Modular AI capabilities perform searches on numerous file types to index data and associate that data with more traditional data sources. Analytics tools extract patterns and trends from the enriched data and showcase results to users at all levels. | [Product page](https://www.bluegranite.com/knowledge-mining) | | ![Neal Analytics](media/resource-partners/neal-analytics-logo.png "Neal Analytics company logo") | [**Neal Analytics**](https://nealanalytics.com/) offers over 10 years of cloud, data, and AI expertise on Azure. Its experts have recognized in-depth expertise across the Azure AI and ML services. Neal can help customers customize and implement Cognitive Search across a wide variety of use cases. Neal Analytics expertise ranges from enterprise-level search, form, and process automation to domain mapping for data extraction and analytics, plagiarism detection, and more. | [Product page](https://go.nealanalytics.com/cognitive-search)|
-| ![Neudesic](media/resource-partners/neudesic-logo.png "Neudesic company logo") | [Neudesic](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California. | [Product page](https://www.neudesic.com/services/digital-workplace/document-intelligence-platform-schedule-demo)|
+| ![Neudesic](media/resource-partners/neudesic-logo.png "Neudesic company logo") | [**Neudesic**](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California. | [Product page](https://www.neudesic.com/services/digital-workplace/document-intelligence-platform-schedule-demo)|
| ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "OrangeNXT company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.</br></br>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search is advanced custom cognitive skills for interpreting and correlating selected data.</br></br>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)|
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-capacity-planning.md
Previously updated : 01/15/2021 Last updated : 04/06/2021 # Estimate and manage capacity of an Azure Cognitive Search service
In Cognitive Search, shard management is an implementation detail and non-config
+ Autocomplete anomalies: Autocomplete queries, where matches are made on the first several characters of a partially entered term, accept a fuzzy parameter that forgives small deviations in spelling. For autocomplete, fuzzy matching is constrained to terms within the current shard. For example, if a shard contains "Microsoft" and a partial term of "micor" is entered, the search engine will match on "Microsoft" in that shard, but not in other shards that hold the remaining parts of the index.
-## How to evaluate capacity requirements
+## Approaching estimation
-Capacity and the costs of running the service go hand in hand. Tiers impose limits on two levels: storage and content (a count of indexes on a service, for example). It's important to consider both because whichever limit you reach first is the effective limit.
+Capacity and the costs of running the service go hand in hand. Tiers impose limits on two levels: content (a count of indexes on a service, for example) and storage. It's important to consider both because whichever limit you reach first is the effective limit.
-Quantities of indexes and other objects are typically dictated by business and engineering requirements. For example, you might have multiple versions of the same index for active development, testing, and production.
+Counts of indexes and other objects are typically dictated by business and engineering requirements. For example, you might have multiple versions of the same index for active development, testing, and production.
Storage needs are determined by the size of the indexes you expect to build. There are no solid heuristics or generalities that help with estimates. The only way to determine the size of an index is to [build one](search-what-is-an-index.md). Its size will be based on imported data, text analysis, and index configuration such as whether you enable suggesters, filtering, and sorting. For full text search, the primary data structure is an [inverted index](https://en.wikipedia.org/wiki/Inverted_index) structure, which has different characteristics than source data. For an inverted index, size and complexity are determined by content, not necessarily by the amount of data that you feed into it. A large data source with high redundancy could result in a smaller index than a smaller dataset that contains highly variable content. So it's rarely possible to infer index size based on the size of the original dataset.
-> [!NOTE]
+Attributes on the index, such as enabling filters and sorting, will impact storage requirements. The use of suggesters also has storage implications. For more information, see [Attributes and index size](search-what-is-an-index.md#index-size).
+
+> [!NOTE]
> Even though estimating future needs for indexes and storage can feel like guesswork, it's worth doing. If a tier's capacity turns out to be too low, you'll need to provision a new service at a higher tier and then [reload your indexes](search-howto-reindex.md). There's no in-place upgrade of a service from one tier to another. >
Dedicated resources can accommodate larger sampling and processing times for mor
+ Start high, at S2 or even S3, if testing includes large-scale indexing and query loads. + Start with Storage Optimized, at L1 or L2, if you're indexing a large amount of data and query load is relatively low, as with an internal business application.
-1. [Build an initial index](search-what-is-an-index.md) to determine how source data translates to an index. This is the only way to estimate index size.
+1. [Build an initial index](search-what-is-an-index.md) to determine how source data translates to an index. This is the only way to estimate index size.
1. [Monitor storage, service limits, query volume, and latency](search-monitor-usage.md) in the portal. The portal shows you queries per second, throttled queries, and search latency. All of these values can help you decide if you selected the right tier.
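The same numbers surfaced in the portal can also be retrieved programmatically. The following Azure CLI sketch is illustrative only; the resource ID is a placeholder, and the metric names are assumed from the metrics currently exposed by the service (SearchLatency, SearchQueriesPerSecond, ThrottledSearchQueriesPercentage).

```azurecli
# Pull recent query metrics for a search service (replace the placeholder resource ID).
az monitor metrics list \
  --resource "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Search/searchServices/{serviceName}" \
  --metric SearchLatency SearchQueriesPerSecond ThrottledSearchQueriesPercentage \
  --interval PT1H
```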
The Storage Optimized tiers are useful for large data workloads, supporting more
**Service-level agreements**
-The Free tier and preview features don't provide [service-level agreements (SLAs)](https://azure.microsoft.com/support/legal/sla/search/v1_0/). For all billable tiers, SLAs take effect when you provision sufficient redundancy for your service. You need to have two or more replicas for query (read) SLAs. You need to have three or more replicas for query and indexing (read-write) SLAs. The number of partitions doesn't affect SLAs.
+The Free tier and preview features are not covered by [service-level agreements (SLAs)](https://azure.microsoft.com/support/legal/sla/search/v1_0/). For all billable tiers, SLAs take effect when you provision sufficient redundancy for your service. You need to have two or more replicas for query (read) SLAs. You need to have three or more replicas for query and indexing (read-write) SLAs. The number of partitions doesn't affect SLAs.
## Tips for capacity planning
The Free tier and preview features don't provide [service-level agreements (SLAs
+ Remember that the only downside of under provisioning is that you might have to tear down a service if actual requirements are greater than your predictions. To avoid service disruption, you would create a new service at a higher tier and run it side by side until all apps and requests target the new endpoint.
-## When to add partitions and replicas
+## When to add capacity
-Initially, a service is allocated a minimal level of resources consisting of one partition and one replica.
+Initially, a service is allocated a minimal level of resources consisting of one partition and one replica. The [tier you choose](search-sku-tier.md) determines partition size and speed, and each tier is optimized around a set of characteristics that fit various scenarios. If you choose a higher-end tier, you might need fewer partitions than if you go with S1. One of the questions you'll need to answer through self-directed testing is whether a larger and more expensive partition yields better performance than two cheaper partitions on a service provisioned at a lower tier.
A single service must have sufficient resources to handle all workloads (indexing and queries). Neither workload runs in the background. You can schedule indexing for times when query requests are naturally less frequent, but the service will not otherwise prioritize one task over another. Additionally, a certain amount of redundancy smooths out query performance when services or nodes are updated internally.
-As a general rule, search applications tend to need more replicas than partitions, particularly when the service operations are biased toward query workloads. The section on [high availability](#HA) explains why.
+Some guidelines for determining whether to add capacity include:
-The [tier you choose](search-sku-tier.md) determines partition size and speed, and each tier is optimized around a set of characteristics that fit various scenarios. If you choose a higher-end tier, you might need fewer partitions than if you go with S1. One of the questions you'll need to answer through self-directed testing is whether a larger and more expensive partition yields better performance than two cheaper partitions on a service provisioned at a lower tier.
++ Meeting the high availability criteria for the service level agreement
++ The frequency of HTTP 503 errors is increasing
++ Large query volumes are expected
+
+As a general rule, search applications tend to need more replicas than partitions, particularly when the service operations are biased toward query workloads. Each replica is a copy of your index, allowing the service to load balance requests against multiple copies. All load balancing and replication of an index is managed by Azure Cognitive Search and you can alter the number of replicas allocated for your service at any time. You can allocate up to 12 replicas in a Standard search service and 3 replicas in a Basic search service. Replica allocation can be made either from the [Azure portal](search-create-service-portal.md) or one of the programmatic options.
Search applications that require near real-time data refresh will need proportionally more partitions than replicas. Adding partitions spreads read/write operations across a larger number of compute resources. It also gives you more disk space for storing additional indexes and documents.
-Larger indexes take longer to query. As such, you might find that every incremental increase in partitions requires a smaller but proportional increase in replicas. The complexity of your queries and query volume will factor into how quickly query execution is turned around.
+Finally, larger indexes take longer to query. As such, you might find that every incremental increase in partitions requires a smaller but proportional increase in replicas. The complexity of your queries and query volume will factor into how quickly query execution is turned around.
> [!NOTE] > Adding more replicas or partitions increases the cost of running the service, and can introduce slight variations in how results are ordered. Be sure to check the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to understand the billing implications of adding more nodes. The [chart below](#chart) can help you cross-reference the number of search units required for a specific configuration. For more information on how additional replicas impact query processing, see [Ordering results](search-pagination-page-layout.md#ordering-results).
-## How to allocate replicas and partitions
+<a name="adjust-capacity"></a>
+
+## Add or reduce replicas and partitions
1. Sign in to the [Azure portal](https://portal.azure.com/) and select the search service.
Larger indexes take longer to query. As such, you might find that every incremen
:::image type="content" source="media/search-capacity-planning/3-save-confirm.png" alt-text="Save changes" border="true":::
- Changes in capacity can take up to several hours to complete. You cannot cancel once the process has started and there is no real-time monitoring for replica and partition adjustments. However, the following message remains visible while changes are underway.
+ Changes in capacity can take anywhere from 15 minutes up to several hours to complete. You cannot cancel once the process has started and there is no real-time monitoring for replica and partition adjustments. However, the following message remains visible while changes are underway.
:::image type="content" source="media/search-capacity-planning/4-updating.png" alt-text="Status message in the portal" border="true":::
SUs, pricing, and capacity are explained in detail on the Azure website. For mor
> The number of replicas and partitions divides evenly into 12 (specifically, 1, 2, 3, 4, 6, 12). This is because Azure Cognitive Search pre-divides each index into 12 shards so that it can be spread in equal portions across all partitions. For example, if your service has three partitions and you create an index, each partition will contain four shards of the index. How Azure Cognitive Search shards an index is an implementation detail, subject to change in future releases. Although the number is 12 today, you shouldn't expect that number to always be 12 in the future. >
-<a id="HA"></a>
-
-## High availability
-
-Because it's easy and relatively fast to scale up, we generally recommend that you start with one partition and one or two replicas, and then scale up as query volumes build. Query workloads run primarily on replicas. If you need more throughput or high availability, you will probably require additional replicas.
-
-General recommendations for high availability are:
-
-+ Two replicas for high availability of read-only workloads (queries)
-
-+ Three or more replicas for high availability of read/write workloads (queries plus indexing as individual documents are added, updated, or deleted)
-
-Service level agreements (SLA) for Azure Cognitive Search are targeted at query operations and at index updates that consist of adding, updating, or deleting documents.
-
-Basic tier tops out at one partition and three replicas. If you want the flexibility to immediately respond to fluctuations in demand for both indexing and query throughput, consider one of the Standard tiers. If you find your storage requirements are growing much more rapidly than your query throughput, consider one of the Storage Optimized tiers.
-
-## About queries per second (QPS)
-
-Due to the large number of factors that go into query performance, Microsoft doesn't publish expected QPS numbers. QPS estimates must be developed independently by every customer using the service tier, configuration, index, and query constructs that are valid for your application. Index size and complexity, query size and complexity, and the amount of traffic are primary determinants of QPS. There is no way to offer meaningful estimates when such factors are unknown.
-
-Estimates are more predictable when calculated on services running on dedicated resources (Basic and Standard tiers). You can estimate QPS more closely because you have control over more of the parameters. For guidance on how to approach estimation, see [Azure Cognitive Search performance and optimization](search-performance-optimization.md).
-
-For the Storage Optimized tiers (L1 and L2), you should expect a lower query throughput and higher latency than the Standard tiers.
- ## Next steps > [!div class="nextstepaction"]
-> [How to estimate and manage costs](search-sku-manage-costs.md)
+> [Manage costs](search-sku-manage-costs.md)
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-create-service-portal.md
Title: 'Create a search service in the portal'
-description: In this portal quickstart, learn how to set up an Azure Cognitive Search resource in the Azure portal. Choose resource groups, regions, and SKU or pricing tier.
+description: Learn how to set up an Azure Cognitive Search resource in the Azure portal. Choose resource groups, regions, and the SKU or pricing tier.
-+ Last updated 02/15/2021
-# Quickstart: Create an Azure Cognitive Search service in the portal
+# Create an Azure Cognitive Search service in the portal
[Azure Cognitive Search](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps. You can integrate it easily with other Azure services that provide data or additional processing, with apps on network servers, or with software running on other cloud platforms.
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-monitor-indexers.md
Last updated 01/28/2021
-# How to monitor Azure Cognitive Search indexer status and results
+# Monitor indexer status and results in Azure Cognitive Search
You can monitor indexer processing in the Azure portal, or programmatically through REST calls or an Azure SDK. In addition to status about the indexer itself, you can review start and end times, and detailed errors and warnings from a particular run.
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-overview.md
You can use an indexer as the sole means for data ingestion, or as part of a com
|-|| | Single data source | This pattern is the simplest: one data source is the sole content provider for a search index. From the source, you'll identify one field containing unique values to serve as the document key in the search index. The unique value will be used as an identifier. All other source fields are mapped implicitly or explicitly to corresponding fields in an index. </br></br>An important takeaway is that the value of a document key originates from source data. A search service does not generate key values. On subsequent runs, incoming documents with new keys are added, while incoming documents with existing keys are either merged or overwritten, depending on whether index fields are null or populated. | | Multiple data sources | An index can accept content from multiple sources, where each run brings new content from a different source. </br></br>One outcome might be an index that gains documents after each indexer run, with entire documents created in full from each source. For example, documents 1-100 are from Blob storage, documents 101-200 are from Azure SQL, and so forth. The challenge for this scenario lies in designing an index schema that works for all incoming data, and a document key structure that is uniform in the search index. Natively, the values that uniquely identify a document are metadata_storage_path in a blob container and a primary key in a SQL table. You can imagine that one or both sources must be amended to provide key values in a common format, regardless of content origin. For this scenario, you should expect to perform some level of pre-processing to homogenize the data so that it can be pulled into a single index. </br></br>An alternative outcome might be search documents that are partially populated on the first run, and then further populated by subsequent runs to bring in values from other sources. For example, fields 1-10 are from Blob storage, 11-20 from Azure SQL, and so forth. The challenge of this pattern is making sure that each indexing run is targeting the same document. Merging fields into an existing document requires a match on the document key. For a demonstration of this scenario, see [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). |
-| Multiple indexers | If you're using multiple data sources, you might also need multiple indexers if you need to vary run time parameters, the schedule, or field mappings. Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer cannot merge values from multiple runs into the same field.</br></br>Another multi-indexer use case is [cross-region scale out of Cognitive Search](search-performance-optimization.md#use-indexers-for-updating-content-on-multiple-services). You might have copies of the same search index in different regions. To synchronize search index content, you could have multiple indexers pulling from the same data source, where each indexer targets a different search index.</br></br>[Parallel indexing](search-howto-large-index.md#parallel-indexing) of very large data sets also requires a multi-indexer strategy. Each indexer targets a subset of the data. |
+| Multiple indexers | If you're using multiple data sources, you might also need multiple indexers if you need to vary run time parameters, the schedule, or field mappings. Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer cannot merge values from multiple runs into the same field.</br></br>Another multi-indexer use case is [cross-region scale out of Cognitive Search](search-performance-optimization.md#data-sync). You might have copies of the same search index in different regions. To synchronize search index content, you could have multiple indexers pulling from the same data source, where each indexer targets a different search index.</br></br>[Parallel indexing](search-howto-large-index.md#parallel-indexing) of very large data sets also requires a multi-indexer strategy. Each indexer targets a subset of the data. |
| Content transformation | Cognitive Search supports optional [AI enrichment](cognitive-search-concept-intro.md) behaviors that add image analysis and natural language processing to create new searchable content and structure. AI enrichment is indexer-driven, through an attached [skillset](cognitive-search-working-with-skillsets.md). To perform AI enrichment, the indexer still needs an index and an Azure data source, but in this scenario, adds skillset processing to indexer execution. | <a name="supported-data-sources"></a>
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-manage.md
tags: azure-portal Previously updated : 06/24/2020 Last updated : 04/06/2021 # Service administration for Azure Cognitive Search in the Azure portal
Last updated 06/24/2020
> * [Portal](search-manage.md) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)>
-Azure Cognitive Search is a fully managed, cloud-based search service used for building a rich search experience into custom apps. This article covers the service administration tasks that you can perform in the [Azure portal](https://portal.azure.com) for a search service that you've already provisioned. Service administration is lightweight by design, limited to the following tasks:
+Azure Cognitive Search is a fully managed, cloud-based search service used for building a rich search experience into custom apps. This article covers the administration tasks that you can perform in the [Azure portal](https://portal.azure.com) for a search service that you've already created. The portal allows you to perform all [management tasks](#management-tasks) related to the service, as well as content management and exploration. As such, the portal provides broad spectrum access to all aspects of search service operations.
-* Check storage using the mid-page **Usage** link.
-* Check query volumes and latency using the mid-page **Monitoring** link, and whether requests were throttled.
-* Manage access using the **Keys** page to the left.
-* Adjust capacity using the **Scale** page to the left.
+Each search service is managed as a standalone resource. The following image shows the portal pages for a single free search service called "demo-search-svc". Although you might be accustomed to using Azure PowerShell or Azure CLI for service management, it makes sense to become familiar with the tools and capabilities that the portal pages provide. Some tasks are just easier and faster to perform in the portal than through code.
-The same tasks performed in the portal can also be handled programmatically through the [Management APIs](/rest/api/searchmanagement/) and [Az.Search PowerShell module](search-manage-powershell.md). Administrative tasks are fully represented across portal and programmatic interfaces. There is no specific administrative task that is available in only one modality.
+## Overview (home) page
-Azure Cognitive Search leverages other Azure services for deeper monitoring and management. By itself, the only data stored with a search service is content (indexes, indexer and data source definitions, and other objects). Metrics reported out to portal pages are pulled from internal logs on a rolling 30-day cycle. For user-controlled log retention and additional events, you will need [Azure Monitor](../azure-monitor/index.yml).
+The overview page is the "home" page of each service. Below, the areas on the screen enclosed in red boxes indicate tasks, tools, and tiles that you might use often, especially if you are new to the service.
-## Fixed service properties
-Several aspects of a search service are determined when the service is provisioned and cannot be changed later:
+| Area | Description |
+||-|
+| 1 | The **Essentials** section shows you service properties including the endpoint used when setting up connections. It also shows you tier, replica, and partition counts at a glance. |
+| 2 | At the top of the page are a series of commands for invoking interactive tools or managing the service. Both [Import data](search-get-started-portal.md) and [Search explorer](search-explorer.md) are commonly used for prototyping and exploration. |
+| 3 | Below the **Essentials** section is a series of tabbed subpages for quick access to usage statistics, service health metrics, and access to all of the existing indexes, indexers, data sources, and skillsets on your service. If you select an index or another object, additional pages become available to show object composition, settings, and status (if applicable). |
+| 4 | To the left, you can access links that open additional pages for accessing API keys used on connections, configuring the service, configuring security, monitoring operations, automating tasks, and getting support. |
-* Service name (you cannot rename a service)
-* Service location (you cannot currently move an intact service to another region)
-* Maximum replica and partition counts (determined by the tier, Basic or Standard)
+### Read-only service properties
-If you started with Basic with its maximum of one partition, and you now need more partitions, you will need to [create a new service](search-create-service-portal.md) at a higher tier and recreate your content on the new service.
+Several aspects of a search service are determined when the service is provisioned and cannot be changed:
-## Administrator rights
+* Service name (you cannot rename a search service)
+* Service location (you cannot easily move an intact search service to another region; although there is a template, moving the content is a manual process)
+* Service tier (you cannot switch from S1 to S2, for example, without creating a new service)
-Provisioning or decommissioning the service itself can be done by an Azure subscription administrator or co-administrator.
+## Management tasks
-Regarding access to the endpoint, anyone with access to the service URL and an api-key has access to content. For more information about keys, see [Manage the api-keys](search-security-api-keys.md).
+Service administration is lightweight by design, and is often defined by the tasks you can perform relative to the service itself:
-* Read-only access to the service is query rights, typically granted to a client application by giving it the URL and a query api-key.
-* Read-write access provides the ability to add, delete, or modify server objects, including api-keys, indexes, indexers, data sources, and schedules.Read-write access is granted by giving the URL, an admin API key.
+* [Adjust capacity](search-capacity-planning.md) by adding or removing replicas and partitions
+* [Manage API keys](search-security-api-keys.md) used for admin and query operations
+* [Control access to admin operations](search-security-rbac.md) through role-based security
+* [Configure IP firewall rules](service-configure-firewall.md) to restrict access by IP address
+* [Configure a private endpoint](service-create-private-endpoint.md) using Azure Private Link and a private virtual network
+* [Monitor service health](search-monitor-usage.md): storage, query volumes, and latency
-Rights to the service provisioning apparatus is granted through role assignments. [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) for provisioning of Azure resources.
+You can also enumerate all of the objects created on the service: indexes, indexers, data sources, skillsets, and so forth. The portal's overview page shows you all of the content that exists on your service. The vast majority of operations on a search service are content-related.
-In the context of Azure Cognitive Search, [Azure role assignments](search-security-rbac.md) will determine who can perform tasks, regardless of whether they are using the [portal](search-manage.md), [PowerShell](search-manage-powershell.md), or the [Management REST APIs](/rest/api/searchmanagement/search-howto-management-rest-api):
+The same management tasks performed in the portal can also be handled programmatically through the [Management REST APIs](/rest/api/searchmanagement/), [Az.Search PowerShell module](search-manage-powershell.md), [az search Azure CLI module](search-manage-azure-cli.md), and the Azure SDKs for .NET, Python, Java, and JavaScript. Administrative tasks are fully represented across portal and all programmatic interfaces. There is no specific administrative task that is available in only one modality.
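For example, a few of these management tasks can be scripted with the `az search` commands in the Azure CLI; the service and resource group names below are placeholders.

```azurecli
# Retrieve the admin keys, list query keys, and check replica/partition counts
# for an existing service (replace the placeholder names).
az search admin-key show --service-name my-search-service --resource-group my-rg
az search query-key list --service-name my-search-service --resource-group my-rg
az search service show --name my-search-service --resource-group my-rg
```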
-* Create or delete a service
-* Scale the service
-* Delete or regenerate API keys
-* Enable diagnostic logging (create services)
-* Enable traffic analytics (create services)
+Cognitive Search leverages other Azure services for deeper monitoring and management. By itself, the only data stored within the search service is object content (indexes, indexer and data source definitions, and other objects). Metrics reported out to portal pages are pulled from internal logs on a rolling 30-day cycle. For user-controlled log retention and additional events, you will need [Azure Monitor](../azure-monitor/index.yml). For more information about setting up diagnostic logging for a search service, see [Collect and analyze log data](search-monitor-logs.md).
-> [!TIP]
-> Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md).
-
-## Logging and system information
-
-At the Basic tier and above, Microsoft monitors all Azure Cognitive Search services for 99.9% availability per service level agreements (SLA). If the service is slow or request throughput falls below SLA thresholds, support teams review the log files available to them and address the issue.
-
-Azure Cognitive Search leverages [Azure Monitor](../azure-monitor/index.yml) to collect and store indexing and query activity. A search service by itself stores just its content (indexes, indexer definitions, data source definitions, skillset definitions, synonym maps). Caching and logged information is stored off-service, often in an Azure Storage account. For more information about logging indexing and query workloads, see [Collect and analyze log data](search-monitor-logs.md).
-
-In terms of general information about your service, using just the facilities built into Azure Cognitive Search itself, you can obtain information in the following ways:
-
-* Using the service **Overview** page, through notifications, properties, and status messages.
-* Using [PowerShell](search-manage-powershell.md) or the [Management REST API](/rest/api/searchmanagement/) to [get service properties](/rest/api/searchmanagement/services). There is no new information or operations provided at the programmatic layer. The interfaces exist so that you can write scripts.
-
-## Monitor resource usage
-
-In the dashboard, resource monitoring is limited to the information shown in the service dashboard and a few metrics that you can obtain by querying the service. On the service dashboard, in the Usage section, you can quickly determine whether partition resource levels are adequate for your application. You can provision external resources, such as Azure monitoring, if you want to capture and persist logged events. For more information, see [Monitoring Azure Cognitive Search](search-monitor-usage.md).
-
-Using the search service REST API, you can get a count on documents and indexes programmatically:
-
-* [Get Index Statistics](/rest/api/searchservice/Get-Index-Statistics)
-* [Count Documents](/rest/api/searchservice/count-documents)
-
-## Disaster recovery and service outages
-
-Although we can salvage your data, Azure Cognitive Search does not provide instant failover of the service if there is an outage at the cluster or data center level. If a cluster fails in the data center, the operations team will detect and work to restore service. You will experience downtime during service restoration, but you can request service credits to compensate for service unavailability per the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
-
-If continuous service is required in the event of catastrophic failures outside of Microsoft's control, you could [provision an additional service](search-create-service-portal.md) in a different region and implement a geo-replication strategy to ensure indexes are fully redundant across all services.
-
-Customers who use [indexers](search-indexer-overview.md) to populate and refresh indexes can handle disaster recovery through geo-specific indexers leveraging the same data source. Two services in different regions, each running an indexer, could index the same data source to achieve geo-redundancy. If you are indexing from data sources that are also geo-redundant, be aware that Azure Cognitive Search indexers can only perform incremental indexing (merging updates from new, modified, or deleted documents) from primary replicas. In a failover event, be sure to re-point the indexer to the new primary replica.
-
-If you do not use indexers, you would use your application code to push objects and data to different search services in parallel. For more information, see [Performance and optimization in Azure Cognitive Search](search-performance-optimization.md).
-
-## Backup and restore
-
-Because Azure Cognitive Search is not a primary data storage solution, we do not provide a formal mechanism for self-service backup and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to backup your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers.
-
-Otherwise, your application code used for creating and populating an index is the de facto restore option if you delete an index by mistake. To rebuild an index, you would delete it (assuming it exists), recreate the index in the service, and reload by retrieving data from your primary data store.
+## Administrator permissions
-## Scale up or down
+When you open the search service overview page, the permissions assigned to your account determine what pages are available to you. The overview page at the beginning of the article shows the portal pages an administrator or contributor will see.
-Every search service starts with a minimum of one replica and one partition. If you signed up for a [tier that supports more capacity](search-limits-quotas-capacity.md), click **Scale** on the left navigation pane to adjust resource usage.
+In Azure, administrative rights are granted through role assignments. In the context of Azure Cognitive Search, [role assignments](search-security-rbac.md) will determine who can allocate replicas and partitions or manage API keys, regardless of whether they are using the portal, [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md), or the [Management REST APIs](/rest/api/searchmanagement/search-howto-management-rest-api):
-When you add capacity through either resource, the service uses them automatically. No further action is required on your part, but there is a slight delay before the impact of the new resource is realized. It can take 15 minutes or more to provision additional resources.
-
-### Add replicas
-
-Increasing queries per second (QPS) or achieving high availability is done by adding replicas. Each replica has one copy of an index, so adding one more replica translates to one more index available for handling service query requests. A minimum of 3 replicas are required for high availability (see [Adjust capacity](search-capacity-planning.md) for details).
-
-A search service having more replicas can load balance query requests over a larger number of indexes. Given a level of query volume, query throughput is going to be faster when there are more copies of the index available to service the request. If you are experiencing query latency, you can expect a positive impact on performance once the additional replicas are online.
-
-Although query throughput goes up as you add replicas, it does not precisely double or triple as you add replicas to your service. All search applications are subject to external factors that can impinge on query performance. Complex queries and network latency are two factors that contribute to variations in query response times.
-
-### Add partitions
-
-It's more common to add replicas, but when storage is constrained, you can add partitions to get more capacity. The tier at which you provisioned the service determines whether partitions can be added. The Basic tier is locked at one partition. Standard tiers and above support additional partitions.
-
-Partitions are added in divisors of 12 (specifically, 1, 2, 3, 4, 6, or 12). This is an artifact of sharding. An index is created in 12 shards, which can all be stored on 1 partition or equally divided into 2, 3, 4, 6, or 12 partitions (one shard per partition).
-
-### Remove replicas
-
-After periods of high query volumes, you can use the slider to reduce replicas after search query loads have normalized (for example, after holiday sales are over). There are no further steps required on your part. Lowering the replica count relinquishes virtual machines in the data center. Your query and data ingestion operations will now run on fewer VMs than before. The minimum requirement is one replica.
-
-### Remove partitions
-
-In contrast with removing replicas, which requires no extra effort on your part, you might have some work to do if you are using more storage than can be reduced. For example, if your solution is using three partitions, downsizing to one or two partitions will generate an error if the new storage space is less than required for hosting your index. As you might expect, your choices are to delete indexes or documents within an associated index to free up space, or keep the current configuration.
-
-There is no detection method that tells you which index shards are stored on specific partitions. Each partition provides approximately 25 GB in storage, so you will need to reduce storage to a size that can be accommodated by the number of partitions you have. If you want to revert to one partition, all 12 shards will need to fit.
-
-To help with future planning, you might want to check storage (using [Get Index Statistics](/rest/api/searchservice/Get-Index-Statistics)) to see how much you actually used.
+> [!TIP]
+> Provisioning or decommissioning the service itself can be done by an Azure subscription administrator or co-administrator. Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md).
## Next steps
-* Automate with [PowerShell](search-manage-powershell.md) or the [Azure CLI](search-manage-azure-cli.md)
-
-* Review [performance and optimization](search-performance-optimization.md) techniques
-
+* Review [monitoring capabilities](search-monitor-usage.md) available in the portal
+* Automate with [PowerShell](search-manage-powershell.md) or [Azure CLI](search-manage-azure-cli.md)
* Review [security features](search-security-overview.md) to protect content and operations
* Enable [diagnostic logging](search-monitor-logs.md) to monitor query and indexing workloads
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-modeling-multitenant-saas-applications.md
Title: Multitenancy and content isolation
description: Learn about common design patterns for multitenant SaaS applications while using Azure Cognitive Search. - Previously updated : 09/25/2020 Last updated : 04/06/2021 # Design patterns for multitenant SaaS applications and Azure Cognitive Search
Last updated 09/25/2020
A multitenant application is one that provides the same services and capabilities to any number of tenants who cannot see or share the data of any other tenant. This document discusses tenant isolation strategies for multitenant applications built with Azure Cognitive Search. ## Azure Cognitive Search concepts+ As a search-as-a-service solution, [Azure Cognitive Search](search-what-is-azure-search.md) allows developers to add rich search experiences to applications without managing any infrastructure or becoming an expert in information retrieval. Data is uploaded to the service and then stored in the cloud. Using simple requests to the Azure Cognitive Search API, the data can then be modified and searched. ### Search services, indexes, fields, and documents
When using Azure Cognitive Search, one subscribes to a *search service*. As data
Each index within a search service has its own schema, which is defined by a number of customizable *fields*. Data is added to an Azure Cognitive Search index in the form of individual *documents*. Each document must be uploaded to a particular index and must fit that index's schema. When searching data using Azure Cognitive Search, the full-text search queries are issued against a particular index. To compare these concepts to those of a database, fields can be likened to columns in a table and documents can be likened to rows. ### Scalability+ Any Azure Cognitive Search service in the Standard [pricing tier](https://azure.microsoft.com/pricing/details/search/) can scale in two dimensions: storage and availability.
-* *Partitions* can be added to increase the storage of a search service.
-* *Replicas* can be added to a service to increase the throughput of requests that a search service can handle.
++ *Partitions* can be added to increase the storage of a search service.
++ *Replicas* can be added to a service to increase the throughput of requests that a search service can handle.

Adding and removing partitions and replicas at will allows the capacity of the search service to grow with the amount of data and traffic the application demands. In order for a search service to achieve a read [SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/), it requires two replicas. In order for a service to achieve a read-write [SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/), it requires three replicas.

### Service and index limits in Azure Cognitive Search
+
There are a few different [pricing tiers](https://azure.microsoft.com/pricing/details/search/) in Azure Cognitive Search, and each tier has different [limits and quotas](search-limits-quotas-capacity.md). Some of these limits are at the service level, some are at the index level, and some are at the partition level.

| | Basic | Standard1 | Standard2 | Standard3 | Standard3 HD |
|---|---|---|---|---|---|
There are a few different [pricing tiers](https://azure.microsoft.com/pricing/de
| **Maximum Storage per Partition** | 2 GB | 25 GB | 100 GB | 200 GB | 200 GB |
| **Maximum Indexes per Service** | 5 | 50 | 200 | 200 | 3000 (max 1000 indexes/partition) |
-#### S3 High Density'
+#### S3 High Density
+ In Azure Cognitive Search's S3 pricing tier, there is an option for the High Density (HD) mode designed specifically for multitenant scenarios. In many cases, it is necessary to support a large number of smaller tenants under a single service to achieve the benefits of simplicity and cost efficiency. S3 HD allows for the many small indexes to be packed under the management of a single search service by trading the ability to scale out indexes using partitions for the ability to host more indexes in a single service.
S3 HD allows for the many small indexes to be packed under the management of a s
An S3 service is designed to host a fixed number of indexes (maximum 200) and allow each index to scale in size horizontally as new partitions are added to the service. Adding partitions to S3 HD services increases the maximum number of indexes that the service can host. The ideal maximum size for an individual S3HD index is around 50 - 80 GB, although there is no hard size limit on each index imposed by the system.

## Considerations for multitenant applications
+
Multitenant applications must effectively distribute resources among the tenants while preserving some level of privacy between the various tenants. There are a few considerations when designing the architecture for such an application:
-* *Tenant isolation:* Application developers need to take appropriate measures to ensure that no tenants have unauthorized or unwanted access to the data of other tenants. Beyond the perspective of data privacy, tenant isolation strategies require effective management of shared resources and protection from noisy neighbors.
-* *Cloud resource cost:* As with any other application, software solutions must remain cost competitive as a component of a multitenant application.
-* *Ease of Operations:* When developing a multitenant architecture, the impact on the application's operations and complexity is an important consideration. Azure Cognitive Search has a [99.9% SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
-* *Global footprint:* Multitenant applications may need to effectively serve tenants which are distributed across the globe.
-* *Scalability:* Application developers need to consider how they reconcile between maintaining a sufficiently low level of application complexity and designing the application to scale with number of tenants and the size of tenants' data and workload.
++ *Tenant isolation:* Application developers need to take appropriate measures to ensure that no tenants have unauthorized or unwanted access to the data of other tenants. Beyond the perspective of data privacy, tenant isolation strategies require effective management of shared resources and protection from noisy neighbors.
+
++ *Cloud resource cost:* As with any other application, software solutions must remain cost competitive as a component of a multitenant application.
+
++ *Ease of Operations:* When developing a multitenant architecture, the impact on the application's operations and complexity is an important consideration. Azure Cognitive Search has a [99.9% SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
+
++ *Global footprint:* Multitenant applications may need to effectively serve tenants which are distributed across the globe.
+
++ *Scalability:* Application developers need to consider how to balance maintaining a sufficiently low level of application complexity with designing the application to scale with the number of tenants and the size of tenants' data and workload.

Azure Cognitive Search offers a few boundaries that can be used to isolate tenants' data and workload.
-1. *Index per tenant:* Each tenant has its own index within a search service that is shared with other tenants.
-2. *Service per tenant:* Each tenant has its own dedicated Azure Cognitive Search service, offering highest level of data and workload separation.
-3. *Mix of both:* Larger, more-active tenants are assigned dedicated services while smaller tenants are assigned individual indexes within shared services.
++ *One index per tenant:* Each tenant has its own index within a search service that is shared with other tenants.
+
++ *One service per tenant:* Each tenant has its own dedicated Azure Cognitive Search service, offering the highest level of data and workload separation.
-## 1. Index per tenant
++ *Mix of both:* Larger, more-active tenants are assigned dedicated services while smaller tenants are assigned individual indexes within shared services.
+
+## Model 1: One index per tenant
:::image type="content" source="media/search-modeling-multitenant-saas-applications/azure-search-index-per-tenant.png" alt-text="A portrayal of the index-per-tenant model" border="false":::
Azure Cognitive Search allows for the scale of both the individual indexes and t
If the total number of indexes grows too large for a single service, another service has to be provisioned to accommodate the new tenants. If indexes have to be moved between search services as new services are added, the data from the index has to be manually copied from one index to the other as Azure Cognitive Search does not allow for an index to be moved.
-## 2. Service per tenant
+## Model 2: One service per tenant
:::image type="content" source="media/search-modeling-multitenant-saas-applications/azure-search-service-per-tenant.png" alt-text="A portrayal of the service-per-tenant model" border="false":::
The service-per-tenant model is an efficient choice for applications with a glob
The challenges in scaling this pattern arise when individual tenants outgrow their service. Azure Cognitive Search does not currently support upgrading the pricing tier of a search service, so all data would have to be manually copied to a new service.
-## 3. Mixing both models
+## Model 3: Hybrid
+ Another pattern for modeling multitenancy is mixing both index-per-tenant and service-per-tenant strategies. By mixing the two patterns, an application's largest tenants can occupy dedicated services while the long tail of less active, smaller tenants can occupy indexes in a shared service. This model ensures that the largest tenants have consistently high performance from the service while helping to protect the smaller tenants from any noisy neighbors.
By mixing the two patterns, an application's largest tenants can occupy dedicate
However, implementing this strategy relies on foresight in predicting which tenants will require a dedicated service versus an index in a shared service. Application complexity increases with the need to manage both of these multitenancy models.

## Achieving even finer granularity
+
The above design patterns to model multitenant scenarios in Azure Cognitive Search assume a uniform scope where each tenant is a whole instance of an application. However, applications can sometimes handle many smaller scopes. If service-per-tenant and index-per-tenant models are not sufficiently small scopes, it is possible to model an index to achieve an even finer degree of granularity.
This method can be used to achieve functionality of separate user accounts, sepa
> [!NOTE] > Using the approach described above to configure a single index to serve multiple tenants affects the relevance of search results. Search relevance scores are computed at an index-level scope, not a tenant-level scope, so all tenants' data is incorporated in the relevance scores' underlying statistics such as term frequency.
->
->
+>
## Next steps
-Azure Cognitive Search is a compelling choice for many applications. When evaluating the various design patterns for multitenant applications, consider the [various pricing tiers](https://azure.microsoft.com/pricing/details/search/) and the respective [service limits](search-limits-quotas-capacity.md) to best tailor Azure Cognitive Search to fit application workloads and architectures of all sizes.
-Any questions about Azure Cognitive Search and multitenant scenarios can be directed to azuresearch_contact@microsoft.com.
+Azure Cognitive Search is a compelling choice for many applications. When evaluating the various design patterns for multitenant applications, consider the [various pricing tiers](https://azure.microsoft.com/pricing/details/search/) and the respective [service limits](search-limits-quotas-capacity.md) to best tailor Azure Cognitive Search to fit application workloads and architectures of all sizes.
search Search Monitor Logs Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-logs-powerbi.md
Title: Visualize Azure Cognitive Search Logs and Metrics with Power BI
-description: Visualize Azure Cognitive Search Logs and Metrics with Power BI
+ Title: Visualize logs and metrics with Power BI
+description: Visualize Azure Cognitive Search logs and metrics with Power BI.
- Previously updated : 09/25/2020 Last updated : 04/07/2021 # Visualize Azure Cognitive Search Logs and Metrics with Power BI
-[Azure Cognitive Search](./search-what-is-azure-search.md) allows you to store operation logs and service metrics about your search service in an Azure Storage account. This page provides instructions for how you can visualize that information through a Power BI Template App. The app provides detailed insights about your search service, including information about Search, Indexing, Operations, and Service metrics.
+
+[Azure Cognitive Search](./search-what-is-azure-search.md) can send operation logs and service metrics to an Azure Storage account, which you can then visualize in Power BI. This article explains the steps and how to use a Power BI Template App to visualize the data. The template can help you gain detailed insights about your search service, including information about queries, indexing, operations, and service metrics.
You can find the Power BI Template App **Azure Cognitive Search: Analyze Logs and Metrics** in the [Power BI Apps marketplace](https://appsource.microsoft.com/marketplace/apps).
-## How to get started with the app
+## Set up the app
1. Enable metric and resource logging for your search service:
You can find the Power BI Template App **Azure Cognitive Search: Analyze Logs an
:::image type="content" source="media/search-monitor-logs-powerbi/powerbi-search.png" alt-text="Screenshot of the Azure Cognitive Search Power BI report.":::
-## How to change the app parameters
+## Modify app parameters
+ If you would like to visualize data from a different storage account or change the number of days of data to query, follow the steps below to change the **Days** and **StorageAccount** parameters.

1. Navigate to your Power BI apps, find your Azure Cognitive Search app, and select the **Edit app** button to view the workspace.
If you would like to visualize data from a different storage account or change t
1. Open the report to view the updated data. You might also need to refresh the report to view the latest data.
-## Troubleshooting
+## Troubleshooting report issues
+ If you find that you cannot see your data, follow these troubleshooting steps:

1. Open the report and refresh the page to make sure you're viewing the latest data. There's an option in the report to refresh the data. Select it to get the latest data.
If you find that you cannot see your data follow these troubleshooting steps:
1. Check to see if the dataset is still refreshing. The refresh status indicator is shown in step 8 above. If it is still refreshing, wait until the refresh is complete to open and refresh the report.

## Next steps
-[Learn more about Azure Cognitive Search](./index.yml)
-
-[What is Power BI?](/power-bi/fundamentals/power-bi-overview)
-[Basic concepts for designers in the Power BI service](/power-bi/service-basic-concepts)
++ [Monitor search operations and activity](search-monitor-usage.md)
++ [What is Power BI?](/power-bi/fundamentals/power-bi-overview)
++ [Basic concepts for designers in the Power BI service](/power-bi/service-basic-concepts)
search Search Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-monitor-logs.md
Last updated 06/30/2020
# Collect and analyze log data for Azure Cognitive Search
-Diagnostic or operational logs provide insight into the detailed operations of Azure Cognitive Search and are useful for monitoring service and workload processes. Internally, Microsoft preserves system information on the backend for a short period of time (about 30 days), sufficient for investigation and analysis if you file a support ticket. However, if you want ownership over operational data, you should configure a diagnostic setting to specify where logging information is collected.
+Diagnostic or operational logs provide insight into the detailed operations of Azure Cognitive Search and are useful for monitoring service health and processes. Internally, Microsoft preserves system information on the backend for a short period of time (about 30 days), sufficient for investigation and analysis if you file a support ticket. However, if you want ownership over operational data, you should configure a diagnostic setting to specify where logging information is collected.
Diagnostic logging is enabled through integration with [Azure Monitor](../azure-monitor/index.yml).
search Search Performance Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-performance-analysis.md
+
+ Title: Analyze performance
+
+description: Learn about the tools, behaviors, and approaches for analyzing query and indexing performance in Azure Cognitive Search.
+++++ Last updated : 04/06/2021++
+# Analyze performance in Azure Cognitive Search
+
+This article describes the tools, behaviors, and approaches for analyzing query and indexing performance in Cognitive Search.
+
+## Develop baseline numbers
+
+In any large implementation, it is critical to do a performance benchmarking test of your Cognitive Search service before you roll it into production. Test both the search query load that you expect and the expected data ingestion workloads (if possible, run the two simultaneously). Having benchmark numbers helps to validate the proper [search tier](search-sku-tier.md), [service configuration](search-capacity-planning.md), and expected [query latency](search-performance-analysis.md#average-query-latency).
+
+To develop benchmarks, we recommend the [azure-search-performance-testing (GitHub)](https://github.com/Azure-Samples/azure-search-performance-testing) tool.
+
+To isolate the effects of a distributed service architecture, try testing on service configurations of one replica and one partition.
+
+> [!NOTE]
+> For the Storage Optimized tiers (L1 and L2), you should expect a lower query throughput and higher latency than the Standard tiers.
+>
+
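+You can also derive baseline latency numbers from the service's own diagnostic logs while a test runs (enabling diagnostic logging is described in the next section). As a sketch, assuming logs flow to a Log Analytics workspace and use the AzureDiagnostics columns shown throughout this article, the following Kusto query computes latency percentiles for full text queries over the last day:
+
+```kusto
+// Baseline latency percentiles for full text search queries over the last day
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| where OperationName == "Query.Search"
+| summarize percentiles(DurationMs, 50, 90, 99)
+```
+
+Comparing these percentiles across test runs with different replica and partition counts helps you choose the smallest configuration that meets your latency target.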
+## Use diagnostic logging
+
+The most important tool that an administrator has in diagnosing potential performance issues is [diagnostics logging](search-monitor-logs.md) that collects operational data and metrics about your search service. Diagnostic logging is enabled through [Azure Monitor](../azure-monitor/overview.md). There are costs associated with using Azure Monitor and storing data, but if you enable it for your service, it can be instrumental in investigating performance issues.
+
+There are multiple opportunities for latency to occur, whether during a network transfer, or during processing of content in the app services layer, or on a search service. A key benefit of diagnostic logging is that activities are logged from the search service perspective, which means that the log can help you determine whether performance issues are due to problems with the query or indexing, or some other point of failure.
++
+Diagnostics logging gives you options for storing logged information. We recommend using [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) so that you can execute advanced Kusto queries against the data to answer many questions about usage and performance.
+
+On your search service portal pages, you can enable logging through **Diagnostic settings**, and then issue Kusto queries against Log Analytics by choosing **Logs**. For more information about setting up, see [Collect and analyze log data](search-monitor-logs.md).
++
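+As a starting point, a query along the lines of the following sketch (assuming the AzureDiagnostics table and columns used in the examples later in this article) summarizes the operations logged over the last day, which confirms that data is flowing and shows which operations dominate your workload:
+
+```kusto
+// Count and average duration of logged operations over the last day
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| summarize OperationCount=count(), AvgDurationMs=avg(DurationMs) by OperationName
+| order by OperationCount desc
+```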
+## Throttling behaviors
+
+Throttling occurs when the search service has reached the limit of what it can handle from a resource perspective. Throttling can occur during queries or indexing. From the client side, an API call results in a 503 HTTP response when it has been throttled. During indexing, there is also the possibility of receiving a 207 HTTP response, which indicates that one or more items failed to index. This error is an indicator that the search service is getting close to capacity.
+
+As a rule of thumb, it is best to quantify the amount of throttling that you see. For example, if one search query out of 500,000 is throttled, it might not be that large of an issue. However, if a large percentage of queries is throttled over a period of time, this would be a greater concern. Looking at throttling over a period of time also helps you identify the time frames where throttling is more likely to occur and decide how best to accommodate it.
+
+A simple fix to most throttling issues is to throw more resources at the search service (typically replicas for query-based throttling, or partitions for indexing-based throttling). However, increasing replicas or partitions adds cost, which is why it is important to know why throttling is occurring at all. The next several sections explain how to investigate the conditions that cause throttling.
+
+Below is an example of a Kusto query that identifies the breakdown of HTTP responses from a search service that has been under load. In this example, which queries over a 7-day period, the rendered bar chart shows that a relatively large percentage of the search queries were throttled in comparison to the number of successful (200) responses.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(7d)
+| summarize count() by resultSignature_d
+| render barchart
+```
++
+Examining throttling over a specific time period can help you identify the times where throttling might occur more frequently. In the following example, a time series chart is used to show the number of throttled queries that occurred over a specified time frame. In this case, the throttled queries correlated with the times in which the performance benchmarking was performed.
+
+```kusto
+let ['_startTime']=datetime('2021-02-25T20:45:07Z');
+let ['_endTime']=datetime('2021-03-03T20:45:07Z');
+let intervalsize = 1m;
+AzureDiagnostics
+| where TimeGenerated between(['_startTime'] .. ['_endTime']) // Time range filtering
+| where resultSignature_d != 403 and resultSignature_d != 404 and OperationName in ("Query.Search", "Query.Suggest", "Query.Lookup", "Query.Autocomplete")
+| summarize
+ ThrottledQueriesPerMinute=bin(countif(OperationName in ("Query.Search", "Query.Suggest", "Query.Lookup", "Query.Autocomplete") and resultSignature_d == 503)/(intervalsize/1m), 0.01)
+ by bin(TimeGenerated, intervalsize)
+| render timechart
+```
++
+## Measure individual queries
+
+In some cases, it can be useful to test individual queries to see how they are performing. To do this, it is important to be able to see how long the search service takes to complete the work, as well as how long the round trip from the client to the service and back takes. The diagnostic logs could be used to look up individual operations, but it might be easier to do this from a client tool, such as Postman.
+
+In the example below, a REST-based search query was executed. Cognitive Search includes in every response the number of milliseconds it took to complete the query, visible on the Headers tab as "elapsed-time". Next to the status at the top of the response, you'll find the round-trip duration, in this case 418 milliseconds. In the results section, the "Headers" tab was chosen. Using the two values highlighted with a red box in the image below, we see that the search service took 21 ms to complete the search query and that the entire client round-trip request took 125 ms. By subtracting these two numbers, we can determine that it took an additional 104 ms to transmit the search query to the search service and to transfer the search results back to the client.
+
+This comparison can be extremely helpful for determining whether network latency or other factors are impacting query performance.
++
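+If you prefer to stay in Log Analytics, a sketch like the following (using the same AzureDiagnostics columns as the other examples in this article) returns the service-side duration and document count for recent individual queries, which you can compare against the round-trip times observed in the client:
+
+```kusto
+// Most recent individual search queries with their service-side duration
+AzureDiagnostics
+| where TimeGenerated > ago(1h)
+| where OperationName == "Query.Search"
+| project TimeGenerated, IndexName_s, DurationMs, Documents_d
+| order by TimeGenerated desc
+| take 10
+```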
+## Query rates
+
+One potential reason for your search service to throttle requests is the sheer number of queries being executed, where volume is captured as queries per second (QPS) or queries per minute (QPM). As your search service receives more QPS, it will typically take longer and longer to respond to those queries until it can no longer keep up, at which point it will send back a throttling 503 HTTP response.
+
+The following Kusto query shows query volume as measured in QPM, along with average duration of a query in milliseconds (AvgDurationMS) and the average number of documents (AvgDocCountReturned) returned in each one.
+
+```kusto
+AzureDiagnostics
+| where OperationName == "Query.Search" and TimeGenerated > ago(1d)
+| extend MinuteOfDay = substring(TimeGenerated, 0, 16)
+| project MinuteOfDay, DurationMs, Documents_d, IndexName_s
+| summarize QPM=count(), AvgDurationMs=avg(DurationMs), AvgDocCountReturned=avg(Documents_d) by MinuteOfDay
+| order by MinuteOfDay desc
+| render timechart
+```
++
+> [!TIP]
+> To reveal the data behind this chart, remove the line `| render timechart` and then rerun the query.
+
+## Impact of indexing on queries
+
+An important factor to consider when looking at performance is that indexing uses the same resources as search queries. If you are indexing a large amount of content, you can expect to see latency grow as the service tries to accommodate both workloads.
+
+If queries are slowing down, look at the timing of indexing activity to see if it coincides with query degradation. For example, perhaps an indexer is running a daily or hourly job that correlates with the decreased performance of the search queries.
+
+This section provides a set of queries that can help you visualize the search and indexing rates. For these examples, the time range is set in the query. Be sure to indicate **Set in query** when running the queries in Azure portal.
++
+<a name="average-query-latency"></a>
+
+### Average Query Latency
+
+In the below query, an interval size of 1 minute is used to show the average latency of the search queries. From the chart, we can see that the average latency was low until 5:45 pm; the period of elevated latency then lasted until 5:53 pm.
+
+```kusto
+let intervalsize = 1m;
+let _startTime = datetime('2021-02-23 17:40');
+let _endTime = datetime('2021-02-23 18:00');
+AzureDiagnostics
+| where TimeGenerated between(['_startTime']..['_endTime']) // Time range filtering
+| summarize AverageQueryLatency = avgif(DurationMs, OperationName in ("Query.Search", "Query.Suggest", "Query.Lookup", "Query.Autocomplete"))
+ by bin(TimeGenerated, intervalsize)
+| render timechart
+```
++
+### Average Queries Per Minute (QPM)
+
+The following query allows us to look at the average number of queries per minute to ensure that there was not some sort of spike in search requests that might have impacted the latency. From the chart we can see there is some variance, but nothing to indicate a spike in request count.
+
+```kusto
+let intervalsize = 1m;
+let _startTime = datetime('2021-02-23 17:40');
+let _endTime = datetime('2021-02-23 18:00');
+AzureDiagnostics
+| where TimeGenerated between(['_startTime'] .. ['_endTime']) // Time range filtering
+| summarize QueriesPerMinute=bin(countif(OperationName in ("Query.Search", "Query.Suggest", "Query.Lookup", "Query.Autocomplete"))/(intervalsize/1m), 0.01)
+ by bin(TimeGenerated, intervalsize)
+| render timechart
+```
++
+### Indexing Operations Per Minute (OPM)
+
+Here we will look at the number of indexing operations per minute. From the chart, we can see that a large amount of data was indexed starting at 5:42 pm and ending at 5:50 pm. This indexing began 3 minutes before the search queries started becoming latent and ended 3 minutes before the search queries were no longer latent.
+
+From this, we can see that it took about 3 minutes of indexing before the search service was busy enough to impact search query latency. We can also see that after the indexing completed, it took another 3 minutes for the search service to complete all the work from the newly indexed content before search query latency returned to normal.
+
+```kusto
+let intervalsize = 1m;
+let _startTime = datetime('2021-02-23 17:40');
+let _endTime = datetime('2021-02-23 18:00');
+AzureDiagnostics
+| where TimeGenerated between(['_startTime'] .. ['_endTime']) // Time range filtering
+| summarize IndexingOperationsPerMinute=bin(countif(OperationName == "Indexing.Index")/ (intervalsize/1m), 0.01)
+ by bin(TimeGenerated, intervalsize)
+| render timechart
+```
++
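+To see the correlation between the two workloads directly, you can plot average query latency and indexing operations per minute on a single chart. The following query is a sketch that combines the two previous queries, under the same assumptions about the time window and the AzureDiagnostics schema:
+
+```kusto
+// Average query latency and indexing operations per minute on one timechart
+let intervalsize = 1m;
+let _startTime = datetime('2021-02-23 17:40');
+let _endTime = datetime('2021-02-23 18:00');
+AzureDiagnostics
+| where TimeGenerated between(['_startTime'] .. ['_endTime']) // Time range filtering
+| summarize
+    AverageQueryLatency = avgif(DurationMs, OperationName in ("Query.Search", "Query.Suggest", "Query.Lookup", "Query.Autocomplete")),
+    IndexingOperationsPerMinute = bin(countif(OperationName == "Indexing.Index") / (intervalsize/1m), 0.01)
+    by bin(TimeGenerated, intervalsize)
+| render timechart
+```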
+## Background service processing
+
+It is not unusual to see periodic spikes in query or indexing latency. Spikes might occur in response to indexing or high query rates, but could also occur during merge operations. Search indexes are stored in chunks - or shards. Periodically, the system merges smaller shards into large shards, which can help optimize service performance. This merge process also cleans up documents that have previously been marked for deletion from the index, resulting in the recovery of storage space.
+
+Merging shards is fast, but also resource intensive, and thus has the potential to degrade service performance. For this reason, if you see short bursts of query latency that coincide with recent changes to indexed content, you can likely attribute that latency to shard merge operations.
+
+## Next steps
+
+Review these additional articles related to analyzing service performance.
++ [Performance tips](search-performance-tips.md)
++ [Choose a service tier](search-sku-tier.md)
++ [Manage capacity](search-capacity-planning.md)
search Search Performance Optimization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-performance-optimization.md
Title: Scale for performance
+ Title: Availability and continuity
-description: Learn techniques and best practices for tuning Azure Cognitive Search performance and configuring optimum scale.
+description: Learn how to make a search service highly available and resilient against periodic disruptions or even catastrophic failures.
- Previously updated : 02/01/2021 Last updated : 04/06/2021
-# Scale for performance on Azure Cognitive Search
+# Availability and business continuity in Azure Cognitive Search
-This article describes best practices for advanced scenarios with sophisticated requirements for scalability and availability.
+In Cognitive Search, availability is achieved through multiple replicas, whereas business continuity (and disaster recovery) is achieved through multiple search services. This article provides guidance that you can use as a starting point for developing a strategy that meets your business requirements for both availability and continuous operations.
-## Start with baseline numbers
+<a name="scale-for-availability"></a>
-Before undertaking a larger deployment effort, make sure you know what a typical query load looks like. The following guidelines can help you arrive at baseline query numbers.
+## High availability
-1. Pick a target latency (or maximum amount of time) that a typical search request should take to complete.
+In Cognitive Search, replicas are copies of your index. Having multiple replicas allows Azure Cognitive Search to do machine reboots and maintenance against one replica, while query execution continues on other replicas. For more information about adding replicas, see [Add or reduce replicas and partitions](search-capacity-planning.md#adjust-capacity).
-1. Create and test a real workload against your search service with a realistic data set to measure these latency rates.
-
-1. Start with a low number of queries per second (QPS) and then gradually increase the number executed in the test until the query latency drops below the predefined target. This is an important benchmark to help you plan for scale as your application grows in usage.
-
-1. Wherever possible, reuse HTTP connections. If you are using the Azure Cognitive Search .NET SDK, this means you should reuse an instance or [SearchClient](/dotnet/api/azure.search.documents.searchclient) instance, and if you are using the REST API, you should reuse a single HttpClient.
-
-1. Vary the substance of query requests so that search occurs over different parts of your index. Variation is important because if you continually execute the same search requests, caching of data will start to make performance look better than it might with a more disparate query set.
-
-1. Vary the structure of query requests so that you get different types of queries. Not every search query performs at the same level. For example, a document lookup or search suggestion is typically faster than a query with a significant number of facets and filters. Test composition should include various queries, in roughly the same ratios as you would expect in production.
-
-While creating these test workloads, there are some characteristics of Azure Cognitive Search to keep in mind:
-
-+ It is possible overload your service by pushing too many search queries at one time. When this happens, you will see HTTP 503 response codes. To avoid a 503 during testing, start with various ranges of search requests to see the differences in latency rates as you add more search requests.
-
-+ Azure Cognitive Search does not run indexing tasks in the background. If your service handles query and indexing workloads concurrently, take this into account by either introducing indexing jobs into your query tests, or by exploring options for running indexing jobs during off peak hours.
-
-> [!Tip]
-> You can simulate a realistic query load using load testing tools. Try [load testing with Azure DevOps](/azure/devops/test/load-test/get-started-simple-cloud-load-test) or use one of these [alternatives](/azure/devops/test/load-test/overview#alternatives).
-
-## Scale for high query volume
-
-A service is overburdened when queries take too long or when the service starts dropping requests. If this happens, you can address the problem in one of two ways:
-
-+ **Add replicas**
-
- Each replica is a copy of your data, allowing the service to load balance requests against multiple copies. All load balancing and replication of data is managed by Azure Cognitive Search and you can alter the number of replicas allocated for your service at any time. You can allocate up to 12 replicas in a Standard search service and 3 replicas in a Basic search service. Replicas can be adjusted either from the [Azure portal](search-create-service-portal.md) or [PowerShell](search-manage-powershell.md).
-
-+ **Create a new service at a higher tier**
-
- Azure Cognitive Search comes in a [number of tiers](https://azure.microsoft.com/pricing/details/search/) and each one offers different levels of performance. In some cases, you may have so many queries that the tier you are on cannot provide sufficient turnaround, even when replicas are maxed out. In this case, consider moving to a higher performing tier, such as the Standard S3 tier, designed for scenarios having large numbers of documents and extremely high query workloads.
-
-## Scale for slow individual queries
-
-Another reason for high latency rates is a single query taking too long to complete. In this case, adding replicas will not help. Two possible options that might help include the following:
-
-+ **Increase Partitions**
-
- A partition splits data across extra computing resources. Two partitions split data in half, a third partition splits it into thirds, and so forth. One positive side-effect is that slower queries sometimes perform faster due to parallel computing. We have noted parallelization on low selectivity queries, such as queries that match many documents, or facets providing counts over a large number of documents. Since significant computation is required to score the relevancy of the documents, or to count the numbers of documents, adding extra partitions helps queries complete faster.
-
- There can be a maximum of 12 partitions in Standard search service and 1 partition in the Basic search service. Partitions can be adjusted either from the [Azure portal](search-create-service-portal.md) or [PowerShell](search-manage-powershell.md).
-
-+ **Limit High Cardinality Fields**
-
- A high cardinality field consists of a facetable or filterable field that has a significant number of unique values, and as a result, consumes significant resources when computing results. For example, setting a Product ID or Description field as facetable/filterable would count as high cardinality because most of the values from document to document are unique. Wherever possible, limit the number of high cardinality fields.
-
-+ **Increase Search Tier**
-
- Moving up to a higher Azure Cognitive Search tier can be another way to improve performance of slow queries. Each higher tier provides faster CPUs and more memory, both of which have a positive impact on query performance.
-
-## Scale for availability
-
-Replicas not only help reduce query latency, but can also allow for high availability. With a single replica, you should expect periodic downtime due to server reboots after software updates or for other maintenance events that will occur. As a result, it is important to consider if your application requires high availability of searches (queries) as well as writes (indexing events). Azure Cognitive Search offers SLA options on all the paid search offerings with the following attributes:
+For each individual search service, Microsoft guarantees at least 99.9% availability for configurations that meet these criteria:
+ Two replicas for high availability of read-only workloads (queries)
-+ Three or more replicas for high availability of read-write workloads (queries and indexing)
++ Three or more replicas for high availability of read-write workloads (queries and indexing)
-For more details on this, please visit the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
-
-Since replicas are copies of your data, having multiple replicas allows Azure Cognitive Search to do machine reboots and maintenance against one replica, while query execution continues on other replicas. Conversely, if you take replicas away, you'll incur query performance degradation, assuming those replicas were an under-utilized resource.
+No SLA is provided for the Free tier. For more information, see [SLA for Azure Cognitive Search](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
<a name="availability-zones"></a>
-### Availability Zones
+## Availability Zones
-[Availability Zones](../availability-zones/az-overview.md) divide a region's data centers into distinct physical location groups to provide high-availability, within the same region. For Cognitive Search, individual replicas are the units for zone assignment. A search service runs within one region; its replicas run in different zones.
+[Availability Zones](../availability-zones/az-overview.md) are an Azure platform capability that divides a region's data centers into distinct physical location groups to provide high-availability, within the same region. If you use Availability Zones for Cognitive Search, individual replicas are the units for zone assignment. A search service runs within one region; its replicas run in different zones.
-You can utilize Availability Zones with Azure Cognitive Search by adding two or more replicas to your search service. Each replica will be placed in a different Availability Zone within the region. If you have more replicas than Availability Zones, the replicas will be distributed across Availability Zones as evenly as possible.
+You can utilize Availability Zones with Azure Cognitive Search by adding two or more replicas to your search service. Each replica will be placed in a different Availability Zone within the region. If you have more replicas than Availability Zones, the replicas will be distributed across Availability Zones as evenly as possible. There is no specific action on your part, except to [create a search service](search-create-service-portal.md) in a region that provides Availability Zones, and then to configure the service to [use multiple replicas](search-capacity-planning.md#adjust-capacity).
Azure Cognitive Search currently supports Availability Zones for Standard tier or higher search services that were created in one of the following regions:
Azure Cognitive Search currently supports Availability Zones for Standard tier o
Availability Zones do not impact the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/). You still need 3 or more replicas for query high availability.
-## Scale for geo-distributed workloads and geo-redundancy
+## Multiple services in separate geographic regions
+
+Although most customers use just one service, service redundancy might be necessary if operational requirements include the following:
-For geo-distributed workloads, users who are located far from the host data center will have higher latency rates. One mitigation is to provision multiple search services in regions with closer proximity to these users.
++ [Business continuity and disaster recovery (BCDR)](../best-practices-availability-paired-regions.md) (Cognitive Search does not provide instant failover in the event of an outage).
++ Globally deployed applications. If query and indexing requests come from all over the world, users who are closest to the host data center will have faster performance. Creating additional services in regions with close proximity to these users can equalize performance for all users.
++ [Multi-tenant architectures](search-modeling-multitenant-saas-applications.md) sometimes call for two or more services.
-Azure Cognitive Search does not currently provide an automated method of geo-replicating Azure Cognitive Search indexes across regions, but there are some techniques that can be used that can make this process simple to implement and manage. These are outlined in the next few sections.
+If you need two or more search services, creating them in different regions can meet application requirements for continuity and recovery, as well as faster response times for a global user base.
-The goal of a geo-distributed set of search services is to have two or more indexes available in two or more regions, where a user is routed to the Azure Cognitive Search service that provides the lowest latency as seen in this example:
+Azure Cognitive Search does not currently provide an automated method of geo-replicating search indexes across regions, but there are some techniques that can be used that can make this process simple to implement and manage. These are outlined in the next few sections.
+
+The goal of a geo-distributed set of search services is to have two or more indexes available in two or more regions, where a user is routed to the Azure Cognitive Search service that provides the lowest latency:
![Cross-tab of services by region][1]
+You can implement this architecture by creating multiple services and designing a strategy for data synchronization. Optionally, you can include a resource like Azure Traffic Manager for routing requests. For more information, see [Create a search service](search-create-service-portal.md).
+
+<a name="data-sync"></a>
+ ### Keep data synchronized across multiple services
-There are two options for keeping your distributed search services in sync, which consist of either using the [Azure Cognitive Search Indexer](search-indexer-overview.md) or the Push API (also referred to as the [Azure Cognitive Search REST API](/rest/api/searchservice/)).
+There are two options for keeping two or more distributed search services in sync, which consist of either using the [Azure Cognitive Search Indexer](search-indexer-overview.md) or the Push API (also referred to as the [Azure Cognitive Search REST API](/rest/api/searchservice/)).
-### Use indexers for updating content on multiple services
+#### Option 1: Use indexers for updating content on multiple services
If you are already using indexer on one service, you can configure a second indexer on a second service to use the same data source object, pulling data from the same location. Each service in each region has its own indexer and a target index (your search index is not shared, which means data is duplicated), but each indexer references the same data source.
Here is a high-level visual of what that architecture would look like.
![Single data source with distributed indexer and service combinations][2]
-### Use REST APIs for pushing content updates on multiple services
+#### Option 2: Use REST APIs for pushing content updates on multiple services
-If you are using the Azure Cognitive Search REST API to [push content in your Azure Cognitive Search index](/rest/api/searchservice/update-index), you can keep your various search services in sync by pushing changes to all search services whenever an update is required. In your code, make sure to handle cases where an update to one search service fails but succeeds for other search services.
+If you are using the Azure Cognitive Search REST API to [push content to your search index](tutorial-optimize-indexing-push-api.md), you can keep your various search services in sync by pushing changes to all search services whenever an update is required. In your code, make sure to handle cases where an update to one search service fails but succeeds for other search services.
-## Leverage Azure Traffic Manager
+### Use Azure Traffic Manager to coordinate requests
[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) allows you to route requests to multiple geo-located websites that are then backed by multiple search services. One advantage of Traffic Manager is that it can probe Azure Cognitive Search to ensure that it is available and route users to alternate search services in the event of downtime. In addition, if you are routing search requests through Azure Web Sites, Azure Traffic Manager allows you to load balance cases where the website is up but Azure Cognitive Search is not. Here is an example of an architecture that leverages Traffic Manager.

![Cross-tab of services by region, with central Traffic Manager][3]
+## Disaster recovery and service outages
+
+Although we can salvage your data, Azure Cognitive Search does not provide instant failover of the service if there is an outage at the cluster or data center level. If a cluster fails in the data center, the operations team will detect and work to restore service. You will experience downtime during service restoration, but you can request service credits to compensate for service unavailability per the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
+
+If continuous service is required in the event of catastrophic failures outside of Microsoft's control, you could [provision an additional service](search-create-service-portal.md) in a different region and implement a geo-replication strategy to ensure indexes are fully redundant across all services.
+
+Customers who use [indexers](search-indexer-overview.md) to populate and refresh indexes can handle disaster recovery through geo-specific indexers leveraging the same data source. Two services in different regions, each running an indexer, could index the same data source to achieve geo-redundancy. If you are indexing from data sources that are also geo-redundant, be aware that Azure Cognitive Search indexers can only perform incremental indexing (merging updates from new, modified, or deleted documents) from primary replicas. In a failover event, be sure to re-point the indexer to the new primary replica.
+
+If you do not use indexers, you would use your application code to push objects and data to different search services in parallel. For more information, see [Keep data synchronized across multiple services](#data-sync).
+
+## Back up and restore alternatives
+
+Because Azure Cognitive Search is not a primary data storage solution, Microsoft does not provide a formal mechanism for self-service back up and restore. However, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to back up your index definition and snapshot to a series of JSON files, and then use these files to restore the index, if needed. This tool can also move indexes between service tiers.
+
+Otherwise, your application code used for creating and populating an index is the de facto restore option if you delete an index by mistake. To rebuild an index, you would delete it (assuming it exists), recreate the index in the service, and reload by retrieving data from your primary data store.
+ ## Next steps To learn more about the pricing tiers and services limits for each one, see [Service limits](search-limits-quotas-capacity.md). See [Plan for capacity](search-capacity-planning.md) to learn more about partition and replica combinations.
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-performance-tips.md
+
+ Title: Performance tips
+
+description: Learn about tips and best practices for maximizing performance on a search service.
+++++ Last updated : 04/06/2021++
+# Tips for better performance in Azure Cognitive Search
+
+This article is a collection of tips and best practices that are often recommended for boosting performance. Knowing which factors are most likely to impact search performance can help you avoid inefficiencies and get the most out of your search service. Some key factors include:
++ Index composition (schema and size)
++ Query types
++ Service capacity (tier, and the number of replicas and partitions)
+## Index size and schema
+
+Queries run faster on smaller indexes. This is partly a function of having fewer fields to scan, but it's also due to how the system caches content for future queries. After the first query, some content remains in memory where it's searched more efficiently. Because index size tends to grow over time, one best practice is to periodically revisit index composition, both schema and documents, to look for content reduction opportunities. However, if the index is right-sized, the only other calibration you can make is to increase capacity: either by [adding replicas](search-capacity-planning.md#adjust-capacity) or upgrading the service tier. The section ["Tip: Upgrade to a Standard S2 tier"](#tip-upgrade-to-a-standard-s2-tier) shows you how to evaluate the scale-up versus scale-out decision.
+
+Schema complexity can also adversely affect indexing and query performance. Excessive field attribution builds in limitations and processing requirements. [Complex types](search-howto-complex-data-types.md) take longer to index and query. The next few sections explore each condition.
+
+### Tip: Be selective in field attribution
+
+A common mistake that administrators and developers make when creating a search index is selecting all available properties for the fields, rather than only the properties that are needed. For example, if a field doesn't need to be full text searchable, skip that field when setting the searchable attribute.
++
+Support for filters, facets, and sorting can quadruple storage requirements. If you add suggesters, storage requirements go up even more. For an illustration on the impact of attributes on storage, see [Attributes and index size](search-what-is-an-index.md#index-size).
+
+In summary, the ramifications of over-attribution include:
+
++ Degraded indexing performance, due to the extra work required to process the content in the field and then store it within the search inverted index (set the "searchable" attribute only on fields that contain searchable content).
++ A larger surface for each query to cover. All fields marked as searchable are scanned in a full text search.
++ Increased operational costs due to extra storage. Filtering and sorting require additional space for storing original (non-analyzed) strings. Avoid setting filterable or sortable on fields that don't need it.
++ Limited field capabilities. For example, if a field is facetable, filterable, and searchable, you can only store 16 KB of text within the field, whereas a searchable field can hold up to 16 MB of text.
+> [!NOTE]
+> Avoid only *unnecessary* attribution. Filters and facets are often essential to the search experience, and in cases where filters are used, you frequently need sorting so that you can order the results (filters by themselves return results in an unordered set).
+
+### Tip: Consider alternatives to complex types
+
+Complex data types are useful when data has a complicated nested structure, such as the parent-child elements found in JSON documents. The downside of complex types is the extra storage requirements and additional resources required to index the content, in comparison to non-complex data types.
+
+In some cases, you can avoid these tradeoffs by mapping a complex data structure to a simpler field type, such as a Collection. Alternatively, you might opt for flattening a field hierarchy into individual root-level fields.
++
+## Types of queries
+
+The types of queries you send are one of the most important factors for performance, and query optimization can drastically improve performance. When designing queries, think about the following points:
++ **Number of searchable fields.** Each additional searchable field requires additional work by the search service. You can limit the fields being searched at query time using the "searchFields" parameter. It's best to specify only the fields that you care about to improve performance.
++ **Amount of data being returned.** Retrieving a lot of content can make queries slower. When structuring a query, return only those fields that you need to render the results page, and then retrieve remaining fields using the [Lookup API](/rest/api/searchservice/lookup-document) once a user selects a match.
++ **Use of partial term searches.** [Partial term searches](search-query-partial-matching.md), such as prefix search, fuzzy search, and regular expression search, are more computationally expensive than typical keyword searches, as they require full index scans to produce results.
++ **Number of facets.** Adding facets to queries requires aggregations for each query. In general, only add the facets that you plan to render in your app.
++ **Limit high cardinality fields.** A *high cardinality field* refers to a facetable or filterable field that has a significant number of unique values, and as a result, consumes significant resources when computing results. For example, setting a Product ID or Description field as facetable and filterable would count as high cardinality because most of the values from document to document are unique.
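+If diagnostic logging is enabled for your service, a Log Analytics query along the lines of this sketch (using the AzureDiagnostics columns described in [Analyze performance](search-performance-analysis.md)) can help you spot which indexes receive the slowest queries or return the largest result sets:
+
+```kusto
+// Query volume, average duration, and average result size per index over the last day
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| where OperationName == "Query.Search"
+| summarize QueryCount=count(), AvgDurationMs=avg(DurationMs), AvgDocCountReturned=avg(Documents_d) by IndexName_s
+| order by AvgDurationMs desc
+```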
+### Tip: Use search functions instead of overloading filter criteria
+
+As a query uses increasingly [complex filter criteria](search-query-odata-filter.md#filter-size-limitations), the performance of the search query will degrade. Consider the following example that demonstrates the use of filters to trim results based on a user identity:
+
+```json
+$filter= userid eq 123 or userid eq 234 or userid eq 345 or userid eq 456 or userid eq 567
+```
+
+In this case, the filter expressions are used to check whether a single field in each document is equal to one of many possible values of a user identity. You are most likely to find this pattern in applications that implement [security trimming](search-security-trimming-for-azure-search.md) (checking a field containing one or more principal IDs against a list of principal IDs representing the user issuing the query).
+
+A more efficient way to execute filters that contain a large number of values is to use [`search.in` function](search-query-odata-search-in-function.md), as shown in this example:
+
+```json
+search.in(userid, '123,234,345,456,567', ',')
+```
+
+### Tip: Add partitions for slow individual queries
+
+When query performance is slowing down in general, adding more replicas frequently solves the issue. But what if the problem is a single query that takes too long to complete? In this scenario, adding replicas will not help, but additional partitions might. A partition splits data across extra computing resources. Two partitions split data in half, a third partition splits it into thirds, and so forth.
+
+One positive side-effect of adding partitions is that slower queries sometimes perform faster due to parallel computing. We have noted parallelization on low selectivity queries, such as queries that match many documents, or facets providing counts over a large number of documents. Since significant computation is required to score the relevancy of the documents, or to count the numbers of documents, adding extra partitions helps queries complete faster.
+
+To add partitions, use [Azure portal](search-create-service-portal.md), [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md), or a management SDK.
+
+## Service capacity
+
+A service is overburdened when queries take too long or when the service starts dropping requests. If this happens, you can address the problem by upgrading the service or by adding capacity.
+
+The tier of your search service and the number of replicas/partitions also have a big impact on performance. Each higher tier provides faster CPUs and more memory, both of which have a positive impact on performance.
+
+### Tip: Upgrade to a Standard S2 tier
+
+The Standard S1 search tier is often where customers start. A common pattern for S1 services is that indexes grow over time, which requires more partitions. More partitions lead to slower response times, so more replicas are added to handle the query load. As you can imagine, the cost of running an S1 service has now progressed to levels beyond the initial configuration.
+
+At this juncture, an important question to ask is whether it would be beneficial to move to a higher tier, as opposed to progressively increasing the number of partitions or replicas of the current service.
+
+Consider the following topology as an example of a service that has taken on increasing levels of capacity:
++ Standard S1 tier
++ Index Size: 190 GB
++ Partition Count: 8 (on S1, partition size is 25 GB per partition)
++ Replica Count: 2
++ Total Search Units: 16 (8 partitions x 2 replicas)
++ Hypothetical Retail Price: ~$4,000 USD / month (assume $250 USD x 16 search units)
+Suppose the service administrator is still seeing higher latency rates and is considering adding another replica. This would change the replica count from 2 to 3 and, as a result, change the search unit count to 24, for a resulting price of about $6,000 USD per month.
+
+However, if the administrator chose to move to a Standard S2 tier the topology would look like:
++ Standard S2 tier
++ Index Size: 190 GB
++ Partition Count: 2 (on S2, partition size is 100 GB per partition)
++ Replica Count: 2
++ Total Search Units: 4 (2 partitions x 2 replicas)
++ Hypothetical Retail Price: ~$4,000 USD / month ($1,000 USD x 4 search units)
+As this hypothetical scenario illustrates, you can have configurations on lower tiers that result in similar costs as if you had opted for a higher tier in the first place. However, higher tiers come with premium storage, which makes indexing faster. Higher tiers also have much more compute power, as well as extra memory. For the same costs, you could have more powerful infrastructure backing the same index.
+
+An important benefit of added memory is that more of the index can be cached, resulting in lower search latency and a greater number of queries per second. With this extra power, the administrator might not even need to increase the replica count, and could potentially pay less than by staying on the S1 service.
+
+## Next steps
+
+Review these additional articles related to service performance.
++ [Analyze performance](search-performance-analysis.md)
++ [Choose a service tier](search-sku-tier.md)
++ [Add capacity (replicas and partitions)](search-capacity-planning.md#adjust-capacity)
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-sku-manage-costs.md
The minimum charge is the first search unit (one replica x one partition) at the
Beyond the minimum, you can add replicas and partitions independently of each other. Incremental increases in capacity through replicas and partitions will increase your bill based on the following formula: **(replicas x partitions x billing rate)**, where the rate you're charged depends on the pricing tier you select.
-When you're estimating the cost of a search solution, keep in mind that pricing and capacity aren't linear (doubling capacity more than doubles the cost). For an example of how of the formula works, see [How to allocate replicas and partitions](search-capacity-planning.md#how-to-allocate-replicas-and-partitions).
+When you're estimating the cost of a search solution, keep in mind that pricing and capacity aren't linear (doubling capacity more than doubles the cost on the same tier). Also, at some point, switching up to a higher tier can give you better and faster performance at roughly the same price point. For more information and an example, see [Upgrade to a Standard S2 tier](search-performance-tips.md#tip-upgrade-to-a-standard-s2-tier).
### Bandwidth charges
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
Previously updated : 03/12/2021 Last updated : 04/07/2021 # What's new in Azure Cognitive Search
Learn what's new in the service. Bookmark this page to keep up to date with the
| [Semantic search](semantic-search-overview.md) | A collection of query-related features that significantly improve the relevance of search results through minimal adjustments to a query request. </br></br>[Semantic ranking](semantic-ranking.md) computes relevance scores using the semantic meaning behind words and content. </br></br>[Semantic captions](semantic-how-to-query-request.md) return relevant passages from the document that best summarize the document, with highlights over the most important terms or phrases. </br></br>[Semantic answers](semantic-answers.md) return key passages, extracted from a search document, that are formulated as a direct answer to a query that looks like a question. | Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). </br></br>Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview or [Search explorer](search-explorer.md) in Azure portal. </br></br>Region and tier restrictions apply. | | [Spell check query terms](speller-how-to-add.md) | Before query terms reach the search engine, you can have them checked for spelling errors. The `speller` option works with any query type (simple, full, or semantic). | Public preview, REST only, api-version=2020-06-30-Preview| | [SharePoint Online indexer](search-howto-index-sharepoint-online.md) | This indexer connects you to a SharePoint Online site so that you can index content from a document library. | Public preview, REST only, api-version=2020-06-30-Preview |
-| [Normalizers](search-normalizers.md) | Normalizers provide simple text pre-processing like casing, accent removal, asciifolding and so forth without undergoing through the entire analysis chain.| Public preview, REST only, api-version=2020-06-30-Preview |
-[**Custom Entity Lookup skill**](cognitive-search-skill-custom-entity-lookup.md ) | A cognitive skill that looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact. | Generally available. |
-|
+| [Normalizers](search-normalizers.md) | Normalizers provide simple text pre-processing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| Public preview, REST only, api-version=2020-06-30-Preview |
+| [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md ) | A cognitive skill that looks for text from a custom, user-defined list of words and phrases. Using this list, it labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact. | Generally available. |
## February 2021
security-center Container Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/container-security.md
Title: Container Security in Azure Security Center | Microsoft Docs
-description: "Learn about Azure Security Center's container security features."
-
+ Title: Container security with Azure Security Center and Azure Defender
+description: Learn about Azure Security Center's container security features
- Previously updated : 02/07/2021 Last updated : 04/06/2021
Security Center can protect the following container resource types:
| Resource type | Protections offered by Security Center | |:--:|--|
-| ![Kubernetes service](./medi).<br>[Learn more about run-time protection for AKS nodes and clusters](#run-time-protection-for-aks-nodes-and-clusters).|
-| ![Container host](./medi).<br>[Learn more about environment hardening through security recommendations](#environment-hardening).|
-| ![Container registry](./medi).<br>[Learn more about scanning your container images for vulnerabilities](#vulnerability-managementscanning-container-images). |
+| ![Kubernetes service](./medi). This Azure Defender plan defends your Kubernetes clusters whether they're hosted in Azure Kubernetes Service (AKS), on-premises, or on other cloud providers. <br>Learn more about [run-time protection for Kubernetes nodes and clusters](#run-time-protection-for-kubernetes-nodes-and-clusters).|
+| ![Container host](./medi).<br>Learn more about [environment hardening through security recommendations](#environment-hardening).|
+| ![Container registry](./medi).<br>Learn more about [scanning your container images for vulnerabilities](#vulnerability-managementscanning-container-images). |
||| This article describes how you can use Security Center, together with the optional Azure Defender plans for container registries, servers, and Kubernetes, to improve, monitor, and maintain the security of your containers and their apps.
You'll learn how Security Center helps with these core aspects of container secu
- [Vulnerability management - scanning container images](#vulnerability-managementscanning-container-images) - [Environment hardening](#environment-hardening)-- [Run-time protection for AKS nodes and clusters](#run-time-protection-for-aks-nodes-and-clusters)
+- [Run-time protection for Kubernetes nodes and clusters](#run-time-protection-for-kubernetes-nodes-and-clusters)
The following screenshot shows the asset inventory page and the various container resource types protected by Security Center.
For example, you can mandate that privileged containers shouldn't be created, an
Learn more in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
-## Run-time protection for AKS nodes and clusters
+## Run-time protection for Kubernetes nodes and clusters
[!INCLUDE [AKS in ASC threat protection](../../includes/security-center-azure-kubernetes-threat-protection.md)]
security-center Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-container-registries-introduction.md
Title: Azure Defender for container registries - the benefits and features
description: Learn about the benefits and features of Azure Defender for container registries. Previously updated : 9/22/2020 Last updated : 04/07/2021
There are three triggers for an image scan:
- **On push** - Whenever an image is pushed to your registry, Security Center automatically scans that image. To trigger the scan of an image, push it to your repository. -- **Recently pulled** - Since new vulnerabilities are discovered every day, **Azure Defender for container registries** also scans any image that has been pulled within the last 30 days. There's no additional charge for a rescan; as mentioned above, you're billed once per image.
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Azure Defender for container registries** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no additional charge for these rescans; as mentioned above, you're billed once per image.
- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Azure Defender for container registries** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
Previously updated : 04/05/2021 Last updated : 04/06/2021 # Defend Azure Arc enabled Kubernetes clusters running in on-premises and multi-cloud environments
-To defend your on-premises clusters with the same threat detection capabilities offered today for Azure Kubernetes Service clusters, enable Azure Arc on the clusters and deploy the **Azure Defender for Kubernetes cluster extension**
+The **Azure Defender for Kubernetes clusters extension** can defend your on-premises clusters with the same threat detection capabilities offered for Azure Kubernetes Service clusters. Enable [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) on your clusters and deploy the extension as described on this page.
-You can also use the extension to protect Kubernetes clusters deployed on machines in other cloud providers, although not on their managed Kubernetes services.
+The extension can also protect Kubernetes clusters on other cloud providers, although not on their managed Kubernetes services.
> [!TIP] > We've put some sample files to help with the installation process in [Installation examples on GitHub](https://aka.ms/kubernetes-extension-installation-examples).
You can also use the extension to protect Kubernetes clusters deployed on machin
| Aspect | Details | |--||
-| Release state | **Preview** [!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]|
+| Release state | **Preview**<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]|
| Required roles and permissions | [Security admin](../role-based-access-control/built-in-roles.md#security-admin) can dismiss alerts<br>[Security reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings | | Pricing | Requires [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) | | Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer) |
A dedicated recommendation in Azure Security Center provides:
### Use Azure CLI to deploy the Azure Defender extension
-1. Login to Azure:
+1. Log in to Azure:
```azurecli az login
A full list of supported alerts is available in the [reference table of all secu
## Removing the Azure Defender extension
-You can remove the extension using Azure portal, Azure CLI or REST API as explained in the tabs below.
+You can remove the extension using Azure portal, Azure CLI, or REST API as explained in the tabs below.
### [**Azure portal - Arc**](#tab/k8s-remove-arc)
security-center Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-introduction.md
Title: Azure Defender for Kubernetes - the benefits and features
description: Learn about the benefits and features of Azure Defender for Kubernetes. Previously updated : 02/07/2021 Last updated : 04/07/2021
# Introduction to Azure Defender for Kubernetes
-Azure Kubernetes Service (AKS) is Microsoft's managed service for developing, deploying, and managing containerized applications.
+Azure Defender for Kubernetes is the Azure Defender plan providing protections for your Kubernetes clusters wherever they're running.
+
+We can defend clusters in:
+
+- **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications
+
+- **On-premises and multi-cloud environments** - Using an [extension for Arc enabled Kubernetes](defender-for-kubernetes-azure-arc.md)
Azure Security Center and AKS form a cloud-native Kubernetes security offering with environment hardening, workload protection, and run-time protection as outlined in [Container security in Security Center](container-security.md).
-For threat detection for your Kubernetes clusters, enable **Azure Defender for Kubernetes**.
+Host-level threat detection for your Linux AKS nodes is available if you enable [Azure Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on a virtual machine scale set, the Log Analytics agent is not currently supported.
+
-Host-level threat detection for your Linux AKS nodes is available if you enable [Azure Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your AKS cluster is deployed on a virtual machine scale set, the Log Analytics agent is not currently supported.
## Availability
Host-level threat detection for your Linux AKS nodes is available if you enable
## What are the benefits of Azure Defender for Kubernetes?
-Azure Defender for Kubernetes provides **cluster-level threat protection** by monitoring your AKS-managed services through the logs retrieved by Azure Kubernetes Service (AKS).
+Azure Defender for Kubernetes provides **cluster-level threat protection** by monitoring your clusters' logs.
-Examples of security events that Azure Defender for Kubernetes monitors include exposed Kubernetes dashboards, creation of high privileged roles, and the creation of sensitive mounts. For a full list of the AKS cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-akscluster).
+Examples of security events that Azure Defender for Kubernetes monitors include exposed Kubernetes dashboards, creation of high privileged roles, and the creation of sensitive mounts. For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-akscluster).
> [!TIP] > You can simulate container alerts by following the instructions in [this blog post](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-demonstrate-the-new-containers-features-in-azure-security/ba-p/1011270).
Examples of security events that Azure Defender for Kubernetes monitors include
Also, our global team of security researchers constantly monitors the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. >[!NOTE]
-> Security Center generates security alerts for Azure Kubernetes Service actions and deployments occurring **after** you've enabled Azure Defender for Kubernetes.
+> Azure Defender generates security alerts for actions and deployments that occur after you've enabled the Defender for Kubernetes plan on your subscription.
## Azure Defender for Kubernetes - FAQ
-### Can I still get AKS protections without the Log Analytics agent?
+### Can I still get cluster protections without the Log Analytics agent?
The **Azure Defender for Kubernetes** plan provides protections at the cluster level. If you also deploy the Log Analytics agent of **Azure Defender for servers**, you'll get the threat protection for your nodes that's provided with that plan. Learn more in [Introduction to Azure Defender for servers](defender-for-servers-introduction.md).
For Azure Defender to monitor your AKS nodes, they must be running the Log Analy
AKS is a managed service, and since the Log Analytics agent is a Microsoft-managed extension, it is also supported on AKS clusters. ### If my cluster is already running an Azure Monitor for containers agent, do I need the Log Analytics agent too?
-For Azure Defender to monitor your AKS nodes, they must be running the Log Analytics agent.
+For Azure Defender to monitor your nodes, they must be running the Log Analytics agent.
If your clusters are already running the Azure Monitor for containers agent, you can install the Log Analytics agent too and the two agents can work alongside one another without any problems.
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Updates in April include:
- [11 Azure Defender alerts deprecated](#11-azure-defender-alerts-deprecated) - [Two recommendations from "Apply system updates" security control were deprecated](#two-recommendations-from-apply-system-updates-security-control-were-deprecated) - ### Four new recommendations related to guest configuration (preview) Azure's [Guest Configuration extension](../governance/policy/concepts/guest-configuration.md) reports to Security Center to help ensure your virtual machines' in-guest settings are hardened. The extension isn't required for Arc enabled servers because it's included in the Arc Connected Machine agent. The extension requires a system-managed identity on the machine.
Learn more in [Understand Azure Policy's Guest Configuration](../governance/poli
### Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (preview)
-Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new extensions capabilities.
+Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new [extensions capabilities](../azure-arc/kubernetes/extensions.md).
When you've enabled Azure Arc on your non-Azure Kubernetes clusters, a new recommendation from Azure Security Center offers to deploy the Azure Defender extension to them with only a few clicks.
This integration between Azure Security Center, Azure Defender, and Azure Arc en
Learn more in [Use Azure Defender for Kubernetes with your on-premises and multi-cloud Kubernetes clusters](defender-for-kubernetes-azure-arc.md). ++ ### 11 Azure Defender alerts deprecated The eleven Azure Defender alerts listed below have been deprecated.
security-center Security Center Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-alerts-overview.md
Previously updated : 02/25/2021 Last updated : 04/07/2021 # Security alerts and incidents in Azure Security Center
The severity is based on how confident Security Center is in the finding or the
| **High** | There is a high probability that your resource is compromised. You should look into it right away. Security Center has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. | | **Medium** | This is probably a suspicious activity that might indicate that a resource is compromised. Security Center's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location. | | **Low** | This might be a benign positive or a blocked attack. Security Center isn't confident enough that the intent is malicious and the activity might be innocent. For example, clearing logs is an action that might happen when an attacker tries to hide their tracks, but in many cases is a routine operation performed by admins. Security Center doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into. |
-| **Informational** | You will only see informational alerts when you drill down into a security incident, or if you use the REST API with a specific alert ID. An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
+| **Informational** | An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
## Export alerts
security-center Security Center Provide Security Contact Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-provide-security-contact-details.md
Previously updated : 02/09/2021 Last updated : 04/07/2021
You can also manage your email notifications through the supplied REST API. For
This is an example request body for the PUT request when creating a security contact configuration:
+URI: `https://management.azure.com/subscriptions/<SubscriptionId>/providers/Microsoft.Security/securityContacts/default?api-version=2020-01-01-preview`
+ ```json { "properties": {
This is an example request body for the PUT request when creating a security con
}, "alertNotifications": { "state": "On",
- "minimalSeverity": "High"
+ "minimalSeverity": "Medium"
}, "phone": "" }
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/end-to-end.md
+
+ Title: End-to-end security in Azure | Microsoft Docs
+description: The article provides a map of Azure services that help you secure and protect your cloud resources and detect and investigate threats.
+
+ Last updated : 4/07/2021+++
+# End-to-end security in Azure
+One of the best reasons to use Azure for your applications and services is to take advantage of its wide array of security tools and capabilities. These tools and capabilities help make it possible to create secure solutions on the secure Azure platform. Microsoft Azure provides confidentiality, integrity, and availability of customer data, while also enabling transparent accountability.
+
+The following diagram and documentation introduces you to the security services in Azure. These security services help you meet the security needs of your business and protect your users, devices, resources, data, and applications in the cloud.
+
+## Microsoft security services map
+
+The security services map organizes services by the resources they protect (column). The diagram also groups services into the following categories (row):
+
+- Secure and protect - Services that let you implement a layered, defense in-depth strategy across identity, hosts, networks, and data. This collection of security services and capabilities provides a way to understand and improve your security posture across your Azure environment.
+- Detect threats - Services that identify suspicious activities and facilitate mitigating the threat.
+- Investigate and respond - Services that pull logging data so you can assess a suspicious activity and respond.
+
+The diagram includes the Azure Security Benchmark program, a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
++
+## Security controls and baselines
+The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a collection of high-impact security recommendations you can use to help secure the services you use in Azure:
+
+- Security controls - These recommendations are generally applicable across your Azure tenant and Azure services. Each recommendation identifies a list of stakeholders that are typically involved in planning, approval, or implementation of the benchmark.
+- Service baselines - These apply the controls to individual Azure services to provide recommendations on that service's security configuration.
+
+## Secure and protect
++
+| Service | Description |
+||--|
+| [Azure Security Center](../../security-center/security-center-introduction.md)| A unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud - whether they're in Azure or not - as well as on premises. |
+| **Identity&nbsp;&&nbsp;Access&nbsp;Management** | |
+| [Azure Active Directory (AD)](../../active-directory/fundamentals/active-directory-whatis.md)| Microsoft's cloud-based identity and access management service. |
+| | [Conditional Access](../../active-directory/conditional-access/overview.md) is the tool used by Azure AD to bring identity signals together, to make decisions, and enforce organizational policies. |
+| | [Domain Services](../../active-directory-domain-services/overview.md) is the tool used by Azure AD to provide managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. |
+| | [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. |
+| | [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) is the tool used by Azure AD to help safeguard access to data and applications by requiring a second form of authentication. |
+| [Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md) | A tool that allows organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to third-party utilities for further analysis. |
+| **Infrastructure&nbsp;&&nbsp;Network** | |
+| [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) | A virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet and to send encrypted traffic between Azure virtual networks over the Microsoft network. |
+| [Azure DDoS Protection Standard](../../ddos-protection/ddos-protection-overview.md) | Provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. |
+| [Azure Front Door](../../frontdoor/front-door-overview.md) | A global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. |
+| [Azure Firewall](../../firewall/overview.md) | A managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. |
+| [Azure Key Vault](../../key-vault/general/overview.md) | A secure secrets store for tokens, passwords, certificates, API keys, and other secrets. Key Vault can also be used to create and control the encryption keys used to encrypt your data. |
+| [Key Vault Managed HSM (preview)](../../key-vault/managed-hsm/overview.md) | A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. |
+| [Azure Private Link](../../private-link/private-link-overview.md) | Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network. |
+| [Azure Application Gateway](../../application-gateway/overview.md) | An advanced web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers. |
+| [Azure Service Bus](../../service-bus-messaging/service-bus-messaging-overview.md) | A fully managed enterprise message broker with message queues and publish-subscribe topics. Service Bus is used to decouple applications and services from each other. |
+| [Web Application Firewall](../../web-application-firewall/overview.md) | Provides centralized protection of your web applications from common exploits and vulnerabilities. WAF can be deployed with Azure Application Gateway and Azure Front Door. |
+| **Data & Application** | |
+| [Azure Backup](../../backup/backup-overview.md) | Provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. |
+| [Azure Storage Service Encryption](../../storage/common/storage-service-encryption.md) | Automatically encrypts data before it is stored and automatically decrypts the data when you retrieve it. |
+| [Azure Information Protection](https://docs.microsoft.com/azure/information-protection/what-is-information-protection) | A cloud-based solution that enables organizations to discover, classify, and protect documents and emails by applying labels to content. |
+| [API Management](../../api-management/api-management-key-concepts.md) | A way to create consistent and modern API gateways for existing back-end services. |
+| [Azure confidential computing](../../confidential-computing/overview.md) | Allows you to isolate your sensitive data while it's being processed in the cloud. |
+| [Azure DevOps](https://docs.microsoft.com/azure/devops/user-guide/what-is-azure-devops) | Your development projects benefit from multiple layers of security and governance technologies, operational practices, and compliance policies when stored in Azure DevOps. |
+| **Customer Access** | |
+| [Azure AD External Identities](../../active-directory/external-identities/compare-with-b2c.md) | With External Identities in Azure AD, you can allow people outside your organization to access your apps and resources, while letting them sign in using whatever identity they prefer. |
+| | You can share your apps and resources with external users via [Azure AD B2B](../../active-directory/external-identities/what-is-b2b.md) collaboration. |
+| | [Azure AD B2C](../../active-directory-b2c/overview.md) lets you support millions of users and billions of authentications per day, monitoring and automatically handling threats like denial-of-service, password spray, or brute force attacks. |
+
+## Detect threats
++
+| Service | Description |
+||--|
+| [Azure Defender](../../security-center/azure-defender.md) | Brings advanced, intelligent protection of your Azure and hybrid resources and workloads. The Azure Defender dashboard in Security Center provides visibility and control of the cloud workload protection features for your environment. |
+| [Azure Sentinel](../../sentinel/overview.md) | A scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response. |
+| **Identity&nbsp;&&nbsp;Access&nbsp;Management** | |
+| [Microsoft 365 Defender](https://docs.microsoft.com/microsoft-365/security/defender/microsoft-365-defender) | A unified pre- and post-breach enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications to provide integrated protection against sophisticated attacks. |
+| | [Microsoft Defender for Endpoint](https://docs.microsoft.com/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint.md) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. |
+| | [Microsoft Defender for Identity](https://docs.microsoft.com/defender-for-identity/what-is) is a cloud-based security solution that leverages your on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions directed at your organization. |
+| [Azure AD Identity Protection](../../active-directory/identity-protection/howto-identity-protection-configure-notifications.md) | Sends two types of automated notification emails to help you manage user risk and risk detections: Users at risk detected email and Weekly digest email. |
+| **Infrastructure & Network** | |
+| [Azure Defender for IoT](../../defender-for-iot/overview.md) | A unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables you to secure your entire IoT/OT environment, whether you need to protect existing IoT/OT devices or build security into new IoT innovations. |
+| [Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS products, including virtual machines, virtual networks, application gateways, and load balancers. |
+| [Azure Policy audit logging](../../governance/policy/overview.md) | Helps to enforce organizational standards and to assess compliance at-scale. Azure Policy uses activity logs, which are automatically enabled to include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements. |
+| **Data & Application** | |
+| [Azure Defender for container registries](../../security-center/defender-for-container-registries-introduction.md) | Includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. |
+| [Azure Defender for Kubernetes](../../security-center/defender-for-kubernetes-introduction.md) | Provides cluster-level threat protection by monitoring your AKS-managed services through the logs retrieved by Azure Kubernetes Service (AKS). |
+| [Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/what-is-cloud-app-security) | A Cloud Access Security Broker (CASB) that operates on multiple clouds. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your cloud services. |
+
+## Investigate and respond
++
+| Service | Description |
+||--|
+| [Azure Sentinel](../../sentinel/hunting.md) | Powerful search and query tools to hunt for security threats across your organization's data sources. |
+| [Azure&nbsp;Monitor&nbsp;logs&nbsp;and&nbsp;metrics](../../azure-monitor/overview.md) | Delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Azure Monitor [collects and aggregates data](../../azure-monitor/data-platform.md#observability-data-in-azure-monitor) from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting. |
+| **Identity&nbsp;&&nbsp;Access&nbsp;Management** | |
+| [Azure&nbsp;AD&nbsp;reports&nbsp;and&nbsp;monitoring](https://docs.microsoft.com/azure/active-directory/reports-monitoring/) | [Azure AD reports](../../active-directory/reports-monitoring/overview-reports.md) provide a comprehensive view of activity in your environment. |
+| | [Azure AD monitoring](../../active-directory/reports-monitoring/overview-monitoring.md) lets you route your Azure AD activity logs to different endpoints.|
+| [Azure AD PIM audit history](../../active-directory/privileged-identity-management/pim-how-to-use-audit-log.md) | Shows all role assignments and activations within the past 30 days for all privileged roles. |
+| **Data & Application** | |
+| [Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/investigate) | Provides tools to gain a deeper understanding of what's happening in your cloud environment. |
+
+## Next steps
+
+- Understand your [shared responsibility in the cloud](shared-responsibility.md).
+
+- Understand the [isolation choices in the Azure cloud](isolation-choices.md) against both malicious and non-malicious users.
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-logstash.md
The Logstash engine comprises three components:
- Output plugins: Customized sending of collected and processed data to various destinations. > [!NOTE]
-> Azure Sentinel supports its own provided output plugin only. It does not support third-party output plugins for Azure Sentinel, or any other Logstash plugin of any type.
+> - Azure Sentinel supports its own provided output plugin only. The current version of this plugin is v1.0.0, released 2020-08-25. It does not support third-party output plugins for Azure Sentinel, or any other Logstash plugin of any type.
+>
+> - Azure Sentinel's Logstash output plugin supports only **Logstash versions from 7.0 to 7.9**.
The Azure Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs.
Use the information in the Logstash [Structure of a config file](https://www.ela
| Field name | Data type | Description | |-||--|
-| `workspace_id` | string | Enter your workspace ID GUID. * |
-| `workspace_key` | string | Enter your workspace primary key GUID. * |
+| `workspace_id` | string | Enter your workspace ID GUID (see Tip). |
+| `workspace_key` | string | Enter your workspace primary key GUID (see Tip). |
| `custom_log_table_name` | string | Set the name of the table into which the logs will be ingested. Only one table name per output plugin can be configured. The log table will appear in Azure Sentinel under **Logs**, in **Tables** in the **Custom Logs** category, with a `_CL` suffix. | | `endpoint` | string | Optional field. By default, this is the Log Analytics endpoint. Use this field to set an alternative endpoint. | | `time_generated_field` | string | Optional field. This property overrides the default **TimeGenerated** field in Log Analytics. Enter the name of the timestamp field in the data source. The data in the field must conform to the ISO 8601 format (`YYYY-MM-DDThh:mm:ssZ`) | | `key_names` | array | Enter a list of Log Analytics output schema fields. Each list item should be enclosed in single quotes and the items separated by commas, and the entire list enclosed in square brackets. See example below. | | `plugin_flush_interval` | number | Optional field. Set to define the maximum interval (in seconds) between message transmissions to Log Analytics. The default is 5. |
- | `amount_resizing` | boolean | True or false. Enable or disable the automatic scaling mechanism, which adjusts the message buffer size according to the volume of log data received. |
+| `amount_resizing` | boolean | True or false. Enable or disable the automatic scaling mechanism, which adjusts the message buffer size according to the volume of log data received. |
| `max_items` | number | Optional field. Applies only if `amount_resizing` set to "false." Use to set a cap on the message buffer size (in records). The default is 2000. | | `azure_resource_id` | string | Optional field. Defines the ID of the Azure resource where the data resides. <br>The resource ID value is especially useful if you are using [resource-context RBAC](resource-context-rbac.md) to provide access to specific data only. | | | | |
-* You can find the workspace ID and primary key in the workspace resource, under **Agents management**.
+> [!TIP]
+> - You can find the workspace ID and primary key in the workspace resource, under **Agents management**.
+> - **However**, because having credentials and other sensitive information stored in cleartext in configuration files is not in line with security best practices, you are strongly encouraged to make use of the **Logstash key store** in order to securely include your **workspace ID** and **workspace primary key** in the configuration. See [Elastic's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-logstash-user.html) for instructions.
#### Sample configurations
If you are not seeing any data in this log file, generate and send some events l
## Next steps In this document, you learned how to use Logstash to connect external data sources to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started detecting threats with Azure Sentinel, using [built-in](tutorial-detect-threats-built-in.md) or [custom](tutorial-detect-threats-custom.md) rules.
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cross-availability-zones.md
The recommended topology for the primary node type requires the resources outlin
* A NSG referenced by the subnet in which you deploy your virtual machine scale sets. >[!NOTE]
-> The virtual machine scale set single placement group property must be set to true, since Service Fabric does not support a single virtual machine scale set which spans zones.
+> The virtual machine scale set single placement group property must be set to true.
![Diagram that shows the Azure Service Fabric Availability Zone architecture.][sf-architecture]
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-stateless-node-types.md
Title: Deploy Stateless-only node types in a Service Fabric cluster
-description: Learn how to create and deploy stateless node types in Azure Service fabric cluster.
+description: Learn how to create and deploy stateless node types in Azure Service Fabric cluster.
Last updated 09/25/2020
-# Deploy an Azure Service Fabric cluster with stateless-only node types (Preview)
+# Deploy an Azure Service Fabric cluster with stateless-only node types
Service Fabric node types come with the inherent assumption that, at some point in time, stateful services might be placed on the nodes. Stateless node types relax this assumption for a node type, allowing the node type to use other features such as faster scale-out operations, support for Automatic OS Upgrades on Bronze durability, and scaling out to more than 100 nodes in a single virtual machine scale set. * Primary node types cannot be configured to be stateless
Service Fabric node types come with inherent assumption that at some point of ti
Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates) ## Enabling stateless node types in Service Fabric cluster
-To set one or more node types as stateless in a cluster resource, set the **isStateless** property to "true". When deploying a Service Fabric cluster with stateless node types, do remember to have atleast one primary node type in the cluster resource.
+To set one or more node types as stateless in a cluster resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, remember to include at least one primary node type in the cluster resource.
* The Service Fabric cluster resource apiVersion should be "2020-12-01-preview" or higher.
To set one or more node types as stateless in a cluster resource, set the **isSt
}, "httpGatewayEndpointPort": "[parameters('nt0fabricHttpGatewayPort')]", "isPrimary": true,
- "isStateless": false,
+ "isStateless": false, // Primary Node Types cannot be stateless
"vmInstanceCount": "[parameters('nt0InstanceCount')]" }, {
To set one or more node types as stateless in a cluster resource, set the **isSt
To enable stateless node types, you should configure the underlying virtual machine scale set resource in the following way: * The **singlePlacementGroup** property, which should be set to **false** if you need to scale to more than 100 VMs.
-* The Scale set's **upgradePolicy** **mode** should be set to **Rolling**.
+* The Scale set's **upgradeMode** should be set to **Rolling**.
* Rolling upgrade mode requires an Application Health extension or health probes to be configured. Configure the health probe with the default configuration for stateless node types as suggested below. Once applications are deployed to the node type, the health probe/health extension ports can be changed to monitor application health. >[!NOTE]
-> It is required that the platform fault domain count is updated to 5 when a stateless node type is backed by a virtual machine scale set which is spanning multiple zones. Please see this [template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-2-NodeTypes-Windows-Stateless-CrossAZ-Secure) for more details.
->
-> **platformFaultDomainCount:5**
+> When using autoscaling with stateless node types, node state is not automatically cleaned up after a scale-down operation. To clean up the NodeState of down nodes during autoscale, use the [Service Fabric AutoScale Helper](https://github.com/Azure/service-fabric-autoscale-helper).
+ ```json {
- "apiVersion": "2018-10-01",
+ "apiVersion": "2019-03-01",
"type": "Microsoft.Compute/virtualMachineScaleSets", "name": "[parameters('vmNodeType1Name')]", "location": "[parameters('computeLocation')]",
To enable stateless node types, you should configure the underlying virtual mach
"automaticOSUpgradePolicy": { "enableAutomaticOSUpgrade": true }
- }
- }
+ },
+ "platformFaultDomainCount": 5
+ },
"virtualMachineProfile": { "extensionProfile": { "extensions": [
To enable stateless node types, you should configure the underlying virtual mach
} ```
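The health probe referenced by the scale set isn't visible in the truncated fragment above. As a rough, non-authoritative sketch, a default TCP probe on the Standard Load Balancer could look like this; the probe name and port are assumptions:

```json
{
  // Hypothetical default probe - adjust the port once applications are deployed
  "name": "FabricGatewayProbe",
  "properties": {
    "protocol": "Tcp",
    "port": 19000,
    "intervalInSeconds": 5,
    "numberOfProbes": 2
  }
}
```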
+## Configuring Stateless node types with multiple Availability Zones
+To configure a stateless node type that spans multiple availability zones, follow the documentation [here](https://docs.microsoft.com/azure/service-fabric/service-fabric-cross-availability-zones#preview-enable-multiple-availability-zones-in-single-virtual-machine-scale-set), with the following changes (see the sketch at the end of this section):
+
+* Set **singlePlacementGroup** : **false** if multiple placement groups need to be enabled.
+* Set **upgradeMode** : **Rolling** and add Application Health Extension/Health Probes as mentioned above.
+* Set **platformFaultDomainCount** : **5** for virtual machine scale set.
+
+>[!NOTE]
+> Irrespective of the VMSSZonalUpgradeMode configured in the cluster, virtual machine scale set updates always happen sequentially, one availability zone at a time, for a stateless node type that spans multiple zones, because it uses the rolling upgrade mode.
+
+For reference, look at the [template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-2-NodeTypes-Windows-Stateless-CrossAZ-Secure) for configuring Stateless node types with multiple Availability Zones.
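Combining the settings listed above, here is a minimal sketch of the relevant parts of the scale set resource, assuming the same API version and parameter names as the earlier fragment; the zone list is an assumption for illustration, and the health probe/extension settings required by rolling upgrades are omitted:

```json
{
  "apiVersion": "2019-03-01",
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "[parameters('vmNodeType1Name')]",
  "location": "[parameters('computeLocation')]",
  // Assumed zone list - span the zones available in your region
  "zones": [ "1", "2", "3" ],
  "properties": {
    "singlePlacementGroup": false,
    "upgradePolicy": {
      "mode": "Rolling",
      "automaticOSUpgradePolicy": {
        "enableAutomaticOSUpgrade": true
      }
    },
    "platformFaultDomainCount": 5
  }
}
```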
+ ## Networking requirements ### Public IP and Load Balancer Resource To enable scaling to more than 100 VMs on a virtual machine scale set resource, the load balancer and IP resource referenced by that virtual machine scale set must both be using a *Standard* SKU. Creating a load balancer or IP resource without the SKU property will create a Basic SKU, which does not support scaling to more than 100 VMs. A Standard SKU load balancer will block all traffic from the outside by default; to allow outside traffic, an NSG must be deployed to the subnet.
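As a sketch of the SKU requirement described above, a Standard SKU public IP resource might be declared as follows; the resource name and API version are assumptions, not taken from the sample template:

```json
{
  "apiVersion": "2018-11-01",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "[parameters('publicIPName')]",
  "location": "[parameters('computeLocation')]",
  // Omitting the sku block creates a Basic SKU, which can't scale past 100 VMs
  "sku": { "name": "Standard" },
  "properties": {
    // Standard SKU public IPs use static allocation
    "publicIPAllocationMethod": "Static"
  }
}
```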
To enable scaling to more than 100 VMs on a virtual machine scale set resource,
``` >[!NOTE]
-> It is not possible to do an in-place change of SKU on the public IP and load balancer resources. If you are migrating from existing resources which have a Basic SKU, see the migration section of this article.
+> It is not possible to do an in-place change of SKU on the public IP and load balancer resources.
### Virtual machine scale set NAT rules The load balancer inbound NAT rules should match the NAT pools from the virtual machine scale set. Each virtual machine scale set must have a unique inbound NAT pool.
Standard Load Balancer and Standard Public IP introduce new abilities and differ
-### Migrate to using Stateless node types from a cluster using a Basic SKU Load Balancer and a Basic SKU IP
+## Migrate to using Stateless node types in a cluster
For all migration scenarios, a new stateless-only node type needs to be added. An existing node type cannot be migrated to be stateless-only. To migrate a cluster that was using a Load Balancer and IP with a Basic SKU, you must first create an entirely new Load Balancer and IP resource using the Standard SKU. It is not possible to update these resources in-place.
To begin, you will need to add the new resources to your existing Resource Manag
Once the resources have finished deploying, you can begin to disable the nodes in the node type that you want to remove from the original cluster.
->[!NOTE]
-> While using AutoScaling with Stateless nodetypes with Bronze Durability, after scale down operation, node state is not automatically cleaned up. In order to cleanup the NodeState of Down Nodes during AutoScale, using [Service Fabric AutoScale Helper](https://github.com/Azure/service-fabric-autoscale-helper) is advised.
- ## Next steps * [Reliable Services](service-fabric-reliable-services-introduction.md) * [Node types and virtual machine scale sets](service-fabric-cluster-nodetypes.md)
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-architecture.md
Title: Azure to Azure disaster recovery architecture in Azure Site Recovery description: Overview of the architecture used when you set up disaster recovery between Azure regions for Azure VMs, using the Azure Site Recovery service. -- Last updated 3/13/2020-+
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Title: Set up disaster recovery after migration to Azure with Azure Site Recovery description: This article describes how to prepare machines to set up disaster recovery between Azure regions after migration to Azure using Azure Site Recovery. -- Last updated 11/14/2019- # Set up disaster recovery for Azure VMs after migration to Azure
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery
description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Last updated 11/29/2020-+ # Support matrix for Azure VM disaster recovery between Azure regions
site-recovery Concepts Types Of Failback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/concepts-types-of-failback.md
Title: Failback during disaster recovery with Azure Site Recovery | Microsoft Docs description: This article provides an overview of various types of failback and caveats to be considered while failing back to on-premises during disaster recovery with the Azure Site Recovery service.-- Last updated 08/07/2019-+ # Failback of VMware VMs after disaster recovery to Azure
site-recovery Hyper V Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-architecture.md
Title: Hyper-V disaster recovery architecture in Azure Site Recovery description: This article provides an overview of components and architecture used when deploying disaster recovery for on-premises Hyper-V VMs (without VMM) to Azure with the Azure Site Recovery service.-- Last updated 11/14/2019-+
site-recovery Hyper V Azure Failover Failback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-failover-failback-tutorial.md
Title: Set up failover of Hyper-V VMs to Azure in Azure Site Recovery description: Learn how to fail over Hyper-V VMs to Azure with Azure Site Recovery.-- Last updated 12/16/2019-
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-support-matrix.md
Title: Support for disaster recovery of Hyper-V VMs to Azure with Azure Site Recovery description: Summarizes the supported components and requirements for Hyper-V VM disaster recovery to Azure with Azure Site Recovery-- Last updated 7/14/2020-
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-tutorial.md
Title: Set up Hyper-V disaster recovery using Azure Site Recovery description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without VMM) to Azure by using Site Recovery.-- Last updated 11/12/2019- # Set up disaster recovery of on-premises Hyper-V VMs to Azure
site-recovery Hyper V Prepare On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-prepare-on-premises-tutorial.md
Title: Prepare for disaster recovery of Hyper-V VMs to Azure with Azure Site Recovery description: Learn how to prepare on-premises Hyper-V VMs for disaster recovery to Azure with Azure Site Recovery.- Last updated 11/12/2019-
site-recovery Hyper V Vmm Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-architecture.md
Title: Architecture-Hyper-V disaster recovery to a secondary site with Azure Site Recovery description: This article provides an overview of the architecture for disaster recovery of on-premises Hyper-V VMs to a secondary System Center VMM site with Azure Site Recovery.-- Last updated 11/12/2019- # Architecture - Hyper-V replication to a secondary site
site-recovery Hyper V Vmm Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-disaster-recovery.md
Title: Set up Hyper-V disaster recovery to a secondary site with Azure Site Recovery description: Learn how to set up disaster recovery for Hyper-V VMs between your on-premises sites with Azure Site Recovery.-- Last updated 11/14/2019- # Set up disaster recovery for Hyper-V VMs to a secondary on-premises site
site-recovery Hyper V Vmm Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-failover-failback.md
Title: Set up failover/failback to a secondary Hyper-V site with Azure Site Recovery description: Learn how to fail over Hyper-V VMs to your secondary on-premises site and fail back to primary site, during disaster recovery with Azure Site Recovery. -- Last updated 11/14/2019-
site-recovery Hyper V Vmm Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-network-mapping.md
Title: About Hyper-V (with VMM) network mapping with Site Recovery description: Describes how to set up network mapping for disaster recovery of Hyper-V VMs (managed in VMM clouds) to Azure, with Azure Site Recovery.-- Last updated 11/14/2019-+
site-recovery Hyper V Vmm Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-networking.md
Title: Set up IP addressing after failover to a secondary site with Azure Site Recovery description: Describes how to set up IP addressing for connecting to VMs in a secondary on-premises site after disaster recovery and failover with Azure Site Recovery.-- Last updated 11/12/2019-+ # Set up IP addressing to connect to a secondary on-premises site after failover
site-recovery Hyper V Vmm Secondary Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-secondary-support-matrix.md
Title: Support matrix-Hyper-V disaster recovery to a secondary VMM site with Azure Site Recovery description: Summarizes support for Hyper-V VM replication in VMM clouds to a secondary site with Azure Site Recovery.-- Last updated 11/06/2019- # Support matrix for disaster recovery of Hyper-V VMs to a secondary site
site-recovery Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/migrate-overview.md
Title: Compare Azure Migrate and Site Recovery for migration to Azure description: Summarizes the advantages of using Azure Migrate for migration, instead of Site Recovery. -- Last updated 08/06/2020-
site-recovery Migrate Tutorial Aws Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/migrate-tutorial-aws-azure.md
Title: Migrate AWS VMs to Azure with Azure Migrate description: This article describes options for migrating AWS instances to Azure, and recommends Azure Migrate. -- Last updated 07/27/2019-
site-recovery Migrate Tutorial On Premises Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/migrate-tutorial-on-premises-azure.md
Title: Migrate on-premises machines with Azure Migrate description: This article summarizes how to migrate on-premises machines to Azure, and recommends Azure Migrate.- Last updated 07/27/2020-
site-recovery Migrate Tutorial Windows Server 2008 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/migrate-tutorial-windows-server-2008.md
Title: Migrate Windows Server 2008 servers to Azure with Azure Migrate/Site Recovery description: This article describes how to migrate on-premises Windows Server 2008 machines to Azure, and recommends Azure Migrate.-- Last updated 07/27/2020-
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/monitor-log-analytics.md
Title: Monitor Azure Site Recovery with Azure Monitor Logs description: Learn how to monitor Azure Site Recovery with Azure Monitor Logs (Log Analytics)-- Last updated 11/15/2019-+ # Monitor Site Recovery with Azure Monitor Logs
site-recovery Monitoring Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/monitoring-common-questions.md
Title: Common questions about Azure Site Recovery monitoring description: Get answers to common questions about Azure Site Recovery monitoring, using inbuilt monitoring and Azure Monitor (Log Analytics)-- Last updated 07/31/2019 - # Common questions about Site Recovery monitoring
site-recovery Physical Azure Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/physical-azure-disaster-recovery.md
Title: Set up disaster recovery of physical on-premis