Updates from: 06/19/2021 03:05:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 05/04/2021 Last updated : 06/18/2021
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| client_id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com). | | response_type |Required |The response type, which must include `code` for the authorization code flow. | | redirect_uri |Required |The redirect URI of your app, where authentication responses are sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
-| scope |Required |A space-separated list of scopes. A single scope value indicates to Azure Active Directory (Azure AD) both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
+| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application will need a *refresh token* for extended access to resources. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. | | state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. | | prompt |Optional |The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on will not take effect. |
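As a non-authoritative illustration (the `contoso` tenant, `b2c_1_signin` user flow, redirect URI, and API scope below are assumed placeholders, not values from the article), the parameters in the table above might be assembled into an authorization request like the following. In practice the URL is opened in the user's browser; `curl -G` is used here only to show how the query string is built:

```console
# Hypothetical tenant "contoso" and user flow "b2c_1_signin"; client_id reused from the example above.
curl -G "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signin/oauth2/v2.0/authorize" \
  --data-urlencode "client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6" \
  --data-urlencode "response_type=code" \
  --data-urlencode "redirect_uri=https://jwt.ms" \
  --data-urlencode "scope=openid offline_access https://contoso.onmicrosoft.com/api/demo.read" \
  --data-urlencode "response_mode=query" \
  --data-urlencode "state=arbitrary-app-state"
```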
active-directory-b2c Conditional Access Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-technical-profile.md
Previously updated : 05/13/2021 Last updated : 06/18/2021
The **OutputClaims** element contains a list of claims generated by the Conditio
| ClaimReferenceId | Required | Data Type | Description |
| --- | --- | --- | --- |
-| Challenges | Yes |stringCollection | List of actions to remediate the identified threat. Possible values: `block` |
-| MultiConditionalAccessStatus | Yes | stringCollection | |
+| Challenges | Yes | stringCollection | List of actions to remediate the identified threat. Possible values: `block`, `mfa`, and `chg_pwd`. |
+| MultiConditionalAccessStatus | Yes | stringCollection | The status of conditional access evaluation. |
The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements that are used to modify the output claims or generate new ones.
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect.md
Previously updated : 03/15/2021 Last updated : 06/18/2021
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| client_id | Yes | The application ID that the [Azure portal](https://portal.azure.com/) assigned to your application. | | nonce | Yes | A value included in the request (generated by the application) that is included in the resulting ID token as a claim. The application can then verify this value to mitigate token replay attacks. The value is typically a randomized unique string that can be used to identify the origin of the request. | | response_type | Yes | Must include an ID token for OpenID Connect. If your web application also needs tokens for calling a web API, you can use `code+id_token`. |
-| scope | Yes | A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application will need a *refresh token* for extended access to resources. |
+| scope | Yes | A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application will need a *refresh token* for extended access to resources. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
| prompt | No | The type of user interaction that's required. The only valid value at this time is `login`, which forces the user to enter their credentials on that request. | | redirect_uri | No | The `redirect_uri` parameter of your application, where authentication responses can be sent and received by your application. It must exactly match one of the `redirect_uri` parameters that you registered in the Azure portal, except that it must be URL encoded. | | response_mode | No | The method that is used to send the resulting authorization code back to your application. It can be either `query`, `form_post`, or `fragment`. The `form_post` response mode is recommended for best security. |
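For comparison, a similarly hedged sketch of an OpenID Connect request built from these parameters (again assuming a `contoso` tenant and `b2c_1_signin` user flow). Note the `nonce` value and the `code id_token` response type, whose space separator is URL-encoded when the request is sent:

```console
# Hypothetical values only; a real app generates a fresh random nonce per request.
curl -G "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signin/oauth2/v2.0/authorize" \
  --data-urlencode "client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6" \
  --data-urlencode "response_type=code id_token" \
  --data-urlencode "scope=openid offline_access https://contoso.onmicrosoft.com/api/demo.read" \
  --data-urlencode "nonce=random-value-generated-by-the-app" \
  --data-urlencode "response_mode=form_post" \
  --data-urlencode "redirect_uri=https://jwt.ms"
```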
To set the required ID Token in logout requests, see [Configure session behavior
## Next steps -- Learn more about [Azure AD B2C session](session-behavior.md).
+- Learn more about [Azure AD B2C session](session-behavior.md).
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Last updated 09/24/2018
-#Customer intent: As a tenant administrator, I want to restrict an application that I have registered in Azure AD to a select set of users available in my Azure AD tenant
+#Customer intent: As a tenant administrator, I want to restrict an application that I have registered in Azure AD to a select set of users available in my Azure AD tenant
# How to: Restrict your Azure AD app to a set of users in an Azure AD tenant
The option to restrict an app to a specific set of users or security groups in a
> [!NOTE] > This feature is available for web app/web API and enterprise applications only. Apps that are registered as [native](./quickstart-register-app.md) cannot be restricted to a set of users or security groups in the tenant.
-## Update the app to enable user assignment
+## Update the app to require user assignment
-There are two ways to create an application with enabled user assignment. One requires the **Global Administrator** role, the second does not.
+To update an application to require user assignment, you must be an owner of the application under Enterprise apps, or be assigned one of the **Global administrator**, **Application administrator**, or **Cloud application administrator** directory roles.
-### Enterprise applications (requires the Global Administrator role)
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> as a **Global Administrator**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **Enterprise Applications** > **All applications**.
-1. Select the application you want to assign a user or a security group to from the list.
- Use the filters at the top of the window to search for a specific application.
+1. Select the application you want to configure to require assignment. Use the filters at the top of the window to search for a specific application.
1. On the application's **Overview** page, under **Manage**, select **Properties**.
-1. Locate the setting **User assignment required?** and set it to **Yes**. When this option is set to **Yes**, users in the tenant must first be assigned to this application or they won't be able to sign-in to this application.
+1. Locate the setting **User assignment required?** and set it to **Yes**. When this option is set to **Yes**, users and services attempting to access the application must first be assigned to this application, or they won't be able to sign in or obtain an access token.
1. Select **Save**.
-### App registration
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**.
-1. Create or select the app you want to manage. You need to be the **Owner** of this application.
-1. On the application's **Overview** page, select the **Managed application in local directory** link in the **Essentials** section.
-1. Under **Manage**, select **Properties**.
-1. Locate the setting **User assignment required?** and set it to **Yes**. When this option is set to **Yes**, users in the tenant must first be assigned to this application or they won't be able to sign-in to this application.
-1. Select **Save**.
+> [!NOTE]
+> When an application requires assignment, user consent for that application is not allowed. This is true even if user consent for that app would have otherwise been allowed. Be sure to [grant tenant-wide admin consent](../manage-apps/grant-admin-consent.md) to apps that require assignment.
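As a hedged aside (not part of the changed article), the same switch is exposed as the `appRoleAssignmentRequired` property on the app's service principal, so it can also be scripted with the Azure CLI; the object ID below is a placeholder:

```console
# Placeholder object ID of the enterprise application's service principal.
SP_OBJECT_ID=00000000-0000-0000-0000-000000000000

# Turn on the assignment requirement (mirrors "User assignment required?" = Yes in the portal).
az ad sp update --id $SP_OBJECT_ID --set appRoleAssignmentRequired=true

# Confirm the current value.
az ad sp show --id $SP_OBJECT_ID --query appRoleAssignmentRequired
```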
-## Assign users and groups to the app
+## Assign the app to users and groups
-Once you've configured your app to enable user assignment, you can go ahead and assign users and groups to the app.
+Once you've configured your app to enable user assignment, you can go ahead and assign the app to users and groups.
1. Under **Manage**, select **Users and groups** > **Add user/group**. 1. Select the **Users** selector.
Once you've configured your app to enable user assignment, you can go ahead and
A list of users and security groups will be shown along with a textbox to search and locate a certain user or group. This screen allows you to select multiple users and groups in one go. 1. Once you are done selecting the users and groups, select **Select**.
-1. (Optional) If you have defined App roles in your application, you can use the **Select role** option to assign the selected users and groups to one of the application's roles.
-1. Select **Assign** to complete the assignments of users and groups to the app.
+1. (Optional) If you have defined app roles in your application, you can use the **Select role** option to assign the app role to the selected users and groups.
+1. Select **Assign** to complete the assignments of the app to the users and groups.
1. Confirm that the users and groups you added are showing up in the updated **Users and groups** list. ## More information
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-administrator.md
To add B2B collaboration users to the directory, follow these steps:
- **Email address (required)**. The email address of the guest user. - **Personal message (optional)** Include a personal welcome message to the guest user. - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**.
+ - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**. [Learn more](/azure/role-based-access-control/role-assignments-external-users) about Azure roles for external guest users.
> [!NOTE] > Group email addresses aren't supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn't currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
na
ms.devlang: na Previously updated : 09/16/2020 Last updated : 05/16/2021
This article describes how to change the approval and requestor information sett
In the Approval section, you specify whether an approval is required when users request this access package. The approval settings work in the following way: - Only one of the selected approvers or fallback approvers needs to approve a request for single-stage approval. -- Only one of the selected approvers from each stage needs to approve a request for 2-stage approval.-- The approver can be a Manager, Internal sponsor, or External sponsor depending on who the policy is governing access.-- Approval from every selected approver isn't required for single or 2-stage approval.-- The approval decision is based on whichever approver reviews the request first.
+- For multi-stage approval, only one of the selected approvers in each stage needs to approve the request for it to progress to the next stage.
+- If one of the selected approvers in a stage denies the request before another approver in that stage approves it, or if no one approves, the request terminates and the user doesn't receive access.
+- The approver can be a specified user or member of a group, the requestor's Manager, Internal sponsor, or External sponsor depending on who the policy is governing access.
For a demonstration of how to add approvers to a request policy, watch the following video:
Follow these steps to specify the approval settings for requests for the access
1. To require users to provide a justification to request the access package, set the **Require requestor justification** toggle to **Yes**.
-1. Now determine if requests will require single or 2-stage approval. Set the **How many stages** toggle to **1** for single stage approval or set the toggle to **2** for 2-stage approval.
+1. Now determine whether requests will require single or multi-stage approval. Set **How many stages** to the number of approval stages needed.
![Access package - Requests - Approval settings](./media/entitlement-management-access-package-approval-policy/approval.png)
Use the following steps to add approvers after selecting how many stages you req
The justification is visible to other approvers and the requestor.
-### 2-stage approval
+### Multi-stage approval
-If you selected a 2-stage approval, you'll need to add a second approver.
+If you selected a multi-stage approval, you'll need to add an approver for each additional stage.
1. Add the **Second Approver**:
If you selected a 2-stage approval, you'll need to add a second approver.
### Alternate approvers
-You can specify alternate approvers, similar to specifying the first and second approvers who can approve requests. Having alternate approvers will help ensure that the requests are approved or denied before they expire (timeout). You can list alternate approvers the first approver and second approver for 2-stage approval.
+You can specify alternate approvers, similar to specifying the primary approvers who can approve requests in each stage. Having alternate approvers helps ensure that requests are approved or denied before they expire (time out). You can list alternate approvers alongside the primary approvers in each stage.
-By specifying alternate approvers, in the event that the first or second approvers were unable to approve or deny the request, the pending request gets forwarded to the alternate approvers, per the forwarding schedule you specified during policy setup. They receive an email to approve or deny the pending request.
+By specifying alternate approvers for a stage, in the event that the primary approvers are unable to approve or deny the request, the pending request gets forwarded to the alternate approvers, per the forwarding schedule you specified during policy setup. They receive an email to approve or deny the pending request.
-After the request is forwarded to the alternate approvers, the first or second approvers can still approve or deny the request. Alternate approvers use the same My Access site to approve or deny the pending request.
+After the request is forwarded to the alternate approvers, the primary approvers can still approve or deny the request. Alternate approvers use the same My Access site to approve or deny the pending request.
-We can list people or groups of people to be approvers and alternate approvers. Please ensure that you list different sets of people to be the first, second, and alternate approvers.
-For example, if you listed Alice and Bob as the First Approver(s), list Carol and Dave as the alternate approvers. Use the following steps to add alternate approvers to an access package:
+You can list people or groups of people to be approvers and alternate approvers. Please ensure that you list different sets of people to be the first, second, and alternate approvers.
+For example, if you listed Alice and Bob as the first stage approver(s), list Carol and Dave as the alternate approvers. Use the following steps to add alternate approvers to an access package:
-1. Under the First Approver, Second Approver, or both, click **Show advanced request settings**.
+1. Under the approver on a stage, click **Show advanced request settings**.
:::image type="content" source="media/entitlement-management-access-package-approval-policy/alternate-approvers-click-advanced-request.png" alt-text="Access package - Policy - Show advanced request settings":::
active-directory Entitlement Management Process https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-process.md
na
ms.devlang: na Previously updated : 12/23/2020 Last updated : 5/17/2021
The following diagram shows the experience of requestors and the email notificat
![Requestor process flow](./media/entitlement-management-process/requestor-approval-request-flow.png)
-### 2-stage approval
+### Multi-stage approval
The following diagram shows the experience of stage-1 and stage-2 approvers and the email notifications they receive during the request process: ![2-stage approval process flow](./media/entitlement-management-process/2stage-approval-with-request-timeout-flow.png) ### Email notifications table
-The following table provides more detail about each of these email notifications. To manage these emails, you can use rules. For example, in Outlook, you can create rules to move the emails to a folder if the subject contains words from this table:
+The following table provides more detail about each of these email notifications. To manage these emails, you can use rules. For example, in Outlook, you can create rules to move the emails to a folder if the subject contains words from this table. Note that the words will be based on the default language settings of the tenant where the user is requesting access.
| # | Email subject | When sent | Sent to | | | | | |
The following table provides more detail about each of these email notifications
| 5 | Action required reminder: Approve or deny the request by *[date]* for *[requestor]* | This reminder email will be sent to the first approver, if escalation is enabled. The email asks them to take action if they haven't. | First approver | | 6 | Request has expired for *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers after the request has expired. | First approver, stage-1 alternate approvers | | 7 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers upon request completion. | First approver, stage-1 alternate approvers |
-| 8 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers of a 2-stage request when the stage-1 request is approved. | First approver, stage-1 alternate approvers |
+| 8 | Request approved for *[requestor]* to *[access_package]* | This email will be sent to the first approver and stage-1 alternate approvers of a multi-stage request when the stage-1 request is approved. | First approver, stage-1 alternate approvers |
| 9 | Request denied to *[access_package]* | This email will be sent to the requestor when their request is denied | Requestor |
-| 10 | Your request has expired for *[access_package]* | This email will be sent to the requestor at the end of a single or 2-stage request. The email notifies the requestor that the request expired. | Requestor |
+| 10 | Your request has expired for *[access_package]* | This email will be sent to the requestor at the end of a single or multi-stage request. The email notifies the requestor that the request expired. | Requestor |
| 11 | Action required: Approve or deny request by *[date]* | This email will be sent to the second approver, if escalation is disabled, to take action. | Second approver | | 12 | Action required reminder: Approve or deny the request by *[date]* | This reminder email will be sent to the second approver, if escalation is disabled. The notification asks them to take action if they haven't yet. | Second approver | | 13 | Action required: Approve or deny the request by *[date]* for *[requestor]* | This email will be sent to second approver, if escalation is enabled, to take action. | Second approver |
The following table provides more detail about each of these email notifications
When a requestor submits an access request for an access package configured to require approval, all approvers added to the policy will receive an email notification with details of the request. The details in the email include: requestor's name, organization, and business justification; and the requested access start and end date (if provided). The details will also include when the request was submitted and when the request will expire.
-The email includes a link approvers can click on to go to My Access to approve or deny the access request. Here is a sample email notification that is sent to the first approver or second approver (if 2-stage approval is enabled) to complete an access request:
+The email includes a link approvers can click on to go to My Access to approve or deny the access request. Here is a sample email notification that is sent to an approver to complete an access request:
![Approve request to access package email](./media/entitlement-management-shared/approver-request-email.png)
When an access request is denied, an email notification is sent to the requestor
![Requestor request denied email](./media/entitlement-management-process/requestor-email-denied.png)
-### 2-stage approval access request emails
+### Multi-stage approval access request emails
-If 2-stage approval is enabled, at least two approvers must approve the request, one from each stage, before the requestor can receive access.
+If multi-stage approval is enabled, at least one approver from each stage must approve the request before the requestor can receive access.
-During stage-1, the first approver will receive the access request email and make a decision. If they approve the request, all first approvers and alternate approvers in stage-1 (if escalation is enabled) will receive notification that stage-1 is complete. Here is a sample email of the notification that is sent when stage-1 is complete:
-
-![2-stage access request email](./media/entitlement-management-process/approver-request-email-2stage.png)
+During stage-1, the first approver will receive the access request email and make a decision.
After the first or alternate approvers approve the request in stage-1, stage-2 begins. During stage-2, the second approver will receive the access request notification email. After the second approver or alternate approvers in stage-2 (if escalation is enabled) decide to approve or deny the request, notification emails are sent to the first and second approvers, and all alternate approvers in stage-1 and stage-2, as well as the requestor.
After the first or alternate approvers approve the request in stage-1, stage-2 b
Access requests could expire if no approver has approved or denied the request.
-When the request reaches its configured expiration date and expires, it can no longer be approved or denied by the approvers. Here is a sample email of the notification sent to all of the first, second (if 2-stage approval is enabled), and alternate approvers:
-
-![Approvers expired access request email](./media/entitlement-management-process/approver-request-email-expired.png)
+When the request reaches its configured expiration date and expires, it can no longer be approved or denied by the approvers.
-An email notification is also sent to the requestor, notifying them that their access request has expired, and that they need to resubmit the access request. The following diagram shows the experience of the requestor and the email notifications they receive when they request to extend access:
+An email notification is sent to the requestor, notifying them that their access request has expired, and that they need to resubmit the access request. The following diagram shows the experience of the requestor and the email notifications they receive when they request to extend access:
![Requestor extend access process flow](./media/entitlement-management-process/requestor-expiration-request-flow.png)
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
When assignment is *not required*, either because you've set this option to **No
This setting doesn't affect whether or not an application appears on My Apps. Applications appear on users' My Apps access panels once you've assigned a user or group to the application. For background, see [Managing access to apps](what-is-access-management.md).
+> [!NOTE]
+> When an application requires assignment, user consent for that application is not allowed. This is true even if user consent for that app would have otherwise been allowed. Be sure to [grant tenant-wide admin consent](../manage-apps/grant-admin-consent.md) to apps that require assignment.
+ To require user assignment for an application: 1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account or as an owner of the application. 2. Select **Azure Active Directory**. In the left navigation menu, select **Enterprise applications**.
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
You can route Azure Active Directory (Azure AD) activity logs to several endpoints for long term retention and data insights. This feature allows you to: * Archive Azure AD activity logs to an Azure storage account, to retain the data for a long time.
-* Stream Azure AD activity logs to an Azure event hub for analytics, using popular Security Information and Event Management (SIEM) tools, such as Splunk and QRadar.
+* Stream Azure AD activity logs to an Azure event hub for analytics, using popular Security Information and Event Management (SIEM) tools, such as Splunk, QRadar, and Azure Sentinel.
* Integrate Azure AD activity logs with your own custom log solutions by streaming them to an event hub. * Send Azure AD activity logs to Azure Monitor logs to enable rich visualizations, monitoring and alerting on the connected data.
This section answers frequently asked questions and discusses known issues with
* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md) * [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md)
-* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
+* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
To make it easier to digest the data, managed identities for Azure resources sig
- Status -- IP address- - Resource name or ID Select an item in the list view to display all sign-ins that are grouped under a node.
Each JSON download consists of four different files:
* [Sign-in activity report error codes](./concept-sign-ins.md) * [Azure AD data retention policies](reference-reports-data-retention.md)
-* [Azure AD report latencies](reference-reports-latencies.md)
+* [Azure AD report latencies](reference-reports-latencies.md)
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
na Previously updated : 12/15/2020 Last updated : 06/18/2021
In this scenario, sign-in events weren't interrupted by conditional access or mu
This diagnostic scenario provides details about user sign-in events that were expected to be interrupted due to conditional access policies or multifactor authentication. +
+### The account is locked
+
+In this scenario, a user signed in with incorrect credentials too many times.
+
+This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
+
+### Incorrect credentials: invalid username or password
+
+In this scenario, a user tried to sign in using an invalid username or password.
+
+This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
+
+### Enterprise apps service provider
+
+In this scenario, a user tried to sign in to an app, and the sign-in failed due to a problem with the service provider.
+
+### Enterprise apps configuration
+
+In this scenario, a sign-in failed due to an application configuration issue.
+
+#### Error code insights
+
+When an event doesn't have a contextual analysis in the Sign-in Diagnostic, an updated error code explanation and relevant content may be shown. The error code insights contain detailed text about the scenario, how to remediate the problem, and any related content to read about the problem.
+
+#### Legacy Authentication
+
+This diagnostic scenario diagnoses a sign-in event that was blocked or interrupted because the client was attempting to use Basic (also known as legacy) authentication.
+
+Preventing legacy authentication sign-in is recommended as a best practice for security. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI cannot enforce Multi-Factor Authentication (MFA), which makes them preferred entry points for adversaries to attack your organization.
+
+#### B2B Blocked Sign-in
+
+This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization (a B2B sign-in) where a Conditional Access policy requires that the client's device is joined to the resource tenant.
+
+#### Blocked by Risk Policy
+
+In this scenario, an Identity Protection policy blocks a sign-in attempt because the sign-in was identified as risky.
+
+### Security Defaults
+
+This scenario covers sign-in events where the user's sign-in was interrupted due to Security Defaults settings. Security Defaults enforce best practice security for your organization and will require Multi-Factor Authentication (MFA) to be configured and used in many scenarios to prevent password sprays, replay attacks and phishing attempts from being successful.
## Next steps - [What are Azure Active Directory reports?](overview-reports.md)
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
In this article, you can find the information needed to restrict a user's admini
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Read sign-in logs | Reports Reader | Security Reader<br/>Security Administrator |
+> | Read sign-in logs | Reports Reader | Security Reader<br/>Security Administrator<br/> Global Reader |
## Multi-factor authentication
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
This article describes how [Azure App Service](overview.md) runs Python apps, how you can migrate existing apps to Azure, and how you can customize the behavior of App Service when needed. Python apps must be deployed with all the required [pip](https://pypi.org/project/pip/) modules.
-The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md).
+The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) if `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `1`.
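As a minimal, non-authoritative sketch (app and resource group names are placeholders), the build-on-deploy behavior can be turned on with an app setting like this:

```console
# Placeholder app and resource group names; enables the Oryx build during zip/Git deployment.
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
```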
This guide provides key concepts and instructions for Python developers who use a built-in Linux container in App Service. If you've never used Azure App Service, first follow the [Python quickstart](quickstart-python.md) and [Python with PostgreSQL tutorial](tutorial-python-postgresql-app.md).
You can run an unsupported version of Python by building your own container imag
## Customize build automation
-App Service's build system, called Oryx, performs the following steps when you deploy your app using Git or zip packages:
+App Service's build system, called Oryx, performs the following steps when you deploy your app if the app setting `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `1`:
1. Run a custom pre-build script if specified by the `PRE_BUILD_COMMAND` setting. (The script can itself run other Python and Node.js scripts, pip and npm commands, and Node-based tools like yarn, for example, `yarn install` and `yarn build`.)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Previously updated : 05/25/2021 Last updated : 06/18/2021 keywords: "Kubernetes, Arc, Azure, cluster"
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
* Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `az connectedk8s connect`:
- ```azurecli-interactive
+ ```console
oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa ```
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0 * Install the `connectedk8s` Azure CLI extension of version >= 1.0.0:
- ```azurecli-interactive
+ ```console
az extension add --name connectedk8s ``` >[!NOTE]
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
## 1. Register providers for Azure Arc enabled Kubernetes 1. Enter the following commands:
- ```azurecli-interactive
+ ```console
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
```
2. Monitor the registration process. Registration may take up to 10 minutes.
- ```azurecli-interactive
+ ```console
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
Run the following command:
-```azurecli-interactive
+```console
az group create --name AzureArcTest --location EastUS --output table ```
eastus AzureArcTest
## 3. Connect an existing Kubernetes cluster Run the following command:
-```azurecli-interactive
+
+```console
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest ```
Helm release deployment succeeded
Run the following command:
-```azurecli-interactive
+```console
az connectedk8s list --resource-group AzureArcTest --output table ```
If your cluster is behind an outbound proxy server, Azure CLI and the Azure Arc
2. Run the connect command with proxy parameters specified:
- ```azurecli-interactive
+ ```console
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file> ```
Azure Arc enabled Kubernetes deploys a few operators into the `azure-arc` namesp
1. View these deployments and pods using:
- ```azurecli-interactive
- kubectl -name azure-arc get deployments,pods
+ ```console
+ kubectl get deployments,pods -n azure-arc
``` 1. Verify all pods are in a `Running` state.
Azure Arc enabled Kubernetes deploys a few operators into the `azure-arc` namesp
You can delete the Azure Arc enabled Kubernetes resource, any associated configuration resources, *and* any agents running on the cluster using Azure CLI using the following command:
-```azurecli-interactive
+```console
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ``` >[!NOTE]
->Deleting the Azure Arc enabled Kubernetes resource using Azure portal removes any associated configuration resources, but *does not* remove any agents running on the cluster. Best practice is to delete the Azure Arc enabled Kubernetes resource using `az connectedk8s delete` instead of Azure portal.
+> Deleting the Azure Arc enabled Kubernetes resource using Azure portal removes any associated configuration resources, but *does not* remove any agents running on the cluster. Best practice is to delete the Azure Arc enabled Kubernetes resource using `az connectedk8s delete` instead of Azure portal.
## Next steps Advance to the next article to learn how to deploy configurations to your connected Kubernetes cluster using GitOps. > [!div class="nextstepaction"]
-> [Deploy configurations using Gitops](tutorial-use-gitops-connected-cluster.md)
+> [Deploy configurations using GitOps](tutorial-use-gitops-connected-cluster.md)
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/configure-networking-how-to.md
# How to configure Azure Functions with a virtual network
-This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
+This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, please refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoint.
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. When configuring your storage account with private endpoints, public access to your function app will be automatically disabled, and your function app will only be accessible through the virtual network.
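As a rough sketch of the service-endpoint variant only (all names are placeholders, and the integration subnet is assumed to already have the `Microsoft.Storage` service endpoint enabled; see the linked tutorial for the full private-endpoint setup):

```console
# Placeholder names throughout; run after the function app's virtual network integration is in place.
# Allow the integration subnet, then deny all other public traffic to the storage account.
az storage account network-rule add \
  --resource-group <resource-group> \
  --account-name <storage-account> \
  --vnet-name <vnet-name> \
  --subnet <integration-subnet>

az storage account update \
  --resource-group <resource-group> \
  --name <storage-account> \
  --default-action Deny
```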
> [!NOTE]
-> This feature currently works for all Windows virtual network-supported SKUs in the Dedicated (App Service) plan and for Premium plans. Consumption plan isn't supported.
+> This feature currently works for all Windows virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. Consumption and Linux Elastic Premium plans aren't supported.
To set up a function with a storage account restricted to a private network:
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
If you think something is missing, you can open a GitHub comment at the
|AuditEvent|Audit Logs|No|
+## Microsoft.Kusto/Clusters
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command|Command|No|
+|FailedIngestion|Failed ingest operations|No|
+|IngestionBatching|Ingestion batching|No|
+|Query|Query|No|
+|SucceededIngestion|Successful ingest operations|No|
+|TableDetails|Table details|No|
+|TableUsageStatistics|Table usage statistics|No|
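As an illustrative, hedged example (the resource and workspace IDs are placeholders), a few of these categories could be routed to a Log Analytics workspace with a diagnostic setting such as:

```console
# Placeholder IDs; adjust the category list from the table above as needed.
az monitor diagnostic-settings create \
  --name kusto-cluster-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Kusto/Clusters/<cluster-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "Command", "enabled": true}, {"category": "Query", "enabled": true}, {"category": "FailedIngestion", "enabled": true}]'
```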
++ ## Microsoft.Logic/integrationAccounts |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|BigDataPoolAppsEnded|Big Data Pool Applications Ended|No|
-## Microsoft.Synapse/workspaces/kustoPools
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command|Command|Yes|
-|FailedIngestion|Failed ingest operations|Yes|
-|IngestionBatching|Ingestion batching|Yes|
-|Query|Query|Yes|
-|SucceededIngestion|Successful ingest operations|Yes|
-|TableDetails|Table details|Yes|
-|TableUsageStatistics|Table usage statistics|Yes|
-- ## Microsoft.Synapse/workspaces/sqlPools |Category|Category Display Name|Costs To Export|
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/usage-estimated-costs.md
The basic Azure Monitor billing model is a cloud-friendly, consumption-based pricing ("Pay-As-You-Go"). You only pay for what you use. Pricing details are available for [alerting, metrics, notifications](https://azure.microsoft.com/pricing/details/monitor/), [Log Analytics](https://azure.microsoft.com/pricing/details/log-analytics/) and [Application Insights](https://azure.microsoft.com/pricing/details/application-insights/).
-In addition to the Pay-As-You-Go model for log data, Log Analytics has Capacity Reservations, which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Capacity Reservation pricing.
+In addition to the Pay-As-You-Go model for log data, Azure Monitor Log Analytics has Commitment Tiers. These enable you to save as much as 30% compared to the Pay-As-You-Go pricing. Commitment Tiers start at just 100 GB/day. Any usage above the Commitment Tier will be billed at the same price-per-GB as the Commitment Tier. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Commitment Tiers pricing.
Some customers will have access to [legacy Log Analytics pricing tiers](logs/manage-cost-storage.md#legacy-pricing-tiers) and the [legacy Enterprise Application Insights pricing tier](app/pricing.md#legacy-enterprise-per-node-pricing-tier).
azure-portal Original Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/original-preferences.md
+
+ Title: Manage Azure portal settings and preferences (older version)
+description: You can change Azure portal default settings to meet your own preferences. This document describes the older version of the settings experience.
Last updated : 06/17/2021
+# Manage Azure portal settings and preferences (older version)
+
+You can change the default settings of the Azure portal to meet your own preferences.
+
+> [!IMPORTANT]
+> We're in the process of moving all Azure users to a newer experience. This topic describes the older experience. For the latest information, see [Manage Azure portal settings and preferences](set-preferences.md).
+
+Most settings are available from the **Settings** menu in the global page header.
+
+![Screenshot showing global page header icons with settings highlighted](./media/original-preferences/header-settings.png)
++
+## Choose your default subscription
+
+You can change the subscription that opens by default when you sign-in to the Azure portal. This is helpful if you have a primary subscription you work with but use others occasionally.
++
+1. Select the directory and subscription filter icon in the global page header.
+
+1. Select the subscriptions you want as the default subscriptions when you launch the portal.
+
+ :::image type="content" source="media/original-preferences/default-directory-subscription-filter.png" alt-text="Select the subscriptions you want as the default subscriptions when you launch the portal.":::
++
+## Choose your default view
+
+You can change the page that opens by default when you sign in to the Azure portal.
+
+![Screenshot showing Azure portal settings with default view highlighted](./media/original-preferences/default-view.png)
+
+- **Home** can't be customized. It displays shortcuts to popular Azure services and lists the resources you've used most recently. We also give you useful links to resources like Microsoft Learn and the Azure roadmap.
+
+- Dashboards can be customized to create a workspace designed just for you. For example, you can build a dashboard that is project, task, or role focused. If you select **Dashboard**, your default view will go to your most recently used dashboard. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).
+
+## Choose a portal menu mode
+
+The default mode for the portal menu controls how much space the portal menu takes up on the page.
+
+![Screenshot that shows how to set the default mode for the portal menu.](./media/original-preferences/menu-mode.png)
+
+- When the portal menu is in **Flyout** mode, it's hidden until you need it. Select the menu icon to open or close the menu.
+
+- If you choose **Docked mode** for the portal menu, it's always visible. You can collapse the menu to provide more working space.
+
+## Choose a theme or enable high contrast
+
+The theme that you choose affects the background and font colors that appear in the Azure portal. You can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you.
+
+Alternatively, you can choose one of the high-contrast themes. The high contrast themes make the Azure portal easier to read for people who have a visual impairment; they override all other theme selections.
+
+![Screenshot showing Azure portal settings with themes highlighted](./media/original-preferences/theme.png)
+
+## Enable or disable pop-up notifications
+
+Notifications are system messages related to your current session. They provide information like your current credit balance, when resources you just created become available, or confirm your last action, for example. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
+
+To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
+
+![Screenshot showing Azure portal settings with pop-up notifications highlighted](./media/original-preferences/pop-up-notifications.png)
+
+To read all notifications received during your current session, select **Notifications** from the global header.
+
+![Screenshot showing Azure portal global header with notifications highlighted](./media/original-preferences/read-notifications.png)
+
+If you want to read notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
+
+## Change the inactivity timeout setting
+
+The inactivity timeout setting helps to protect resources from unauthorized access if you forget to secure your workstation. After you've been idle for a while, you're automatically signed out of your Azure portal session. As an individual, you can change the timeout setting for yourself. If you're an admin, you can set it at the directory level for all your users in the directory.
+
+### Change your individual timeout setting (user)
+
+Select the drop-down under **Sign me out when inactive**. Choose the duration after which your Azure portal session is signed out if you're idle.
+
+![Screenshot showing portal settings with inactive timeout settings highlighted](./media/original-preferences/inactive-sign-out-user.png)
+
+The change is saved automatically. If you're idle, your Azure portal session will sign out after the duration you set.
+
+If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's less than the directory-level setting. Select **Override the directory inactivity timeout policy**, then set a time interval.
+
+![Screenshot showing portal settings with override the directory inactivity timeout policy setting highlighted](./media/original-preferences/inactive-sign-out-override.png)
+
+### Change the directory timeout setting (admin)
+
+Admins in the [Global Administrator role](../active-directory/roles/permissions-reference.md#global-administrator) can enforce the maximum idle time before a session is signed out. The inactivity timeout setting applies at the directory level. The setting takes effect for new sessions. It won't apply immediately to any users who are already signed in. For more information about directories, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
+
+If you're a Global Administrator, and you want to enforce an idle timeout setting for all users of the Azure portal, follow these steps:
+
+1. Select the link text **Configure directory level timeout**.
+
+ ![Screenshot showing portal settings with link text highlighted](./media/original-preferences/settings-admin.png)
+
+1. On the **Configure directory level inactivity timeout** page, select **Enable directory level idle timeout for the Azure portal** to turn on the setting.
+
+1. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be idle before their session is automatically signed out.
+
+1. Select **Apply**.
+
+ ![Screenshot showing page to set directory-level inactivity timeout](./media/original-preferences/configure.png)
+
+To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header. Verify that a success notification is listed.
+
+![Screenshot showing successful notification message for directory-level inactivity timeout](./media/original-preferences/confirmation.png)
+
+## Restore default settings
+
+If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings**. Any changes you've made to portal settings will be lost. This option doesn't affect dashboard customizations.
+
+![Screenshot showing restore of default settings](./media/original-preferences/useful-links-restore-defaults.png)
+
+## Export user settings
+
+Information about your custom settings is stored in Azure. You can export the following user data:
+
+* Private dashboards in the Azure portal
+* User settings like favorite subscriptions or directories, and last logged-in directory
+* Themes and other custom portal settings
+
+It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
+
+To export your portal settings, select **Export all settings**.
+
+![Screenshot showing export of settings](./media/original-preferences/useful-links-export-settings.png)
+
+Exporting settings creates a *.json* file that contains your user settings like your color theme, favorites, and private dashboards. Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
+
+## Delete user settings and dashboards
+
+Information about your custom settings is stored in Azure. You can delete the following user data:
+
+* Private dashboards in the Azure portal
+* User settings like favorite subscriptions or directories, and last logged-in directory
+* Themes and other custom portal settings
+
+It's a good idea to export and review your settings before you delete them. Rebuilding dashboards or redoing custom settings can be time-consuming.
++
+To delete your portal settings, select **Delete all settings and private dashboards**.
+
+![Screenshot showing delete of settings](./media/original-preferences/useful-links-delete-settings.png)
+
+## Change language and regional settings
+
+There are two settings that control how the text in the Azure portal appears:
+- The **Language** setting controls the language you see for text in the Azure portal.
+
+- **Regional format** controls the way dates, time, numbers, and currency are shown.
+
+To change the language that is used in the Azure portal, use the drop-down to select from the list of available languages.
+
+The regional format selection changes to display regional options for only the language you selected. To change that automatic selection, use the drop-down to choose the regional format you want.
+
+For example, if you select English as your language, and then select United States as the regional format, currency is shown in U.S. dollars. If you select English as the language and then select Europe as the regional format, currency is shown in euros.
+
+Select **Apply** to update your language and regional format settings.
+
+ ![Screenshot showing language and regional format settings](./media/original-preferences/language.png)
+
+>[!NOTE]
+>These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's language settings to determine the language to display.
+>
+
+## Next steps
+
+- [Keyboard shortcuts in Azure portal](azure-portal-keyboard-shortcuts.md)
+- [Supported browsers and devices](azure-portal-supported-browsers-devices.md)
+- [Add, remove, and rearrange favorites](azure-portal-add-remove-sort-favorites.md)
+- [Create and share custom dashboards](azure-portal-dashboards.md)
+- [Azure portal how-to video series](azure-portal-video-series.md)
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences
-description: You can change Azure portal default settings to meet your own preferences. Settings include inactive session timeout, default view, menu mode, contrast, theme, notifications, and language and regional formats
-keywords: settings, timeout, language, regional
Previously updated : 03/15/2021
+description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more.
Last updated : 06/17/2021 # Manage Azure portal settings and preferences
-You can change the default settings of the Azure portal to meet your own preferences. Most settings are available from the **Settings** menu in the global page header.
+You can change the default settings of the Azure portal to meet your own preferences.
-![Screenshot showing global page header icons with settings highlighted](./media/set-preferences/header-settings.png)
+Most settings are available from the **Settings** menu in the top-right section of the global page header.
-## Choose your default subscription
+> [!NOTE]
+> We're in the process of moving all users to the newest settings experience described in this topic. For information about the older experience, see [Manage Azure portal settings and preferences (older version)](original-preferences.md).
-You can change the subscription that opens by default when you sign-in to the Azure portal. This is helpful if you have a primary subscription you work with but use others occasionally.
+## Settings overview
+The settings **Overview** pane shows your key settings at a glance and lets you switch directories or view and activate subscription filters.
-1. Select the directory and subscription filter icon in the global page header.
+In the **Directories** section, you can switch to a recently used directory by selecting **Switch** next to the desired directory. For a full list of directories to which you have access, select **See all**.
-1. Select the subscriptions you want as the default subscriptions when you launch the portal.
+If you've [opted in to the new subscription filtering experience](#opt-into-the-new-subscription-filtering-experience), you can change the active filter to view the subscriptions or resources of your choice in the **Subscriptions + filters** section. To view all available filters, select **See all**.
- :::image type="content" source="media/set-preferences/default-directory-subscription-filter.png" alt-text="Select the subscriptions you want as the default subscriptions when you launch the portal.":::
+To change other settings, select any of the items in the pane, or select an item in the left menu bar. You can also use the search menu near the top of the screen to find a setting.
-## Choose your default view
+## Directories
-You can change the page that opens by default when you sign in to the Azure portal.
+In **Directories**, you can select **All Directories** to see a full list of directories to which you have access.
-![Screenshot showing Azure portal settings with default view highlighted](./media/set-preferences/default-view.png)
+To mark a directory as a favorite, select its star icon. Those directories will be listed in the **Favorites** section.
-- **Home** can't be customized. It displays shortcuts to popular Azure services and lists the resources you've used most recently. We also give you useful links to resources like Microsoft Learn and the Azure roadmap.
+To switch to a different directory, select the directory that you want to work in, then select the **Switch** button near the bottom of the screen. You'll be prompted to confirm before switching. If you'd like the new directory to be the default directory whenever you sign in to the Azure portal, you can select the box to make it your startup directory.
-- Dashboards can be customized to create a workspace designed just for you. For example, you can build a dashboard that is project, task, or role focused. If you select **Dashboard**, your default view will go to your most recently used dashboard. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).
-## Choose a portal menu mode
+## Subscriptions + filters
-The default mode for the portal menu controls how much space the portal menu takes up on the page.
+You can choose the subscriptions that are filtered by default when you sign in to the Azure portal by selecting the directory and subscription filter icon in the global page header. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.
-![Screenshot that shows how to set the default mode for the portal menu.](./media/set-preferences/menu-mode.png)
-- When the portal menu is in **Flyout** mode, it's hidden until you need it. Select the menu icon to open or close the menu.
+### Opt into the new subscription filtering experience
-- If you choose **Docked mode** for the portal menu, it's always visible. You can collapse the menu to provide more working space.
+The new subscription filtering experience can help you manage large numbers of subscriptions. You can opt in to this experience at any time when you select the directory and subscription filter icon. If you decide to return to the [previous experience](original-preferences.md#choose-your-default-subscription), you can do so from the **Subscriptions + filters** pane.
-## Choose a theme or enable high contrast
-The theme that you choose affects the background and font colors that appear in the Azure portal. You can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you.
+In the new experience, the **Subscriptions + filters** pane lets you create customized filters. When you activate one of your filters, the full portal experience will be scoped to show only the subscriptions to which the filter applies. You can do this by selecting **Activate** in the **Subscriptions + filters** pane, or in the **Subscriptions + filters** section of the overview pane.
-Alternatively, you can choose one of the high-contrast themes. The high contrast themes make the Azure portal easier to read for people who have a visual impairment; they override all other theme selections.
-![Screenshot showing Azure portal settings with themes highlighted](./media/set-preferences/theme.png)
+The **Default** filter shows all subscriptions to which you have access. This filter is used if there are no other filters, or when the active filter fails to include any subscriptions.
-## Enable or disable pop-up notifications
+You'll also see a filter named **Imported-filter**, which includes all subscriptions that had been selected before opting in to the new filtering experience.
-Notifications are system messages related to your current session. They provide information like your current credit balance, when resources you just created become available, or confirm your last action, for example. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
+### Create a filter
-To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
+To create additional filters of your choice, select **Create a filter** in the **Subscriptions + filters** pane. You can create up to ten filters.
-![Screenshot showing Azure portal settings with pop-up notifications highlighted](./media/set-preferences/popup-notifications.png)
+Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens.
-To read all notifications received during your current session, select **Notifications** from the global header.
-![Screenshot showing Azure portal global header with notifications highlighted](./media/set-preferences/read-notifications.png)
+After you've named your filter, enter at least one condition. In the **Filter type** field, select either **Subscription name**, **Subscription ID**, or **Subscription state**. Then select an operator and enter a value to filter on.
-If you want to read notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
-## Change the inactivity timeout setting
+When you're finished adding conditions, select **Create**. Your filter will then appear in the list in **Subscriptions + filters**.
-The inactivity timeout setting helps to protect resources from unauthorized access if you forget to secure your workstation. After you've been idle for a while, you're automatically signed out of your Azure portal session. As an individual, you can change the timeout setting for yourself. If you're an admin, you can set it at the directory level for all your users in the directory.
+### Modify or delete a filter
-### Change your individual timeout setting (user)
+You can modify or rename an existing filter by selecting the pencil icon in that filter's row. Make your changes, and then select **Apply**.
-Select the drop-down under **Sign me out when inactive**. Choose the duration after which your Azure portal session is signed out if you're idle.
+> [!NOTE]
+> If you modify a filter that is currently active, and the changes result in 0 subscriptions, the **Default** filter will become active instead. You can't activate a filter which doesn't include any subscriptions.
-![Screenshot showing portal settings with inactive timeout settings highlighted](./media/set-preferences/inactive-signout-user.png)
+To delete a filter, select the trash can icon in that filter's row. You can't delete the **Default** filter or any filter that is currently active.
-The change is saved automatically. If you're idle, your Azure portal session will sign out after the duration you set.
+## Appearance
-If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's less than the directory-level setting. Select **Override the directory inactivity timeout policy**, then set a time interval.
+The **Appearance** pane lets you choose menu behavior, your color theme, and whether to use a high-contrast theme.
-![Screenshot showing portal settings with override the directory inactivity timeout policy setting highlighted](./media/set-preferences/inactive-signout-override.png)
-### Change the directory timeout setting (admin)
+### Set menu behavior
-Admins in the [Global Administrator role](../active-directory/roles/permissions-reference.md#global-administrator) can enforce the maximum idle time before a session is signed out. The inactivity timeout setting applies at the directory level. The setting takes effect for new sessions. It won't apply immediately to any users who are already signed in. For more information about directories, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
+The **Menu behavior** section lets you choose how the default Azure portal menu behaves.
-If you're a Global Administrator, and you want to enforce an idle timeout setting for all users of the Azure portal, follow these steps:
+- **Flyout**: The menu will be hidden until you need it. You can select the menu icon in the upper-left corner to open or close the menu.
+- **Docked**: The menu will always be visible. You can collapse the menu to provide more working space.
-1. Select the link text **Configure directory level timeout**.
+### Choose a theme or enable high contrast
- ![Screenshot showing portal settings with link text highlighted](./media/set-preferences/settings-admin.png)
+The theme that you choose affects the background and font colors that appear in the Azure portal. In the **Theme** section, you can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you.
-1. On the **Configure directory level inactivity timeout** page, select **Enable directory level idle timeout for the Azure portal** to turn on the setting.
+Alternatively, you can choose a theme from the **High contrast theme** section. These themes can make the Azure portal easier to read, especially if you have a visual impairment. Selecting either the white or black high-contrast theme will override any other theme selections.
-1. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be idle before their session is automatically signed out.
+## Startup views
-1. Select **Apply**.
+This pane allows you to set options for what you see when you first sign in to the Azure portal.
- ![Screenshot showing page to set directory-level inactivity timeout](./media/set-preferences/configure.png)
-To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header. Verify that a success notification is listed.
+### Startup page
-![Screenshot showing successful notification message for directory-level inactivity timeout](./media/set-preferences/confirmation.png)
+Choose one of the following options for the page you'll see when you first sign in to the Azure portal.
-## Restore default settings
+- **Home**: Displays the home page, with shortcuts to popular Azure services, a list of resources you've used most recently, and useful links to tools, documentation, and more.
+- **Dashboard**: Displays your most recently used dashboard. Dashboards can be customized to create a workspace designed just for you. For example, you can build a dashboard that is project, task, or role focused. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).
-If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings**. Any changes you've made to portal settings will be lost. This option doesn't affect dashboard customizations.
+### Startup directory
-![Screenshot showing restore of default settings](./media/set-preferences/useful-links-restore-defaults.png)
+Choose one of the following options for the directory to work in when you first sign in to the Azure portal.
-## Export user settings
+- **Sign in to your last visited directory**: When you sign in to the Azure portal, you'll start in whichever directory you'd been working in last time.
+- **Select a directory**: Choose this option to select one of your directories. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.
-Information about your custom settings is stored in Azure. You can export the following user data:
+## Language + region
-* Private dashboards in the Azure portal
-* User settings like favorite subscriptions or directories, and last logged-in directory
-* Themes and other custom portal settings
+Choose your language and the regional format that will influence how data such as dates and currency will appear in the Azure portal.
-It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
-To export your portal settings, select **Export all settings**.
+> [!NOTE]
+> These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's settings to determine the language to display.
-![Screenshot showing export of settings](./media/set-preferences/useful-links-export-settings.png)
+### Language
-Exporting settings creates a *.json* file that contains your user settings like your color theme, favorites, and private dashboards. Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
+Use the drop-down list to select from the list of available languages. This setting controls the language you see for text throughout the Azure portal.
-## Delete user settings and dashboards
+### Regional format
-Information about your custom settings is stored in Azure. You can delete the following user data:
+Select an option to control the way dates, time, numbers, and currency are shown in the Azure portal.
-* Private dashboards in the Azure portal
-* User settings like favorite subscriptions or directories, and last logged-in directory
-* Themes and other custom portal settings
+The options shown in the **Regional format** drop-down list change based on the option you selected for **Language**. For example, if you select **English** as your language, and then select **English (United States)** as the regional format, currency is shown in U.S. dollars. If you select **English** as your language and then select **English (Europe)** as the regional format, currency is shown in euros.
-It's a good idea to export and review your settings before you delete them. Rebuilding dashboards or redoing custom settings can be time-consuming.
+Select **Apply** to update your language and regional format settings.
+## Contact information
-To delete your portal settings, select **Delete all settings and private dashboards**.
+This pane lets you update the email address that is used for updates on Azure services, billing, support, or security issues.
-![Screenshot showing delete of settings](./media/set-preferences/useful-links-delete-settings.png)
+You can also opt in to or out of additional emails about Microsoft Azure and other products and services from this pane.
-## Change language and regional settings
+## Signing out + notifications
-There are two settings that control how the text in the Azure portal appears:
-- The **Language** setting controls the language you see for text in the Azure portal.
+This pane lets you manage pop-up notifications and session timeouts.
-- **Regional format** controls the way dates, time, numbers, and currency are shown.
-To change the language that is used in the Azure portal, use the drop-down to select from the list of available languages.
+### Signing out
-The regional format selection changes to display regional options for only the language you selected. To change that automatic selection, use the drop-down to choose the regional format you want.
+The inactivity timeout setting helps to protect resources from unauthorized access if you forget to secure your workstation. After you've been idle for a while, you're automatically signed out of your Azure portal session. As an individual, you can change the timeout setting for yourself. If you're an admin, you can set it at the directory level for all your users in the directory.
-For example, if you select English as your language, and then select United States as the regional format, currency is shown in U.S. dollars. If you select English as the language and then select Europe as the regional format, currency is shown in euros.
+### Change your individual timeout setting (user)
-Select **Apply** to update your language and regional format settings.
+In the drop-down menu next to **Sign me out when inactive**, choose the duration after which your Azure portal session is signed out if you're idle.
++
+Select **Apply** to save your changes. After that, if you're inactive during a portal session, the Azure portal will sign you out after the duration you set.
+
+If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's shorter than the directory-level setting. To do so, select **Override the directory inactivity timeout policy**, then enter a time interval for the **Override value**.
++
+### Change the directory timeout setting (admin)
+
+Admins in the [Global Administrator role](../active-directory/roles/permissions-reference.md#global-administrator) can enforce the maximum idle time before a session is signed out. This inactivity timeout setting applies at the directory level. The setting takes effect for new sessions. It won't apply immediately to any users who are already signed in. For more information about directories, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
+
+If you're a Global Administrator, and you want to enforce an idle timeout setting for all users of the Azure portal, select **Enable directory level idle timeout** to turn on the setting. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be inactive before their session is automatically signed out. After you select **Apply**, this setting will apply to all users in the directory.
++
+To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header and verify that a success notification is listed.
++
+### Enable or disable pop-up notifications
+
+Notifications are system messages related to your current session. They provide information such as your current credit balance, confirmation of your last action, or availability of resources you just created. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
- ![Screenshot showing language and regional format settings](./media/set-preferences/language.png)
+To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
+
+To read all notifications received during your current session, select **Notifications** from the global header.
++
+To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
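+
+If you prefer the command line, the following sketch (assuming the Azure CLI is installed and you're signed in with `az login`) shows one way to list recent Activity log events for the current subscription.
+
+```azurecli-interactive
+# List the 10 most recent Activity log events in table format
+az monitor activity-log list --max-events 10 --output table
+```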
+
+## Export, restore, or delete settings
+
+The settings **Overview** pane lets you export, restore, or delete settings.
++
+### Export user settings
+
+Information about your custom settings is stored in Azure. You can export the following user data:
+
+- Private dashboards in the Azure portal
+- User settings like favorite subscriptions or directories
+- Themes and other custom portal settings
+
+It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
+
+To export your portal settings, select **Export settings** from the top of the settings **Overview** pane. This creates a *.json* file that contains your user settings data.
+
+Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
+
+### Restore default settings
+
+If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the settings **Overview** pane. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
+
+### Delete user settings and dashboards
+
+Information about your custom settings is stored in Azure. You can delete the following user data:
+
+- Private dashboards in the Azure portal
+- User settings like favorite subscriptions or directories
+- Themes and other custom portal settings
+
+It's a good idea to export and review your settings before you delete them. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
+
->[!NOTE]
->These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's language settings to determine the language to display.
->
+To delete your portal settings, select **Delete all settings and private dashboards** from the top of the settings **Overview** pane. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
## Next steps -- [Keyboard shortcuts in Azure portal](azure-portal-keyboard-shortcuts.md)-- [Supported browsers and devices](azure-portal-supported-browsers-devices.md)
+- [Learn about keyboard shortcuts in the Azure portal](azure-portal-keyboard-shortcuts.md)
+- [View supported browsers and devices](azure-portal-supported-browsers-devices.md)
- [Add, remove, and rearrange favorites](azure-portal-add-remove-sort-favorites.md) - [Create and share custom dashboards](azure-portal-dashboards.md)-- [Azure portal how-to video series](azure-portal-video-series.md)
+- [Watch Azure portal how-to videos](azure-portal-video-series.md)
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
Last updated 10/28/2020
# High availability for Azure SQL Database and SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee that your database is up and running minimum of 99.99% of time (For more information regarding specific SLA for different tiers, Please refer [SLA for Azure SQL Database and SQL Managed Instance](https://azure.microsoft.com/support/legal/sl#resiliency) in your app. SQL Database and SQL Managed Instance can quickly recover even in the most critical circumstances ensuring that your data is always available.
+The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee that your database is up and running a minimum of 99.99% of the time, without you having to manage availability in your app. (For more information regarding the specific SLA for different tiers, see [SLA for Azure SQL Database and SQL Managed Instance](https://azure.microsoft.com/support/legal/sl#resiliency).) SQL Database and SQL Managed Instance can quickly recover even in the most critical circumstances, ensuring that your data is always available.
The high availability solution is designed to ensure that committed data is never lost due to failures, that maintenance operations do not affect your workload, and that the database will not be a single point of failure in your software architecture. There are no maintenance windows or downtimes that should require you to stop the workload while the database is upgraded or maintained.
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transact-sql-tsql-differences-sql-server.md
Title: Resolving T-SQL differences-migration
-description: Transact-SQL statements that are less than fully supported in Azure SQL Database.
+description: T-SQL statements that are less than fully supported in Azure SQL Database.
Previously updated : 12/03/2018 Last updated : 06/17/2021
-# Resolving Transact-SQL differences during migration to SQL Database
+# T-SQL differences between SQL Server and Azure SQL Database
-When [migrating your database](migrate-to-database-from-sql-server.md) from SQL Server to Azure SQL Database, you may discover that your SQL Server database requires some re-engineering before it can be migrated. This article provides guidance to assist you in both performing this re-engineering and understanding the underlying reasons why the re-engineering is necessary. To detect incompatibilities, use the [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595).
+When [migrating your database](migrate-to-database-from-sql-server.md) from SQL Server to Azure SQL Database, you may discover that your SQL Server databases require some re-engineering before they can be migrated. This article provides guidance to assist you in both performing this re-engineering and understanding the underlying reasons why the re-engineering is necessary. To detect incompatibilities and migrate databases to Azure SQL Database, use [Data Migration Assistant (DMA)](/sql/dm).
## Overview
-Most Transact-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure SQL Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical, and cursor functions work identically in SQL Server and SQL Database. There are, however, a few T-SQL differences in DDL (data-definition language) and DML (data manipulation language) elements resulting in T-SQL statements and queries that are only partially supported (which we discuss later in this article).
+Most T-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure SQL Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical, and cursor functions work identically in SQL Server and SQL Database. There are, however, a few T-SQL differences in DDL (data definition language) and DML (data manipulation language) elements resulting in T-SQL statements and queries that are only partially supported (which we discuss later in this article).
-In addition, there are some features and syntax that isn't supported at all because Azure SQL Database is designed to isolate features from dependencies on the master database and the operating system. As such, most server-level activities are inappropriate for SQL Database. T-SQL statements and options aren't available if they configure server-level options, operating system components, or specify file system configuration. When such capabilities are required, an appropriate alternative is often available in some other way from SQL Database or from another Azure feature or service.
+In addition, there are some features and syntax that aren't supported at all because Azure SQL Database is designed to isolate features from dependencies on the system databases and the operating system. As such, most instance-level features are not supported in SQL Database. T-SQL statements and options aren't available if they configure instance-level options or operating system components, or specify file system configuration. When such capabilities are required, an appropriate alternative is often available in some other way from SQL Database or from another Azure feature or service.
-For example, high availability is built into Azure SQL Database using technology similar to [Always On Availability Groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server). T-SQL statements related to availability groups are not supported by SQL Database, and the dynamic management views related to Always On Availability Groups are also not supported.
+For example, high availability is built into Azure SQL Database. T-SQL statements related to availability groups are not supported by SQL Database, and the dynamic management views related to Always On Availability Groups are also not supported.
-For a list of the features that are supported and unsupported by SQL Database, see [Azure SQL Database feature comparison](features-comparison.md). The list on this page supplements that guidelines and features article, and focuses on Transact-SQL statements.
+For a list of the features that are supported and unsupported by SQL Database, see [Azure SQL Database feature comparison](features-comparison.md). This page supplements that article, and focuses on T-SQL statements.
-## Transact-SQL syntax statements with partial differences
+## T-SQL syntax statements with partial differences
-The core DDL (data definition language) statements are available, but some DDL statements have extensions related to disk placement and unsupported features.
+The core DDL statements are available, but DDL statement extensions related to unsupported features, such as file placement on disk, are not supported.
-- CREATE and ALTER DATABASE statements have over three dozen options. The statements include file placement, FILESTREAM, and service broker options that only apply to SQL Server. This may not matter if you create databases before you migrate, but if you're migrating T-SQL code that creates databases you should compare [CREATE DATABASE (Azure SQL Database)](/sql/t-sql/statements/create-database-transact-sql) with the SQL Server syntax at [CREATE DATABASE (SQL Server Transact-SQL)](/sql/t-sql/statements/create-database-transact-sql) to make sure all the options you use are supported. CREATE DATABASE for Azure SQL Database also has service objective and elastic scale options that apply only to SQL Database.-- The CREATE and ALTER TABLE statements have FileTable options that can't be used on SQL Database because FILESTREAM isn't supported.-- CREATE and ALTER login statements are supported but SQL Database doesn't offer all the options. To make your database more portable, SQL Database encourages using contained database users instead of logins whenever possible. For more information, see [CREATE/ALTER LOGIN](/sql/t-sql/statements/alter-login-transact-sql) and [Manage logins and users](logins-create-manage.md).
+- In SQL Server, `CREATE DATABASE` and `ALTER DATABASE` statements have over three dozen options. The statements include file placement, FILESTREAM, and service broker options that only apply to SQL Server. This may not matter if you create databases in SQL Database before you migrate, but if you're migrating T-SQL code that creates databases you should compare [CREATE DATABASE (Azure SQL Database)](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current&preserve-view=true) with the SQL Server syntax at [CREATE DATABASE (SQL Server T-SQL)](/sql/t-sql/statements/create-database-transact-sql?view=sql-server-ver15&preserve-view=true) to make sure all the options you use are supported. `CREATE DATABASE` for Azure SQL Database also has service objective and elastic pool options that apply only to SQL Database.
+- The `CREATE TABLE` and `ALTER TABLE` statements have `FILETABLE` and `FILESTREAM` options that can't be used on SQL Database because these features aren't supported.
+- `CREATE LOGIN` and `ALTER LOGIN` statements are supported, but do not offer all options available in SQL Server. To make your database more portable, SQL Database encourages using contained database users instead of logins whenever possible. For more information, see [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true), [ALTER LOGIN](/sql/t-sql/statements/alter-login-transact-sql?view=azuresqldb-current&preserve-view=true), and [Manage logins and users](logins-create-manage.md).
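+
+If you're creating databases fresh in Azure rather than porting `CREATE DATABASE` scripts, one alternative is to create the database from the Azure CLI and set the service objective there. The following is a minimal sketch; the resource group, server, and database names are placeholders.
+
+```azurecli-interactive
+# Hypothetical names: create an Azure SQL Database with an explicit service objective
+az sql db create \
+    --resource-group myResourceGroup \
+    --server myserver \
+    --name mySampleDatabase \
+    --service-objective S0
+```
+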
-## Transact-SQL syntax not supported in Azure SQL Database
+## T-SQL syntax not supported in Azure SQL Database
-In addition to Transact-SQL statements related to the unsupported features described in [Azure SQL Database feature comparison](features-comparison.md), the following statements and groups of statements aren't supported. As such, if your database to be migrated is using any of the following features, re-engineer your T-SQL to eliminate these T-SQL features and statements.
+In addition to T-SQL statements related to the unsupported features described in [Azure SQL Database feature comparison](features-comparison.md), the following statements and groups of statements aren't supported. As such, if your database to be migrated is using any of the following features, re-engineer your application to eliminate these T-SQL features and statements.
-- Collation of system objects-- Connection related: Endpoint statements. SQL Database doesn't support Windows authentication, but does support the similar Azure Active Directory authentication. Some authentication types require the latest version of SSMS. For more information, see [Connecting to SQL Database or Azure Azure Synapse Analytics By Using Azure Active Directory Authentication](authentication-aad-overview.md).-- Cross database queries using three or four part names. (Read-only cross-database queries are supported by using [elastic database query](elastic-query-overview.md).)-- Cross database ownership chaining, `TRUSTWORTHY` setting-- `EXECUTE AS LOGIN` Use 'EXECUTE AS USER' instead.-- Encryption is supported except for extensible key management-- Eventing: Events, event notifications, query notifications-- File placement: Syntax related to database file placement, size, and database files that are automatically managed by Microsoft Azure.-- High availability: Syntax related to high availability, which is managed through your Microsoft Azure account. This includes syntax for backup, restore, Always On, database mirroring, log shipping, recovery modes.-- Log reader: Syntax that relies upon the log reader, which isn't available on SQL Database: Push Replication, Change Data Capture. SQL Database can be a subscriber of a push replication article.-- Functions: `fn_get_sql`, `fn_virtualfilestats`, `fn_virtualservernodes`-- Hardware: Syntax related to hardware-related server settings: such as memory, worker threads, CPU affinity, trace flags. Use service tiers and compute sizes instead.-- `KILL STATS JOB`-- `OPENQUERY`, `OPENROWSET`, `OPENDATASOURCE`, and four-part names-- .NET Framework: CLR integration with SQL Server
+- Collation of system objects.
+- Connection related: Endpoint statements. SQL Database doesn't support Windows authentication, but does support Azure Active Directory authentication. This includes authentication of Active Directory principals federated with Azure Active Directory. For more information, see [Connecting to SQL Database or Azure Synapse Analytics by using Azure Active Directory authentication](authentication-aad-overview.md).
+- Cross-database and cross-instance queries using three or four part names. Three part names referencing the `tempdb` database and the current database are supported. [Elastic query](elastic-query-overview.md) supports read-only references to tables in other MSSQL databases.
+- Cross database ownership chaining and the `TRUSTWORTHY` database property.
+- `EXECUTE AS LOGIN`. Use `EXECUTE AS USER` instead.
+- Extensible key management (EKM) for encryption keys. Transparent Data Encryption (TDE) [customer-managed keys](transparent-data-encryption-byok-overview.md) and Always Encrypted [column master keys](always-encrypted-azure-key-vault-configure.md) may be stored in Azure Key Vault.
+- Eventing: event notifications, query notifications.
+- File properties: Syntax related to database file name, placement, size, and other file properties automatically managed by SQL Database.
+- High availability: Syntax related to high availability and database recovery, which are managed by SQL Database. This includes syntax for backup, restore, Always On, database mirroring, log shipping, recovery models.
+- Syntax related to snapshot, transactional, and merge replication, which is not available in SQL Database. [Replication subscriptions](replication-to-sql-database.md) are supported.
+- Functions: `fn_get_sql`, `fn_virtualfilestats`, `fn_virtualservernodes`.
+- Instance configuration: Syntax related to server memory, worker threads, CPU affinity, trace flags. Use service tiers and compute sizes instead.
+- `KILL STATS JOB`.
+- `OPENQUERY`, `OPENDATASOURCE`, and four-part names.
+- .NET Framework: CLR integration
- Semantic search-- Server credentials: Use [database scoped credentials](/sql/t-sql/statements/create-database-scoped-credential-transact-sql) instead.-- Server-level items: Server roles, `sys.login_token`. `GRANT`, `REVOKE`, and `DENY` of server level permissions aren't available though some are replaced by database-level permissions. Some useful server-level DMVs have equivalent database-level DMVs.
+- Server credentials: Use [database scoped credentials](/sql/t-sql/statements/create-database-scoped-credential-transact-sql) instead.
+- Server-level permissions: `GRANT`, `REVOKE`, and `DENY` of server level permissions are not supported. Some server-level permissions are replaced by database-level permissions, or granted implicitly by built-in server roles. Some server-level DMVs and catalog views have similar database-level views.
- `SET REMOTE_PROC_TRANSACTIONS` - `SHUTDOWN` - `sp_addmessage`-- `sp_configure` options and `RECONFIGURE`. Some options are available using [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql).
+- `sp_configure` and `RECONFIGURE`. [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql) is supported.
- `sp_helpuser` - `sp_migrate_user_to_contained`-- SQL Server Agent: Syntax that relies upon the SQL Server Agent or the MSDB database: alerts, operators, central management servers. Use scripting, such as Azure PowerShell instead.-- SQL Server audit: Use SQL Database auditing instead.-- SQL Server trace-- Trace flags: Some trace flag items have been moved to compatibility modes.-- Transact-SQL debugging-- Triggers: Server-scoped or logon triggers-- `USE` statement: To change the database context to a different database, you must make a new connection to the new database.
+- SQL Server Agent: Syntax that relies upon the SQL Server Agent or the MSDB database: alerts, operators, central management servers. Use scripting, such as PowerShell, instead.
+- SQL Server audit: Use SQL Database [auditing](auditing-overview.md) instead.
+- SQL Server trace.
+- Trace flags.
+- T-SQL debugging.
+- Server-scoped or logon triggers.
+- `USE` statement: To change database context to a different database, you must create a new connection to that database.
-## Full Transact-SQL reference
+## Full T-SQL reference
-For more information about Transact-SQL grammar, usage, and examples, see [Transact-SQL Reference (Database Engine)](/sql/t-sql/language-reference) in SQL Server Books Online.
+For more information about T-SQL grammar, usage, and examples, see [T-SQL Reference (Database Engine)](/sql/t-sql/language-reference).
### About the "Applies to" tags
-The Transact-SQL reference includes articles related to SQL Server versions 2008 to the present. Below the article title there's an icon bar, listing the four SQL Server platforms, and indicating applicability. For example, availability groups were introduced in SQL Server 2012. The [CREATE AVAILABILITY GROUP](/sql/t-sql/statements/create-availability-group-transact-sql) article indicates that the statement applies to **SQL Server (starting with 2012)**. The statement doesn't apply to SQL Server 2008, SQL Server 2008 R2, Azure SQL Database, Azure Azure Synapse Analytics, or Parallel Data Warehouse.
+The T-SQL reference includes articles related to all recent SQL Server versions. Below the article title there's an icon bar, listing the MSSQL platforms and indicating applicability. For example, availability groups were introduced in SQL Server 2012. The [CREATE AVAILABILITY GROUP](/sql/t-sql/statements/create-availability-group-transact-sql) article indicates that the statement applies to **SQL Server (starting with 2012)**. The statement doesn't apply to SQL Server 2008, SQL Server 2008 R2, Azure SQL Database, Azure Synapse Analytics, or Parallel Data Warehouse.
-In some cases, the general subject of an article can be used in a product, but there are minor differences between products. The differences are indicated at midpoints in the article as appropriate. In some cases, the general subject of an article can be used in a product, but there are minor differences between products. The differences are indicated at midpoints in the article as appropriate. For example, the CREATE TRIGGER article is available in SQL Database. But the **ALL SERVER** option for server-level triggers, indicates that server-level triggers can't be used in SQL Database. Use database-level triggers instead.
+In some cases, the general subject of an article can be used in a product, but there are minor differences between products. The differences are indicated at midpoints in the article as appropriate. For example, the `CREATE TRIGGER` article is available in SQL Database. But the `ALL SERVER` option for server-level triggers indicates that server-level triggers can't be used in SQL Database. Use database-level triggers instead.
## Next steps
-For a list of the features that are supported and unsupported by SQL Database, see [Azure SQL Database feature comparison](features-comparison.md). The list on this page supplements that guidelines and features article, and focuses on Transact-SQL statements.
+For a list of the features that are supported and unsupported by SQL Database, see [Azure SQL Database feature comparison](features-comparison.md).
+
+To detect compatibility issues in your SQL Server databases before migrating to Azure SQL Database, and to migrate your databases, use [Data Migration Assistant (DMA)](/sql/dm).
azure-sql Sql Server To Sql Database Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules.md
Azure SQL Database does not support SQL CLR assemblies.
**Recommendation** Currently, there is no way to achieve this in Azure SQL Database. The recommended alternative solutions will require application code and database changes to use only assemblies supported by Azure SQL Database. Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine
-More information: [Unsupported Transact-SQL differences in SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#transact-sql-syntax-not-supported-in-azure-sql-database)
+More information: [Unsupported Transact-SQL differences in SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#t-sql-syntax-not-supported-in-azure-sql-database)
## Cryptographic provider<a id="CryptographicProvider"></a>
OPENROWSET supports bulk operations through a built-in BULK provider that enable
**Recommendation** Azure SQL Database cannot access file shares and Windows folders, so the files must be imported from Azure blob storage. Therefore, only blob type DATASOURCE is supported in OPENROWSET function. Alternatively, migrate to SQL Server on Azure Virtual Machine
-More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#transact-sql-syntax-not-supported-in-azure-sql-database)
+More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#t-sql-syntax-not-supported-in-azure-sql-database)
## OPENROWSET (provider)<a id="OpenRowsetWithSQLAndNonSQLProvider"></a>
OpenRowSet with SQL or non-SQL provider is an alternative to accessing tables in
**Recommendation** Azure SQL Database supports OPENROWSET only to import from Azure blob storage. Alternatively, migrate to SQL Server on Azure Virtual Machine
-More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#transact-sql-syntax-not-supported-in-azure-sql-database)
+More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#t-sql-syntax-not-supported-in-azure-sql-database)
## Non-ANSI left outer join<a id="NonANSILeftOuterJoinSyntax"></a>
A trigger is a special kind of stored procedure that executes in response to cer
**Recommendation** Use database level trigger instead. Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine
-More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#transact-sql-syntax-not-supported-in-azure-sql-database)
+More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#t-sql-syntax-not-supported-in-azure-sql-database)
## SQL Agent jobs<a id="AgentJobs"></a>
Trace flags are used to temporarily set specific server characteristics or to sw
**Recommendation** Review impacted objects section in Azure Migrate to see all trace flags that are not supported in Azure SQL Database and evaluate if they can be removed. Alternatively, migrate to Azure SQL Managed Instance which supports limited number of global trace flags or SQL Server on Azure Virtual Machine.
-More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#transact-sql-syntax-not-supported-in-azure-sql-database)
+More information: [Resolving Transact-SQL differences during migration to SQL Database](../../database/transact-sql-tsql-differences-sql-server.md#t-sql-syntax-not-supported-in-azure-sql-database)
## Windows authentication<a id="WindowsAuthentication"></a>
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
+
+ Title: Back up Azure Blobs using Azure CLI
+description: Learn how to back up Azure Blobs using Azure CLI.
+ Last updated : 06/18/2021++
+# Back up Azure Blobs in a storage account using Azure CLI
+
+This article describes how to back up [Azure Blobs](/azure/backup/blob-backup-overview) using Azure CLI.
+
+> [!IMPORTANT]
+> Support for Azure Blobs backup and restore via CLI is in preview and available as an extension in Az 2.15.0 and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
+
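+To verify which Azure CLI version and extensions you already have, you can run a quick check like the following.
+
+```azurecli-interactive
+# Show the installed Azure CLI version (2.15.0 or later is required)
+az version
+
+# List installed CLI extensions with more detail
+az extension list --output table
+```
+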
+In this article, you'll learn how to:
+
+- Create a Backup vault
+
+- Create a Backup policy
+
+- Configure Backup of an Azure Blob
+
+- Run an on-demand backup job
+
+For information on Azure Blobs region availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Create a Backup vault
+
+A Backup vault is a storage entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+Before creating a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection backup-vault create](/cli/azure/dataprotection/backup-vault?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_vault_create) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault).
+
+```azurecli-interactive
+az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
+
+{
+ "eTag": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault",
+ "identity": {
+ "principalId": "2ca1d5f7-38b3-4b61-aa45-8147d7e0edbc",
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "type": "SystemAssigned"
+ },
+ "location": "westus",
+ "name": "TestBkpVault",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "storageSettings": [
+ {
+ "datastoreType": "VaultStore",
+ "type": "LocallyRedundant"
+ }
+ ]
+ },
+ "resourceGroup": "testBkpVaultRG",
+ "systemData": null,
+ "tags": null,
+ "type": "Microsoft.DataProtection/backupVaults"
+}
+```
+
+> [!IMPORTANT]
+> Though you'll see the Backup storage redundancy of the vault, the redundancy doesn't apply to the operational backup of blobs. This is because the backup is local in nature and no data is stored in the Backup vault. Here, the Backup vault is the management entity to help you manage the protection of block blobs in your storage accounts.
+
+After creating a vault, let's create a Backup policy to protect Azure Blobs in a storage account.
+
+## Create a Backup policy
+
+> [!IMPORTANT]
+> Read [this section](blob-backup-configure-manage.md#before-you-start) before creating the policy and configure backups for Azure Blobs.
+
+To understand the inner components of a Backup policy for Azure Blobs backup, retrieve the policy template using the [az dataprotection backup-policy get-default-policy-template](/cli/azure/dataprotection/backup-policy?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_get_default_policy_template) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy.
+
+```azurecli-interactive
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureBlob
+
+{
+ "datasourceTypes": [
+ "Microsoft.Storage/storageAccounts/blobServices"
+ ],
+ "name": "BlobPolicy1",
+ "objectType": "BackupPolicy",
+ "policyRules": [
+ {
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P30D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+ ]
+}
+
+```
+
+The policy template consists only of a lifecycle rule (which decides when to delete, copy, or move the backup). Because operational backup for blobs is continuous in nature, you don't need a schedule to perform backups.
+
+```json
+"policyRules": [
+ {
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P30D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+ ]
+```
+
+> [!NOTE]
+> Restoring over long durations may lead to restore operations taking longer to complete. Also, the time taken to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account with one million objects with 3,000 objects added per day and 1,000 objects deleted per day will require approximately two hours to restore to a point 30 days in the past.<br><br>We don't recommend a retention period and restoration more than 90 days in the past for an account with this rate of change.
+
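+If you want a retention period other than the 30-day default, one lightweight approach is to edit the duration in the exported template before creating the policy. The following sketch assumes you've saved the template to *policy.json* (the export command is shown in the next step) and want a 7-day retention; `P7D` is an ISO 8601 duration.
+
+```azurecli-interactive
+# Optional: shorten the default 30-day retention to 7 days in the exported template
+sed -i 's/"duration": "P30D"/"duration": "P7D"/' policy.json
+```
+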
+Once the policy JSON has all the required values, proceed to create a new policy from the policy object using the [az dataprotection backup-policy create](/cli/azure/dataprotection/backup-policy?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_create) command.
+
+```azurecli-interactive
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureBlob > policy.json
+az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault -n BlobBackup-Policy --policy policy.json
+
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy",
+ "name": "BlobBackup-Policy",
+ "properties": {
+ "datasourceTypes": [
+ "Microsoft.Storage/storageAccounts/blobServices"
+ ],
+ "objectType": "BackupPolicy",
+ "policyRules": [
+ {
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P2D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ },
+ "targetDataStoreCopySettings": []
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+ ]
+ },
+ "resourceGroup": "testBkpVaultRG",
+ "systemData": null,
+ "type": "Microsoft.DataProtection/backupVaults/backupPolicies"
+ }
+```
+
+## Configure backup
+
+Once the vault and policy are created, there are two critical points that you need to consider to protect all Azure Blobs within a storage account.
+
+### Key entities involved
+
+#### Storage account that contains the blobs to be protected
+
+Fetch the Azure Resource Manager ID of the storage account that contains the blobs to be protected. This will serve as the identifier of the storage account. We'll use an example of a storage account named _CLITestSA_, under the resource group _blobrg_, in a different subscription, in the Southeast Asia region.
+
+```azurecli-interactive
+"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA"
+```
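+
+If you don't have the resource ID handy, you can retrieve it with a query like the following sketch, which uses the example names above; adjust the subscription context as needed.
+
+```azurecli-interactive
+# Fetch the ARM resource ID of the storage account that contains the blobs to protect
+az storage account show \
+    --resource-group blobrg \
+    --name CLITestSA \
+    --query id \
+    --output tsv
+```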
+
+#### Backup vault
+
+The Backup vault requires permissions on the storage account to enable backups on blobs present within the storage account. The system-assigned managed identity of the vault is used for assigning such permissions.
+
+### Assign permissions
+
+You need to assign a few permissions via Azure RBAC to the vault (represented by the vault's managed identity) on the relevant storage account. You can do this via the Azure portal, PowerShell, or the CLI (a sketch follows). Learn more about all [related permissions](blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts).
+
+### Prepare the request
+
+Once all the relevant permissions are set, the configuration of backup is performed in two steps. First, we prepare the request with the relevant vault, policy, and storage account using the [az dataprotection backup-instance initialize](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_initialize) command. Then, we submit the request to protect the storage account using the [az dataprotection backup-instance create](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_create) command.
+
+```azurecli-interactive
+az dataprotection backup-instance initialize --datasource-type AzureBlob -l southeastasia --policy-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy" --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" > backup_instance.json
+```
+
+```azurecli-interactive
+az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpVault --backup-instance backup_instance.json
+
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036",
+ "name": "CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036",
+ "properties": {
+ "currentProtectionState": "ProtectionConfigured",
+ "dataSourceInfo": {
+ "datasourceType": "Microsoft.Storage/storageAccounts/blobServices",
+ "objectType": "Datasource",
+ "resourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "resourceLocation": "southeastasia",
+ "resourceName": "CLITestSA",
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA"
+ },
+ "dataSourceSetInfo": null,
+ "friendlyName": "CLITestSA",
+ "objectType": "BackupInstance",
+ "policyInfo": {
+ "policyId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy",
+ "policyParameters": {
+ "dataStoreParametersList": [
+ {
+ "dataStoreType": "OperationalStore",
+ "objectType": "AzureOperationalStoreParameters",
+ "resourceGroupId": ""
+ }
+ ]
+ },
+ "policyVersion": ""
+ },
+ "protectionErrorDetails": null,
+ "protectionStatus": {
+ "errorDetails": null,
+ "status": "ProtectionConfigured"
+ },
+ "provisioningState": "Succeeded"
+ },
+ "resourceGroup": "testBkpVaultRG",
+ "systemData": null,
+ "type": "Microsoft.DataProtection/backupVaults/backupInstances"
+ }
+```
+
+> [!IMPORTANT]
+> Once a storage account is configured for blobs backup, a few capabilities are affected, such as change feed and delete lock. [Learn more](blob-backup-configure-manage.md#effects-on-backed-up-storage-accounts).
+
+## Next steps
+
+[Restore Azure Blobs using Azure CLI](restore-blobs-storage-account-cli.md)
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks-cli.md
+
+ Title: Back up Azure Managed Disks using Azure CLI
+description: Learn how to back up Azure Managed Disks using Azure CLI.
+ Last updated : 06/18/2021++
+# Back up Azure Managed Disks using Azure CLI
+
+This article describes how to back up [Azure Managed Disk](../virtual-machines/managed-disks-overview.md) using Azure CLI.
+
+> [!IMPORTANT]
+> Support for Azure Managed Disks backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
+
+In this article, you'll learn how to:
+
+- Create a Backup vault
+
+- Create a Backup policy
+
+- Configure Backup of an Azure Disk
+
+- Run an on-demand backup job
+
+For information on the Azure Disk backup region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Create a Backup vault
+
+Backup vault is a storage entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location. In this article, we'll create a Backup vault _TestBkpVault_, in the region _westus_, under the resource group _testBkpVaultRG_. Use the [az dataprotection backup-vault create](/cli/azure/dataprotection/backup-vault?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_vault_create) command to create a Backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault).
+
+```azurecli-interactive
+az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
+
+{
+ "eTag": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault",
+ "identity": {
+ "principalId": "2ca1d5f7-38b3-4b61-aa45-8147d7e0edbc",
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "type": "SystemAssigned"
+ },
+ "location": "westus",
+ "name": "TestBkpVault",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "storageSettings": [
+ {
+ "datastoreType": "VaultStore",
+ "type": "LocallyRedundant"
+ }
+ ]
+ },
+ "resourceGroup": "testBkpVaultRG",
+ "systemData": null,
+ "tags": null,
+ "type": "Microsoft.DataProtection/backupVaults"
+}
+```
+
+After the vault is created, let's create a Backup policy to protect Azure disks.
+
+## Create a Backup policy
+
+To understand the inner components of a Backup policy for Azure Disk Backup, retrieve the policy template using the [az dataprotection backup-policy get-default-policy-template](/cli/azure/dataprotection/backup-policy?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_get_default_policy_template) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy.
+
+```azurecli-interactive
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureDisk
+
+{
+ "datasourceTypes": [
+ "Microsoft.Compute/disks"
+ ],
+ "name": "DiskPolicy",
+ "objectType": "BackupPolicy",
+ "policyRules": [
+ {
+ "backupParameters": {
+ "backupType": "Incremental",
+ "objectType": "AzureBackupParams"
+ },
+ "dataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ },
+ "name": "BackupHourly",
+ "objectType": "AzureBackupRule",
+ "trigger": {
+ "objectType": "ScheduleBasedTriggerContext",
+ "schedule": {
+ "repeatingTimeIntervals": [
+ "R/2020-04-05T13:00:00+00:00/PT4H"
+ ]
+ },
+ "taggingCriteria": [
+ {
+ "isDefault": true,
+ "tagInfo": {
+ "id": "Default_",
+ "tagName": "Default"
+ },
+ "taggingPriority": 99
+ }
+ ]
+ }
+ },
+ {
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P7D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+ ]
+}
+
+```
+
+The policy template consists of a trigger (which decides what triggers the backup) and a lifecycle (which decides when to delete, copy, or move the backup). In Azure Disk Backup, the default is a scheduled trigger every 4 hours (PT4H) with retention of each backup for seven days.
+
+**Scheduled trigger:**
+
+```json
+"trigger": {
+ "objectType": "ScheduleBasedTriggerContext",
+ "schedule": {
+ "repeatingTimeIntervals": [
+ "R/2020-04-05T13:00:00+00:00/PT4H"
+ ]
+ }
+
+```
+
+**Default retention lifecycle:**
+
+```json
+"lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P7D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ]
+```
+
+Azure Disk Backup offers multiple backups per day. If you require more frequent backups, choose the **Hourly** backup frequency, which lets you take backups at intervals of every 4, 6, 8, or 12 hours. The backups are scheduled based on the **Time** interval selected.
+
+For example, if you select **Every 4 hours**, the backups are taken at intervals of approximately 4 hours so that they're distributed equally across the day. If a once-a-day backup is sufficient, choose the **Daily** backup frequency. In the daily backup frequency, you can specify the time of day when your backups are taken.
+
+>[!IMPORTANT]
+>The time of the day indicates the backup start time and not the time when the backup completes.
+
+The time required to complete the backup operation depends on various factors, including the size of the disk and the churn rate between consecutive backups. However, Azure Disk Backup is an agentless backup that uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md), which doesn't impact production application performance.
+
+ >[!NOTE]
+ >Although the selected vault may have the global-redundancy setting, Azure Disk Backup currently supports the snapshot datastore only. All backups are stored in a resource group in your subscription and aren't copied to Backup vault storage.
+
+For more information about policy creation, see the [Azure Disk Backup policy](backup-managed-disks.md#create-backup-policy) document.
+
+Once the template is downloaded as a JSON file, you can edit it for scheduling and retention as required, and then create a new policy with the resulting JSON. If you want to edit the hourly frequency or the retention period, use the [az dataprotection backup-policy trigger set](/cli/azure/dataprotection/backup-policy/trigger?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_trigger_set) and/or [az dataprotection backup-policy retention-rule set](/cli/azure/dataprotection/backup-policy/retention-rule?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_retention_rule_set) commands. Once the policy JSON has all the required values, proceed to create a new policy from the policy object using the [az dataprotection backup-policy create](/cli/azure/dataprotection/backup-policy?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_create) command.
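+
+For example, after saving the template to `policy.json` (as shown in the next block), here's a minimal sketch of raising the default retention from seven days to 30 days. It uses the `jq` tool, which isn't part of this article, as an alternative to the `retention-rule set` command; the edited file name is illustrative:
+
+```azurecli-interactive
+# Change the retention rule's deleteAfter duration from P7D to P30D in the downloaded template
+jq '(.policyRules[] | select(.objectType == "AzureRetentionRule") | .lifecycles[0].deleteAfter.duration) = "P30D"' policy.json > policy-edited.json
+```
+
+You would then pass `policy-edited.json` (instead of `policy.json`) to the create command below.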
+
+```azurecli-interactive
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureDisk > policy.json
+az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault -n mypolicy --policy policy.json
+
+{
+  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/mypolicy",
+  "name": "mypolicy",
+  "properties": {
+    "datasourceTypes": [
+      "Microsoft.Compute/disks"
+    ],
+    "objectType": "BackupPolicy",
+    "policyRules": [
+      {
+        "backupParameters": {
+          "backupType": "Incremental",
+          "objectType": "AzureBackupParams"
+        },
+        "dataStore": {
+          "dataStoreType": "OperationalStore",
+          "objectType": "DataStoreInfoBase"
+        },
+        "name": "BackupHourly",
+        "objectType": "AzureBackupRule",
+        "trigger": {
+          "objectType": "ScheduleBasedTriggerContext",
+          "schedule": {
+            "repeatingTimeIntervals": [
+              "R/2020-04-05T13:00:00+00:00/PT4H"
+            ]
+          },
+          "taggingCriteria": [
+            {
+              "criteria": null,
+              "isDefault": true,
+              "tagInfo": {
+                "eTag": null,
+                "id": "Default_",
+                "tagName": "Default"
+              },
+              "taggingPriority": 99
+            }
+          ]
+        }
+      },
+      {
+        "isDefault": true,
+        "lifecycles": [
+          {
+            "deleteAfter": {
+              "duration": "P7D",
+              "objectType": "AbsoluteDeleteOption"
+            },
+            "sourceDataStore": {
+              "dataStoreType": "OperationalStore",
+              "objectType": "DataStoreInfoBase"
+            },
+            "targetDataStoreCopySettings": null
+          }
+        ],
+        "name": "Default",
+        "objectType": "AzureRetentionRule"
+      }
+    ]
+  },
+  "resourceGroup": "testBkpVaultRG",
+  "systemData": null,
+  "type": "Microsoft.DataProtection/backupVaults/backupPolicies"
+}
+```
+
+## Configure backup
+
+Once the vault and policy are created, there are three critical points that you need to consider to protect an Azure Disk.
+
+### Key entities involved
+
+#### Disk to be protected
+
+Fetch the ARM ID and the location of the disk to be protected. This will serve as the identifier of the disk. We'll use an example of a disk named _CLITestDisk_, in the resource group _diskrg_, in a different subscription.
+
+```azurecli-interactive
+DiskId="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk"
+```
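+
+If you prefer to look the ID up rather than paste it, a minimal sketch (assuming the disk name and resource group above; add `--subscription` if the disk is in another subscription) is:
+
+```azurecli-interactive
+DiskId=$(az disk show --name CLITestDisk --resource-group diskrg --query id --output tsv)
+```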
+
+#### Snapshot resource group
+
+The disk snapshots are stored in a resource group within your subscription. As a guideline, we recommend creating a dedicated resource group as a snapshot datastore to be used by the Azure Backup service. Having a dedicated resource group makes it easier to restrict access permissions and manage the backup data. Note the ARM ID of the resource group where you wish to place the disk snapshots.
+
+```azurecli-interactive
+snapshotrg="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/snapshotrg"
+```
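+
+If the dedicated resource group doesn't exist yet, a minimal sketch of creating it and capturing its ARM ID (the name and region are just this article's example values) is:
+
+```azurecli-interactive
+# Create the resource group, then capture its ARM ID
+az group create --name snapshotrg --location westus
+snapshotrg=$(az group show --name snapshotrg --query id --output tsv)
+```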
+
+#### Backup vault
+
+The Backup vault requires permissions on the disk and the snapshot resource group to be able to trigger snapshots and manage their lifecycle. The system-assigned managed identity of the vault is used for assigning such permissions. Use the [az dataprotection backup-vault update](/cli/azure/dataprotection/backup-vault?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_vault_update) command to enable the system-assigned managed identity for the Backup vault.
+
+```azurecli-interactive
+az dataprotection backup-vault update -g testBkpVaultRG --vault-name TestBkpVault --type SystemAssigned
+```
+
+### Assign permissions
+
+You need to assign a few permissions via RBAC to the vault (represented by the vault MSI) and the relevant disk and/or the disk resource group. These assignments can be made via the Azure portal or CLI. All related permissions are detailed in points 1, 2, and 3 in [Configure backup](backup-managed-disks.md#configure-backup).
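+
+As an illustration only, here's a sketch of those role assignments from the CLI. The **Disk Backup Reader** and **Disk Snapshot Contributor** roles used here are assumptions; confirm the exact roles in the linked article.
+
+```azurecli-interactive
+# Object ID of the Backup vault's system-assigned managed identity
+VAULT_PRINCIPAL_ID=$(az dataprotection backup-vault show -g testBkpVaultRG --vault-name TestBkpVault --query identity.principalId --output tsv)
+
+# Assumed role on the source disk
+az role assignment create --assignee-object-id $VAULT_PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
+  --role "Disk Backup Reader" --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk"
+
+# Assumed role on the snapshot resource group
+az role assignment create --assignee-object-id $VAULT_PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
+  --role "Disk Snapshot Contributor" --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/snapshotrg"
+```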
+
+### Prepare the request
+
+Once all the relevant permissions are set, the configuration of backup is performed in two steps. First, we prepare the request with the relevant vault, policy, disk, and snapshot resource group using the [az dataprotection backup-instance initialize](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_initialize) command. The initialize command returns a JSON file in which you then update the snapshot resource group value. Then, we submit the request to protect the disk using the [az dataprotection backup-instance create](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_create) command.
+
+```azurecli-interactive
+az dataprotection backup-instance initialize --datasource-type AzureDisk -l southeastasia --policy-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/mypolicy" --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk" > backup_instance.json
+```
+
+Open the JSON file and edit the **snapshot resource group ID** in `resource_group_id` under the `data_store_parameters_list` section.
+
+```json
+{
+ "backup_instance_name": "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "properties": {
+ "data_source_info": {
+ "datasource_type": "Microsoft.Compute/disks",
+ "object_type": "Datasource",
+ "resource_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "resource_location": "southeastasia",
+ "resource_name": "CLITestDisk",
+ "resource_type": "Microsoft.Compute/disks",
+ "resource_uri": ""
+ },
+ "data_source_set_info": null,
+ "object_type": "BackupInstance",
+ "policy_info": {
+ "policy_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupPolicies/DiskPolicy",
+ "policy_parameters": {
+ "data_store_parameters_list": [
+ {
+ "data_store_type": "OperationalStore",
+ "object_type": "AzureOperationalStoreParameters",
+ "resource_group_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/snapshotrg"
+ }
+ ]
+ }
+ }
+ }
+}
+```
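+
+You can make this edit in any text editor. As a scriptable alternative, here's a minimal sketch that assumes the `jq` tool (not used elsewhere in this article) and the snapshot resource group from the earlier example:
+
+```azurecli-interactive
+jq '.properties.policy_info.policy_parameters.data_store_parameters_list[0].resource_group_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/snapshotrg"' backup_instance.json > backup_instance_edited.json
+mv backup_instance_edited.json backup_instance.json
+```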
+
+> [!NOTE]
+> The backup instance name is generated by the client, so it will be a unique value. It's based on the datasource name and a unique GUID. Once you list the backup instances, you should be able to check the name of the backup instance and the relevant datasource name.
+
+Use the edited JSON file to create a backup instance of the Azure Managed Disk.
+
+```azurecli-interactive
+az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpVault --backup-instance backup_instance.json
+
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "name": "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "properties": {
+ "currentProtectionState": "ProtectionConfigured",
+ "dataSourceInfo": {
+ "datasourceType": "Microsoft.Compute/disks",
+ "objectType": "Datasource",
+ "resourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "resourceLocation": "southeastasia",
+ "resourceName": "CLITestDisk",
+ "resourceType": "Microsoft.Compute/disks",
+ "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk"
+ },
+ "dataSourceSetInfo": null,
+ "friendlyName": "CLITestDisk",
+ "objectType": "BackupInstance",
+ "policyInfo": {
+ "policyId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupPolicies/DiskPolicy",
+ "policyParameters": {
+ "dataStoreParametersList": [
+ {
+ "dataStoreType": "OperationalStore",
+ "objectType": "AzureOperationalStoreParameters",
+ "resourceGroupId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/sarath-rg"
+ }
+ ]
+ },
+ "policyVersion": null
+ },
+ "protectionErrorDetails": null,
+ "protectionStatus": {
+ "errorDetails": null,
+ "status": "ProtectionConfigured"
+ },
+ "provisioningState": "Succeeded"
+ },
+ "resourceGroup": "testBkpVaultRG",
+ "systemData": null,
+ "type": "Microsoft.DataProtection/backupVaults/backupInstances"
+}
+```
+
+Once the backup instance is created, you can proceed to trigger an on-demand backup if you don't want to wait for the policy's scheduled backup.
+
+## Run an on-demand backup
+
+List all backup instances within a vault using the [az dataprotection backup-instance list](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true) command, and then fetch the relevant instance using the [az dataprotection backup-instance show](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true) command. Alternatively, for at-scale scenarios, you can list backup instances across vaults and subscriptions using the [az dataprotection backup-instance list-from-resourcegraph](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_list_from_resourcegraph) command.
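+
+For example, to list the backup instances in _TestBkpVault_ and pull out their names (a minimal sketch):
+
+```azurecli-interactive
+az dataprotection backup-instance list -g testBkpVaultRG --vault-name TestBkpVault --query "[].name" --output table
+```
+
+The Resource Graph query below does the same lookup across vaults and subscriptions.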
+
+```azurecli-interactive
+az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureDisk --datasource-id /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk
+
+[
+ {
+ "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "extendedLocation": null,
+ "id": "//subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "identity": null,
+ "kind": "",
+ "location": "",
+ "managedBy": "",
+ "name": "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "plan": null,
+ "properties": {
+ "currentProtectionState": "ProtectionConfigured",
+ "dataSourceInfo": {
+ "baseUri": null,
+ "datasourceType": "Microsoft.Compute/disks",
+ "objectType": "Datasource",
+ "resourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "resourceLocation": "westus",
+ "resourceName": "CLITestDisk",
+ "resourceType": "Microsoft.Compute/disks",
+ "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk"
+ },
+ "dataSourceProperties": null,
+ "dataSourceSetInfo": null,
+ "datasourceAuthCredentials": null,
+ "friendlyName": "CLITestDisk",
+ "objectType": "BackupInstance",
+ "policyInfo": {
+ "policyId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupPolicies/DiskPolicy",
+ "policyParameters": {
+ "dataStoreParametersList": [
+ {
+ "dataStoreType": "OperationalStore",
+ "objectType": "AzureOperationalStoreParameters",
+ "resourceGroupId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/snapshotrg"
+ }
+ ]
+ },
+ "policyVersion": null
+ },
+ "protectionErrorDetails": null,
+ "protectionStatus": {
+ "errorDetails": null,
+ "status": "ProtectionConfigured"
+ },
+ "provisioningState": "Succeeded"
+ },
+ "protectionState": "ProtectionConfigured",
+ "resourceGroup": "testBkpVaultRG",
+ "sku": null,
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "tags": null,
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "type": "microsoft.dataprotection/backupvaults/backupinstances",
+ "vaultName": "TestBkpVault",
+ "zones": null
+ }
+]
+```
+
+You can specify a retention rule while triggering a backup. To view the retention rules in the policy, look through the policy JSON for retention rules. In the example below, the rule named _Default_ is displayed, and we'll use that rule for the on-demand backup.
+
+```JSON
+{
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P7D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+```
+
+Trigger an on-demand backup using the [az dataprotection backup-instance adhoc-backup](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_adhoc_backup) command.
+
+```azurecli-interactive
+az dataprotection backup-instance adhoc-backup --name "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166" --rule-name "Default" --resource-group "testBkpVaultRG" --vault-name "TestBkpVault"
+```
+
+## Tracking jobs
+
+Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list) command. You can list all jobs and fetch a particular job detail.
+
+You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job, which can be in any Backup vault.
+
+```azurecli-interactive
+az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --status Completed
+```
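+
+To list the jobs of a single vault instead (a minimal sketch):
+
+```azurecli-interactive
+az dataprotection job list -g testBkpVaultRG --vault-name TestBkpVault --output table
+```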
+
+## Next steps
+
+[Restore Azure Managed Disks using Azure CLI](restore-managed-disks-cli.md)
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-blobs-storage-account-cli.md
+
+ Title: Restore Azure Blobs via Azure CLI
+description: Learn how to restore Azure Blobs to any point-in-time using Azure CLI.
+ Last updated : 06/18/2021++
+# Restore Azure Blobs to point-in-time using Azure CLI
+
+This article describes how to restore [blobs](blob-backup-overview.md) to any point-in-time using Azure Backup.
+
+> [!IMPORTANT]
+> Support for Azure Blobs backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
+
+> [!IMPORTANT]
+> Before you restore Azure Blobs using Azure Backup, see [important points](blob-restore.md#before-you-start).
+
+In this article, you'll learn how to:
+
+- Restore Azure Blobs to point-in-time
+
+- Track the restore operation status
+
+We'll refer to an existing Backup vault _TestBkpVault_, under the resource group _testBkpVaultRG_ in the examples.
+
+## Restoring Azure Blobs within a storage account
+
+### Fetching the valid time range for restore
+
+As the operational backup for blobs is continuous, there are no distinct points to restore from. Instead, we need to fetch the valid time range within which blobs can be restored to any point-in-time. In this example, let's check for valid time ranges to restore within the last 30 days.
+
+First, we need to fetch the relevant backup instance ID. List all backup instances within a vault using the [az dataprotection backup-instance list](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_list) command, and then fetch the relevant instance using [az dataprotection backup-instance show](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_show) command. Alternatively, for at-scale scenarios, you can list backup instances across vaults and subscriptions using the [az dataprotection backup-instance list-from-resourcegraph](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_list_from_resourcegraph) command.
+
+```azurecli-interactive
+az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureBlob --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA"
+
+[
+ {
+ "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "extendedLocation": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036",
+ "identity": null,
+ "kind": "",
+ "location": "",
+ "managedBy": "",
+ "name": "CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036",
+ "plan": null,
+ "properties": {
+ "currentProtectionState": "ProtectionConfigured",
+ "dataSourceInfo": {
+ "baseUri": null,
+ "datasourceType": "Microsoft.Storage/storageAccounts/blobServices",
+ "objectType": "Datasource",
+ "resourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "resourceLocation": "southeastasia",
+ "resourceName": "CLITestSA",
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA"
+ },
+ "dataSourceProperties": null,
+ "dataSourceSetInfo": null,
+ "datasourceAuthCredentials": null,
+ "friendlyName": "CLITestSA",
+ "objectType": "BackupInstance",
+ "policyInfo": {
+ "policyId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy",
+ "policyParameters": {
+ "dataStoreParametersList": [
+ {
+ "dataStoreType": "OperationalStore",
+ "objectType": "AzureOperationalStoreParameters",
+ "resourceGroupId": ""
+ }
+ ]
+ },
+ "policyVersion": ""
+ },
+ "protectionErrorDetails": null,
+ "protectionStatus": {
+ "errorDetails": null,
+ "status": "ProtectionConfigured"
+ },
+ "provisioningState": "Succeeded"
+ },
+ "protectionState": "ProtectionConfigured",
+ "resourceGroup": "rg-bv",
+ "sku": null,
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
+ "tags": null,
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",
+ "type": "microsoft.dataprotection/backupvaults/backupinstances",
+ "vaultName": "TestBkpVault",
+ "zones": null
+ }
+]
+```
+
+Once the instance is identified, fetch the relevant recovery range using the [az dataprotection restorable-time-range find](/cli/azure/dataprotection/restorable-time-range?view=azure-cli-latest&preserve-view=true#az_dataprotection_restorable_time_range_find) command.
+
+```azurecli-interactive
+az dataprotection restorable-time-range find --start-time 2021-05-30T00:00:00 --end-time 2021-05-31T00:00:00 --source-data-store-type OperationalStore -g testBkpVaultRG --vault-name TestBkpVault --backup-instances CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036
+
+{
+ "id": "CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036",
+ "name": null,
+ "properties": {
+ "objectType": "AzureBackupFindRestorableTimeRangesResponse",
+ "restorableTimeRanges": [
+ {
+ "endTime": "2021-05-31T00:00:00.0000000Z",
+ "objectType": "RestorableTimeRange",
+ "startTime": "2021-06-13T18:53:44.4465407Z"
+ }
+ ]
+ },
+ "systemData": null,
+ "type": "Microsoft.DataProtection/backupVaults/backupInstances/findRestorableTimeRanges"
+}
+```
+
+### Preparing the restore request
+
+Once the point-in-time to restore is fixed, there are multiple options to restore.
+
+#### Restoring all the blobs to a point-in-time
+
+Using this option, you can restore all block blobs in the storage account by rolling them back to the selected point in time. Storage accounts containing large amounts of data or experiencing high churn may take longer to restore. To restore all block blobs, use the [az dataprotection backup-instance restore initialize-for-data-recovery](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_initialize_for_data_recovery) command. The restore location and the target resource ID will be the same as the protected storage account.
+
+```azurecli-interactive
+az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --target-resource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" --point-in-time 2021-06-02T18:53:44.4465407Z
+
+{
+ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest",
+ "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
+ "restore_target_info": {
+ "datasource_info": {
+ "datasource_type": "Microsoft.Storage/storageAccounts/blobServices",
+ "object_type": "Datasource",
+ "resource_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "resource_location": "southeastasia",
+ "resource_name": "CLITestSA",
+ "resource_type": "Microsoft.Storage/storageAccounts",
+ "resource_uri": ""
+ },
+ "object_type": "RestoreTargetInfo",
+ "recovery_option": "FailIfExists",
+ "restore_location": "southeastasia"
+ },
+ "source_data_store_type": "OperationalStore"
+}
+
+az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --target-resource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" --point-in-time 2021-06-02T18:53:44.4465407Z > restore.json
+```
+
+#### Restoring selected containers
+
+Using this option, you can browse and select up to 10 containers to restore. To restore selected containers, use the [az dataprotection backup-instance restore initialize-for-item-recovery](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_initialize_for_item_recovery) command.
+
+```azurecli-interactive
+az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --container-list container1 container2
+
+{
+ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest",
+ "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
+ "restore_target_info": {
+ "datasource_info": {
+ "datasource_type": "Microsoft.Storage/storageAccounts/blobServices",
+ "object_type": "Datasource",
+ "resource_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "resource_location": "southeastasia",
+ "resource_name": "CLITestSA",
+ "resource_type": "Microsoft.Storage/storageAccounts",
+ "resource_uri": ""
+ },
+ "object_type": "ItemLevelRestoreTargetInfo",
+ "recovery_option": "FailIfExists",
+ "restore_criteria": [
+ {
+ "max_matching_value": "container1-0",
+ "min_matching_value": "container1",
+ "object_type": "RangeBasedItemLevelRestoreCriteria"
+ },
+ {
+ "max_matching_value": "container2-0",
+ "min_matching_value": "container2",
+ "object_type": "RangeBasedItemLevelRestoreCriteria"
+ }
+ ],
+ "restore_location": "southeastasia"
+ },
+ "source_data_store_type": "OperationalStore"
+}
+
+az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --container-list container1 container2 > restore.json
+```
+
+#### Restoring containers using a prefix match
+
+This option lets you restore a subset of blobs using a prefix match. You can specify up to 10 lexicographical ranges of blobs within a single container or across multiple containers to return those blobs to their previous state at a given point-in-time. Here are a few things to keep in mind:
+
+- You can use a forward slash (/) to delineate the container name from the blob prefix.
+- The start of the specified range is inclusive, while the end of the range is exclusive. For example, a range from `container1/text1` to `container1/text4` restores blobs in `container1` whose names fall lexicographically from `text1` (inclusive) up to, but not including, `text4`.
+
+[Learn more](blob-restore.md#use-prefix-match-for-restoring-blobs) about using prefixes to restore blob ranges.
+
+To restore blobs using a prefix match, use the [az dataprotection backup-instance restore initialize-for-item-recovery](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_initialize_for_item_recovery) command.
+
+```azurecli-interactive
+az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41
+
+{
+ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest",
+ "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
+ "restore_target_info": {
+ "datasource_info": {
+ "datasource_type": "Microsoft.Storage/storageAccounts/blobServices",
+ "object_type": "Datasource",
+ "resource_id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
+ "resource_location": "southeastasia",
+ "resource_name": "CLITestSA",
+ "resource_type": "Microsoft.Storage/storageAccounts",
+ "resource_uri": ""
+ },
+ "object_type": "ItemLevelRestoreTargetInfo",
+ "recovery_option": "FailIfExists",
+ "restore_criteria": [
+ {
+ "max_matching_value": "container1/text4",
+ "min_matching_value": "container1/text1",
+ "object_type": "RangeBasedItemLevelRestoreCriteria"
+ },
+ {
+ "max_matching_value": "container2/text41",
+ "min_matching_value": "container2/text4",
+ "object_type": "RangeBasedItemLevelRestoreCriteria"
+ }
+ ],
+ "restore_location": "southeastasia"
+ },
+ "source_data_store_type": "OperationalStore"
+}
+
+az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41 > restore.json
+```
+
+### Trigger the restore
+
+Use the [az dataprotection backup-instance restore trigger](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_trigger) command to trigger the restore with the request prepared above.
+
+```azurecli-interactive
+az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036 --parameters restore.json
+```
+
+## Tracking job
+
+Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list) command. You can list all jobs and fetch a particular job detail.
+
+You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job, which can be in any Backup vault.
+
+```azurecli-interactive
+az dataprotection job list-from-resourcegraph --datasource-type AzureBlob --operation Restore
+```
+
+## Next steps
+
+[Support matrix for Azure Blobs backup](blob-backup-support-matrix.md)
backup Restore Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-managed-disks-cli.md
+
+ Title: Restore Azure Managed Disks via Azure CLI
+description: Learn how to restore Azure Managed Disks using Azure CLI.
+ Last updated : 06/18/2021++
+# Restore Azure Managed Disks using Azure CLI
+
+This article describes how to restore [Azure Managed Disks](../virtual-machines/managed-disks-overview.md) from a restore point created by Azure Backup using Azure CLI.
+
+> [!IMPORTANT]
+> Support for Azure Managed Disks backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
+
+Currently, the Original-Location Recovery (OLR) option of restoring by replacing the existing source disk (from which the backups were taken) isn't supported. You can restore from a recovery point to create a new disk in the same resource group as the source disk or in any other resource group. This is known as Alternate-Location Recovery (ALR).
+
+In this article, you'll learn how to:
+
+- Restore to create a new disk
+
+- Track the restore operation status
+
+We'll refer to an existing Backup vault _TestBkpVault_, under the resource group _testBkpVaultRG_ in the examples.
+
+## Restore to create a new disk
+
+### Setting up permissions
+
+The Backup vault uses a managed identity to access other Azure resources. To restore from a backup, the Backup vault's managed identity requires a set of permissions on the resource group where the disk is to be restored.
+
+The Backup vault uses a system-assigned managed identity, which is restricted to one per resource and is tied to the lifecycle of the resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). A managed identity is a service principal of a special type that can be used only with Azure resources. Learn more about [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
+
+Assign the relevant permissions to the vault's system-assigned managed identity on the target resource group where the disks will be restored or created, as mentioned [here](restore-managed-disks.md#restore-to-create-a-new-disk).
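+
+As an illustration only, here's a sketch of that assignment from the CLI. The **Disk Restore Operator** role and the target resource group _targetrg_ used here are assumptions; confirm the exact role in the linked article.
+
+```azurecli-interactive
+# Object ID of the Backup vault's system-assigned managed identity
+VAULT_PRINCIPAL_ID=$(az dataprotection backup-vault show -g testBkpVaultRG --vault-name TestBkpVault --query identity.principalId --output tsv)
+
+# Assumed role on the resource group where the new disk will be created
+az role assignment create --assignee-object-id $VAULT_PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
+  --role "Disk Restore Operator" --scope "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg"
+```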
+
+### Fetching the relevant recovery point
+
+List all backup instances within a vault using the [az dataprotection backup-instance list](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_list) command, and then fetch the relevant instance using the [az dataprotection backup-instance show](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_show) command. Alternatively, for at-scale scenarios, you can list backup instances across vaults and subscriptions using the [az dataprotection backup-instance list-from-resourcegraph](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_list_from_resourcegraph) command.
+
+```azurecli-interactive
+az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureDisk --datasource-id /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk
+
+[
+ {
+ "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "extendedLocation": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "identity": null,
+ "kind": "",
+ "location": "",
+ "managedBy": "",
+ "name": "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
+ "plan": null,
+ "properties": {
+ "currentProtectionState": "ProtectionConfigured",
+ "dataSourceInfo": {
+ "baseUri": null,
+ "datasourceType": "Microsoft.Compute/disks",
+ "objectType": "Datasource",
+ "resourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
+ "resourceLocation": "westus",
+ "resourceName": "CLITestDisk",
+ "resourceType": "Microsoft.Compute/disks",
+ "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk"
+ },
+ "dataSourceProperties": null,
+ "dataSourceSetInfo": null,
+ "datasourceAuthCredentials": null,
+ "friendlyName": "CLITestDisk",
+ "objectType": "BackupInstance",
+ "policyInfo": {
+ "policyId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupPolicies/DiskPolicy",
+ "policyParameters": {
+ "dataStoreParametersList": [
+ {
+ "dataStoreType": "OperationalStore",
+ "objectType": "AzureOperationalStoreParameters",
+ "resourceGroupId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/snapshotrg"
+ }
+ ]
+ },
+ "policyVersion": null
+ },
+ "protectionErrorDetails": null,
+ "protectionStatus": {
+ "errorDetails": null,
+ "status": "ProtectionConfigured"
+ },
+ "provisioningState": "Succeeded"
+ },
+ "protectionState": "ProtectionConfigured",
+ "resourceGroup": "testBkpVaultRG",
+ "sku": null,
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "tags": null,
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "type": "microsoft.dataprotection/backupvaults/backupinstances",
+ "vaultName": "TestBkpVault",
+ "zones": null
+ }
+]
+```
+
+Once the instance is identified, fetch the relevant recovery point using the [az dataprotection recovery-point list](/cli/azure/dataprotection/recovery-point?view=azure-cli-latest&preserve-view=true#az_dataprotection_recovery_point_list) command.
+
+```azurecli-interactive
+az dataprotection recovery-point list --backup-instance-name diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 -g testBkpVaultRG --vault-name TestBkpVault
+
+[
+  {
+    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166/recoveryPoints/5081ad8f1e6c4548ae89536d0d45c493",
+    "name": "5081ad8f1e6c4548ae89536d0d45c493",
+    "properties": {
+      "friendlyName": "0f598ced-cbfe-4169-b962-ee94b0210490",
+      "objectType": "AzureBackupDiscreteRecoveryPoint",
+      "policyName": "DiskPSPolicy2",
+      "policyVersion": null,
+      "recoveryPointDataStoresDetails": [
+        {
+          "creationTime": "2021-06-08T09:01:57.708319+00:00",
+          "expiryTime": "2021-06-15T09:01:57.708319+00:00",
+          "id": "c2ad4629-f2ef-49b6-b3f8-50f3eb5404f4",
+          "metaData": null,
+          "rehydrationExpiryTime": null,
+          "rehydrationStatus": null,
+          "state": "COMMITTED",
+          "type": "OperationalStore",
+          "visible": true
+        }
+      ],
+      "recoveryPointId": "5081ad8f1e6c4548ae89536d0d45c493",
+      "recoveryPointTime": "2021-06-08T09:01:57.708319+00:00",
+      "recoveryPointType": "Incremental",
+      "retentionTagName": "Default",
+      "retentionTagVersion": "637553616953961153"
+    },
+    "resourceGroup": "testBkpVaultRG",
+    "systemData": null,
+    "type": "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints"
+  },
+  {
+    "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166/recoveryPoints/039322cc563049bcbdb77bd695d4c02c",
+    "name": "039322cc563049bcbdb77bd695d4c02c",
+    "properties": {
+      "friendlyName": "af6512b6-aa38-4966-b8e1-660c4eccdc0d",
+      "objectType": "AzureBackupDiscreteRecoveryPoint",
+      "policyName": "DiskPSPolicy2",
+      "policyVersion": null,
+      "recoveryPointDataStoresDetails": [
+        {
+          "creationTime": "2021-06-08T05:01:55.426507+00:00",
+          "expiryTime": "2021-06-15T05:01:55.426507+00:00",
+          "id": "c2ad4629-f2ef-49b6-b3f8-50f3eb5404f4",
+          "metaData": null,
+          "rehydrationExpiryTime": null,
+          "rehydrationStatus": null,
+          "state": "COMMITTED",
+          "type": "OperationalStore",
+          "visible": true
+        }
+      ],
+      "recoveryPointId": "039322cc563049bcbdb77bd695d4c02c",
+      "recoveryPointTime": "2021-06-08T05:01:55.426507+00:00",
+      "recoveryPointType": "Incremental",
+      "retentionTagName": "Default",
+      "retentionTagVersion": "637553616953961153"
+    },
+    "resourceGroup": "testBkpVaultRG",
+    "systemData": null,
+    "type": "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints"
+  }
+]
+```
+
+For example, the below query returns the latest recovery point.
+
+```azurecli-interactive
+az dataprotection recovery-point list --backup-instance-name diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 -g testBkpVaultRG --vault-name TestBkpVault --query "[0].id"
+
+"/subscriptions/62b829ee-7936-40c9-a1c9-47a93f9f3965/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/sarath-vault/backupInstances/clitest-clitest-3165cfe7-a932-11eb-9d24-9cfce85d4fae/recoveryPoints/5081ad8f1e6c4548ae89536d0d45c493"
+```
+
+### Preparing the restore request
+
+Construct the ARM ID of the new disk to be created using the target resource group (to which permissions were assigned, as detailed [above](#setting-up-permissions)) and the required disk name. We'll use an example of a disk named _CLITestDisk2_, in the resource group _targetrg_, in a different subscription.
+
+```azurecli-interactive
+targetDiskId="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.Compute/disks/CLITestDisk2"
+```
+
+Use the [az dataprotection backup-instance restore initialize-for-data-recovery](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_initialize_for_data_recovery) command to prepare the restore request with all relevant details.
+
+```azurecli-interactive
+az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureDisk --restore-location southeastasia --source-datastore OperationalStore --recovery-point-id /subscriptions/62b829ee-7936-40c9-a1c9-47a93f9f3965/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/sarath-vault/backupInstances/clitest-clitest-3165cfe7-a932-11eb-9d24-9cfce85d4fae/recoveryPoints/5081ad8f1e6c4548ae89536d0d45c493 --target-resource-id /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.Compute/disks/CLITestDisk2 > restore.json
+```
+
+```json
+{
+ "object_type": "AzureBackupRecoveryPointBasedRestoreRequest",
+ "recovery_point_id": "77594ce0470849e79b86a6875b726dca",
+ "restore_target_info": {
+ "datasource_info": {
+ "datasource_type": "Microsoft.Compute/disks",
+ "object_type": "Datasource",
+ "resource_id": "//subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.Compute/disks/CLITestDisk2",
+ "resource_location": "southeastasia",
+ "resource_name": "CLITestDisk2",
+ "resource_type": "Microsoft.Compute/disks",
+ "resource_uri": ""
+ },
+ "object_type": "RestoreTargetInfo",
+ "recovery_option": "FailIfExists",
+ "restore_location": "southeastasia"
+ },
+ "source_data_store_type": "OperationalStore"
+}
+
+```
+
+You can also validate whether the JSON file will succeed in creating new resources using the [az dataprotection backup-instance validate-for-restore](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_validate_for_restore) command.
+
+```azurecli-interactive
+az dataprotection backup-instance validate-for-restore -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 --restore-request-object restore.json
+```
+
+### Trigger the restore
+
+Use the [az dataprotection backup-instance restore trigger](/cli/azure/dataprotection/backup-instance/restore?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_restore_trigger) command to trigger the restore with the request prepared above.
+
+```azurecli-interactive
+az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 --parameters restore.json
+```
+
+## Tracking job
+
+Track all jobs using the [az dataprotection job list](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list) command. You can list all jobs and fetch a particular job detail.
+
+You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job?view=azure-cli-latest&preserve-view=true#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job, which can be in any Backup vault.
+
+```azurecli-interactive
+az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --operation Restore
+```
+
+## Next steps
+
+[Azure Disk Backup FAQ](/azure/backup/disk-backup-faq)
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
The timestamp field must match one of these two formats:
## <span id="table">Azure Table Storage</span>
-* **Connection String**: Please create an SAS (shared access signature) URL and fill in here. The most straightforward way to generate a SAS URL is using the Azure Portal. By using the Azure portal, you can navigate graphically. To create an SAS URL via the Azure portal, first, navigate to the storage account youΓÇÖd like to access under the Settings section then click Shared access signature. Check at least "Table" and "Object" checkboxes, then click the Generate SAS and connection string button. Table service SAS URL is what you need to copy and fill in the text box in the Metrics Advisor workspace.
+* **Connection String**: Create a SAS (shared access signature) URL and fill it in here. The most straightforward way to generate a SAS URL is by using the Azure portal, which lets you navigate graphically. To create a SAS URL via the Azure portal, first navigate to the storage account you'd like to access, then under the Settings section click Shared access signature. Check at least the "Table" and "Object" checkboxes, then click the Generate SAS and connection string button. The Table service SAS URL is what you need to copy and fill in the text box in the Metrics Advisor workspace.
* **Table Name**: Specify a table to query against. This can be found in your Azure Storage Account instance. Click **Tables** in the **Table Service** section.
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
* **Host**: Specify the master host of Elasticsearch Cluster. * **Port**: Specify the master port of Elasticsearch Cluster. * **Authorization Header**: Specify the authorization header value of Elasticsearch Cluster.
-* **Query**: Specify the query to get data. Placeholder @StartTime is supported.(e.g. when data of 2020-06-21T00:00:00Z is ingested, @StartTime = 2020-06-21T00:00:00)
+* **Query**: Specify the query to get data. Placeholder `@StartTime` is supported. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@StartTime = 2020-06-21T00:00:00`.
## <span id="http">HTTP request</span> * **Request URL**: An HTTP url that can return a JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`. * **Request HTTP method**: Use GET or POST. * **Request header**: Could add basic authentication.
-* **Request payload**: Only JSON payload is supported. Placeholder @StartTime is supported in the payload. The response should be in the following JSON format: [{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}].(e.g. when data of 2020-06-21T00:00:00Z is ingested, @StartTime = 2020-06-21T00:00:00.0000000+00:00)
+* **Request payload**: Only a JSON payload is supported. The placeholder `@StartTime` is supported in the payload. The response should be in the following JSON format: `[{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}]`. For example, when data of `2020-06-21T00:00:00Z` is ingested, `@StartTime = 2020-06-21T00:00:00.0000000+00:00`.
## <span id="influxdb">InfluxDB (InfluxQL)</span>
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
## Next steps * While waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
-* When your metric data is ingested, you can [Configure metrics and fine tune detecting configuration](how-tos/configure-metrics.md).
+* When your metric data is ingested, you can [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/glossary.md
# Metrics Advisor glossary of common vocabulary and concepts
-This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service .
+This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service.
## Data feed
A metric is a quantifiable measure that is used to monitor and assess the status
## Dimension
-A dimension is one or more categorical values. The combination of those values identify a particular univariate time series, for example: country, language, tenant, and so on.
+A dimension is one or more categorical values. The combination of those values identifies a particular univariate time series, for example: country, language, tenant, and so on.
## Multi-dimensional metric
Start time is the time that you want Metrics Advisor to begin ingesting data fro
In Metrics Advisor, confidence boundaries represent the sensitivity of the algorithm used, and are used to filter out overly sensitive anomalies. On the web portal, confidence bounds appear as a transparent blue band. All the points within the band are treated as normal points.
-Metrics Advisor provides tools to adjust the sensitivity of the algorithms used. See [How to: Configure metrics and fine tune detecting configuration](how-tos/configure-metrics.md) for more information.
+Metrics Advisor provides tools to adjust the sensitivity of the algorithms used. See [How to: Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md) for more information.
![Confidence bounds](media/confidence-bounds.png)
Metrics Advisor lets you create and subscribe to real-time alerts. These alerts
## Anomaly incident
-After a detection configuration is applied to metrics, incidents are generated whenever any series within it has an anomaly. In large data sets this can be overwhelming, so Metrics Advisor groups series of anomalies within a metric into an incident. The service will also evaluate the severity and provide tools for [diagnosing the incident](how-tos/diagnose-incident.md).
+After a detection configuration is applied to metrics, incidents are generated whenever any series within it has an anomaly. In large data sets this can be overwhelming, so Metrics Advisor groups series of anomalies within a metric into an incident. The service will also evaluate the severity and provide tools for [diagnosing an incident](how-tos/diagnose-an-incident.md).
-### Incident tree
+### Diagnostic tree
In Metrics Advisor, you can apply anomaly detection on metrics, then Metrics Advisor automatically monitors all time series of all dimension combinations. Whenever there is any anomaly detected, Metrics Advisor aggregates anomalies into incidents.
-After an incident occurs, Metrics Advisor will provide an incident tree with a hierarchy of contributing anomalies, and identify ones with the biggest impact. Each incident has a root cause anomaly, which is the top node of the tree.
+After an incident occurs, Metrics Advisor will provide a diagnostic tree with a hierarchy of contributing anomalies, and identify ones with the biggest impact. Each incident has a root cause anomaly, which is the top node of the tree.
### Anomaly grouping
-Metrics Advisor provides the capability to find related time series with a similar patterns. It can also provide deeper insights into the impact on other dimensions, and correlate the anomalies.
+Metrics Advisor provides the capability to find related time series with similar patterns. It can also provide deeper insights into the impact on other dimensions, and correlate the anomalies.
### Time series comparison
cognitive-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/alerts.md
After an anomaly is detected by Metrics Advisor, an alert notification will be t
## Create a hook
-Metrics Advisor supports three different types of hooks: email hook, web hook and Azure DevOps. You can choose the one that works for your specific scenario.
+Metrics Advisor supports four different types of hooks: email, Teams, webhook, and Azure DevOps. You can choose the one that works for your specific scenario.
### Email hook > [!Note]
-> Metrics Advisor resource administrators need to configure the Email settings, and input SMTP related information into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](/azure/cognitive-services/metrics-advisor/faq#how-to-set-up-email-settings-and-enable-alerting-by-email-).
+> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-).
-To create an email hook, the following parameters are available:
-An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: *Data feed not available* alerts, and *Incident reports* which contain one or multiple anomalies.
+An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: **Data feed not available** alerts, and **Incident reports**, which contain one or multiple anomalies.
+
+To create an email hook, the following parameters are available:
|Parameter |Description | ||| | Name | Name of the email hook |
-| Email to| Email addresses that would send alert to|
-| External link | Optional field which enables a customized redirect, such as for troubleshooting notes. |
+| Email to| Email addresses to send alerts to|
+| External link | Optional field, which enables a customized redirect, such as for troubleshooting notes. |
| Customized anomaly alert title | Title template that supports the placeholders `${severity}`, `${alertSettingName}`, `${datafeedName}`, `${metricName}`, `${detectConfigName}`, `${timestamp}`, `${topDimension}`, `${incidentCount}`, and `${anomalyCount}`. See the illustrative rendering after this table. |
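As a purely illustrative example (the substitution is performed by Metrics Advisor itself, and the template and values below are hypothetical), a customized title might render like this:

```python
from string import Template

# Hypothetical title template built from the placeholders listed in the table above.
title_template = Template("[${severity}] ${anomalyCount} anomalies on ${metricName} (${datafeedName}) at ${timestamp}")

# Illustrative values only; the real values come from the triggered alert.
title = title_template.substitute(
    severity="High",
    anomalyCount=3,
    metricName="revenue",
    datafeedName="SampleDataFeed",
    timestamp="2020-06-21T00:00:00Z",
)
print(title)  # [High] 3 anomalies on revenue (SampleDataFeed) at 2020-06-21T00:00:00Z
```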
-After you click **OK**, an email hook will be created. You can use it in any alert settings to receive anomaly alerts.
+After you select **OK**, an email hook will be created. You can use it in any alert settings to receive anomaly alerts. Refer to the tutorial [Enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) for detailed steps.
+
+### Teams hook
+
+A Teams hook is the channel for anomaly alerts to be sent to a channel in Microsoft Teams. A Teams hook is implemented through an "Incoming webhook" connector. You may need to create an "Incoming webhook" connector in your target Teams channel in advance and copy its URL, then return to your Metrics Advisor workspace.
+
+Select the "Hooks" tab in the left navigation bar, and select the "Create hook" button at the top right of the page. Choose the "Teams" hook type; the following parameters are provided:
+
+|Parameter |Description |
+|||
+| Name | Name of the Teams hook |
+| Connector URL | The URL copied from the "Incoming webhook" connector created in the target Teams channel. |
+
+After you select **OK**, a Teams hook will be created. You can use it in any alert settings to send anomaly alerts to the target Teams channel. Refer to the tutorial [Enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) for detailed steps.
### Web hook
After you click **OK**, an email hook will be created. You can use it in any ale
`{"timestamp":"2019-09-11T00:00:00Z","alertSettingGuid":"49635104-1234-4c1c-b94a-744fc920a9eb"}` > * When a web hook is created or modified, the API will be called as a test with an empty request body. Your API needs to return a 200 HTTP code.
-A web hook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided api when an alert is triggered. All alerts can be sent through a web hook.
+A web hook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a web hook.
To create a web hook, you will need to add the following information:
When a notification is pushed through a web hook, you can use the following APIs
- `query_alert_result_anomalies` - `query_alert_result_incidents`
+By using a web hook and Azure Logic Apps, it's possible to send email notifications **without an SMTP server configured**. Refer to the tutorial [Enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-logic-apps-teams-and-smtp) for detailed steps.
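For reference, here's a minimal sketch of a user-provided webhook endpoint, assuming Flask; the route and port are hypothetical. The only behaviors taken from the description above are returning HTTP 200 for the empty test request sent when the hook is created or modified, and accepting the alert JSON body that contains `timestamp` and `alertSettingGuid`.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/metrics-advisor/alert", methods=["POST"])
def receive_alert():
    payload = request.get_json(silent=True)
    if not payload:
        # Empty body: the test call made when the hook is created or modified.
        return "", 200
    # A real alert: use timestamp and alertSettingGuid with the query APIs
    # (for example, query_alert_result_anomalies) to fetch the details.
    timestamp = payload.get("timestamp")
    alert_setting_guid = payload.get("alertSettingGuid")
    print(f"Alert {alert_setting_guid} triggered at {timestamp}")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```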
+ ### Azure DevOps
-Metrics Advisor also supports automatically creating a work item in Azure DevOps to track issues/bugs when any anomaly detected. All alerts can be sent through Azure DevOps hooks.
+Metrics Advisor also supports automatically creating a work item in Azure DevOps to track issues/bugs when any anomaly is detected. All alerts can be sent through Azure DevOps hooks.
To create an Azure DevOps hook, you will need to add the following information
To create an Azure DevOps hook, you will need to add the following information
> [!Note] > You need to grant write permissions if you want Metrics Advisor to create work items based on anomaly alerts.
-> After creating hooks, you can use them in any of your alert settings. Manage your hooks in the **hook settings** page.
+> After creating hooks, you can use them in any of your alert settings. Manage your hooks in the **hook settings** page.
## Add or edit alert settings
-Go to metrics detail page to find the **Alert settings** section, in the bottom left corner of metrics detail page. It lists all alert settings that apply to the selected detection configuration. When a new detection configuration is created, there's no alert setting, and no alerts will be sent.
+Go to metrics detail page to find the **Alert settings** section, in the bottom-left corner of the metrics detail page. It lists all alert settings that apply to the selected detection configuration. When a new detection configuration is created, there's no alert setting, and no alerts will be sent.
You can use the **add**, **edit** and **delete** icons to modify alert settings. :::image type="content" source="../media/alerts/alert-setting.png" alt-text="Alert settings menu item.":::
-Click the **add** or **edit** buttons to get a window to add or edit your alert settings.
+Select the **add** or **edit** buttons to get a window to add or edit your alert settings.
:::image type="content" source="../media/alerts/edit-alert.png" alt-text="Add or edit alert settings":::
-**Alert setting name**: The name of this alert setting. It will be displayed in the alert email title.
+**Alert setting name**: The name of the alert setting. It will be displayed in the alert email title.
**Hooks**: The list of hooks to send alerts to.
-The section marked in the screenshot above are the settings for one detecting configuration. You can set different alert settings for different detection configurations. Choose the target configuration using the third drop-down list in this window.
+The section marked in the screenshot above are the settings for one detection configuration. You can set different alert settings for different detection configurations. Choose the target configuration using the third drop-down list in this window.
### Filter settings The following are filter settings for one detection configuration.
-**Alert For** has 4 options for filtering anomalies:
+**Alert For** has four options for filtering anomalies:
* **Anomalies in all series**: All anomalies will be included in the alert. * **Anomalies in the series group**: Filter series by dimension values. Set specific values for some dimensions. Anomalies will only be included in the alert when the series matches the specified value. * **Anomalies in favorite series**: Only the series marked as favorite will be included in the alert. |
-* **Anomalies in top N of all series**: This filter is for the case that you only care about the series whose value is in the top N. We will look back some timestamps, and check if value of the series at these timestamp are in top N. If the "in top n" count is larger than the specified number, the anomaly will be included in an alert. |
+* **Anomalies in top N of all series**: This filter is for the case where you only care about series whose values are in the top N. Metrics Advisor will look back over previous timestamps, and check if the values of the series at these timestamps are in the top N. If the "in top N" count is larger than the specified number, the anomaly will be included in an alert, as shown in the sketch after this list.
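The following is a minimal sketch of how such a top-N check could work; the function names and the exact look-back behavior are assumptions for illustration, not the service's actual implementation.

```python
def in_top_n_count(candidate, all_series, n, lookback):
    """Count how many of the last `lookback` timestamps the candidate series
    ranks in the top N by value. `candidate` is a list of values (newest last);
    `all_series` is a list of such lists aligned on the same timestamps."""
    count = 0
    for t in range(-lookback, 0):
        values_at_t = sorted((series[t] for series in all_series), reverse=True)
        cutoff = values_at_t[min(n, len(values_at_t)) - 1]
        if candidate[t] >= cutoff:
            count += 1
    return count

def include_in_alert(candidate, all_series, n, lookback, threshold):
    # The anomaly is included when the "in top N" count exceeds the threshold.
    return in_top_n_count(candidate, all_series, n, lookback) > threshold
```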
-**Filter anomaly options** is an additional filter with the following options:
+**Filter anomaly options** is an extra filter with the following options:
-- **severity** : The anomaly will only be included when the anomaly severity is within the specified range.-- **Snooze** : Stop alerts temporarily for anomalies in the next N points (period), when triggered in an alert.
- - **snooze type** : When set to **Series**, a triggered anomaly will only snooze its series. For **Metric**, one triggered anomaly will snooze all the series in this metric.
- - **snooze number** : the number of points (period) to snooze.
- - **reset for non-successive** : When selected, a triggered anomaly will only snooze the next n successive anomalies. If one of the following data points isn't an anomaly, the snooze will be reset from that point; When unselected, one triggered anomaly will snooze next n points (period), even if successive data points aren't anomalies.
-- **value** (optional) : Filter by value. Only point values that meet the condition, anomaly will be included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
+- **Severity**: The anomaly will only be included when the anomaly severity is within the specified range.
+- **Snooze**: Stop alerts temporarily for anomalies in the next N points (period), when triggered in an alert.
+ - **snooze type**: When set to **Series**, a triggered anomaly will only snooze its series. For **Metric**, one triggered anomaly will snooze all the series in this metric.
+ - **snooze number**: the number of points (period) to snooze.
+  - **reset for non-successive**: When selected, a triggered anomaly will only snooze the next n successive anomalies. If one of the following data points isn't an anomaly, the snooze will be reset from that point. When unselected, one triggered anomaly will snooze the next n points (period), even if successive data points aren't anomalies.
+- **value** (optional): Filter by value. Only anomalies on points whose values meet the condition will be included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
Anomalies not filtered out will be sent in an alert. ### Add cross-metric settings
-Click **+ Add cross-metric settings** in the alert settings page to add another section.
+Select **+ Add cross-metric settings** in the alert settings page to add another section.
The **Operator** selector is the logical relationship of each section, to determine if they send an alert.
The **Operator** selector is the logical relationship of each section, to determ
|AND | Only send an alert if a series matches each alert section, and all data points are anomalies. If the metrics have different dimension names, an alert will never be triggered. | |OR | Send the alert if at least one section contains anomalies. | ## Next steps - [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-incident.md).-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
cognitive-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/anomaly-feedback.md
Another way to view feedback history is from a series. You will see several butt
> Anyone who has access to the metric is permitted to give feedback, so you may see feedback given by other datafeed owners. If you edit the same point as someone else, your feedback will overwrite the previous feedback entry. ## Next steps-- [Diagnose an incident](diagnose-incident.md).-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
- [Configure alerts and get notifications using a hook](../how-tos/alerts.md)
cognitive-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/configure-metrics.md
Last updated 09/10/2020
-# How to: Configure metrics and fine tune detecting configuration
+# How to: Configure metrics and fine tune detection configuration
Use this article to start configuring your Metrics Advisor instance using the web portal. To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
You can also select time ranges, and change the layout of the page.
> - The start time is inclusive. > - The end time is exclusive.
-You can click the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-incident.md).
+You can click the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
-## Tune the detecting configuration
+## Tune the detection configuration
-A metric can apply one or more detecting configurations. There is a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
+A metric can apply one or more detection configurations. There is a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
### Tune the configuration for all series in current metric
Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it
Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a big impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
-Click on an incident to go to the **Incidents analysis** page where you can see more details about it. Click on **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-incident.md) page where you can find all incidents under the specific metric.
+Click on an incident to go to the **Incidents analysis** page where you can see more details about it. Click on **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
## Subscribe anomalies for notification
If you'd like to get notified whenever an anomaly is detected, you can subscribe
## Next steps - [Configure alerts and get notifications using a hook](alerts.md) - [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-incident.md).
+- [Diagnose an incident](diagnose-an-incident.md).
cognitive-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/diagnose-an-incident.md
+
+ Title: Diagnose an incident using Metrics Advisor
+
+description: Learn how to diagnose an incident using Metrics Advisor, and get detailed views of anomalies in your data.
++++++ Last updated : 04/15/2021+++
+# Diagnose an incident using Metrics Advisor
+
+## What is an incident?
+
+When there are anomalies detected on multiple time series within one metric at a particular timestamp, Metrics Advisor will automatically group anomalies that **share the same root cause** into one incident. An incident usually indicates a real issue; Metrics Advisor performs analysis on top of it and provides automatic root cause analysis insights.
+
+This significantly reduces the effort of reviewing each individual anomaly, and quickly surfaces the most important contributing factor to an issue.
+
+An alert generated by Metrics Advisor may contain multiple incidents and each incident may contain multiple anomalies captured on different time series at the same timestamp.
+
+## Paths to diagnose an incident
+
+- **Diagnose from an alert notification**
+
+    If you've configured a hook of the email/Teams type and applied at least one alerting configuration, you will receive continuous alert notifications escalating incidents that are analyzed by Metrics Advisor. Within the notification, there's an incident list and a brief description. Each incident has a **"Diagnose"** button; selecting it directs you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/alert-notification.png" alt-text="Diagnose from an alert notification":::
+
+- **Diagnose from an incident in "Incident hub"**
+
+    There's a central place in Metrics Advisor that gathers all incidents that have been captured and makes it easy to track any ongoing issues. Selecting the **Incident hub** tab in the left navigation bar lists all incidents within the selected metrics. Within the incident list, select one to view detailed diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-list.png" alt-text="Diagnose from an incident in Incident hub":::
+
+- **Diagnose from an incident listed in metrics page**
+
+    Within the metrics detail page, there's a tab named **Incidents**, which lists the latest incidents captured for this metric. The list can be filtered by the severity of the incidents or the dimension value of the metrics.
+
+ Selecting one incident in the list will direct you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-in-metrics.png" alt-text="Diagnose from an incident listed in metrics page":::
+
+## Typical diagnostic flow
+
+After being directed to the incident detail page, you're able to take advantage of the insights that are automatically analyzed by Metrics Advisor to quickly locate the root cause of an issue, or use the analysis tool to further evaluate the issue's impact. There are three sections in the incident detail page, which correspond to three major steps for diagnosing an incident.
+
+### Step 1. Check summary of current incident
+
+The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
+
+- Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity", and "total anomalies included". By reading this, you can get a basic understanding of an ongoing issue and its impact.
+- Actions & tracings: this section is used to facilitate team collaboration on an ongoing incident. Sometimes one incident may require effort from members across teams to analyze and resolve it. Everyone who has permission to view the incident can add an action or a tracing event.
+
+    For example, after diagnosing the incident and identifying the root cause, an engineer can add a tracing item with the type "customized" and input the root cause in the comment section, leaving the status as "Active". Other teammates can then share the same information and know that someone is working on the fix. You can also add an "Azure DevOps" item to track the incident with a specific task or bug.
++
+- Analyzed root cause is an automatically generated result. Metrics Advisor analyzes all anomalies that are captured on time series within one metric, across different dimension values, at the same timestamp. It then performs correlation and clustering to group related anomalies together, and generates root cause advice.
+
+For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, using the **Analyzed root cause** should be the most efficient way to diagnose the current incident.
++
+### Step 2. View cross-dimension diagnostic insights
+
+After getting basic info and automatic analysis insights, you can get more detailed info on abnormal status across other dimensions within the same metric in a holistic way, using the **"Diagnostic tree"**.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy, which is named the **Diagnostic tree**. For example, a "revenue" metric is monitored by two dimensions: "region" and "category". In addition to the concrete dimension values, there needs to be an **aggregated** dimension value, like **"SUM"**. The time series with "region" = **"SUM"** and "category" = **"SUM"** is then categorized as the root node within the tree. Whenever an anomaly is captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value contributed the most to the parent node anomaly. Select each node to expand it and see detailed information.
++
+- To enable an "aggregated" dimension value in your metrics
+
+    Metrics Advisor supports performing "Roll-up" on dimensions to calculate an "aggregated" dimension value. The diagnostic tree supports diagnosing on **"SUM", "AVG", "MAX", "MIN", "COUNT"** aggregations. To enable an "aggregated" dimension value, you can enable the "Roll-up" function during data onboarding. Make sure your metric is **mathematically computable** and that the aggregated dimension has real business value. A small sketch of what a "SUM" roll-up produces is shown after this list.
+
+ :::image type="content" source="../media/diagnostics/automatic-roll-up.png" alt-text="Roll-up settings":::
+
+- If there's no "aggregated" dimension value in your metrics
+
+    If there's no "aggregated" dimension value in your metrics and the "Roll-up" function is not enabled during data onboarding, no metric value is calculated for the "aggregated" dimension. It will show up as a gray node in the tree and can be expanded to view its child nodes.
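As a minimal sketch (assuming pandas, with hypothetical toy values), this is roughly what a "SUM" roll-up produces for the "revenue" example above: an extra "SUM" value on each dimension, which becomes the parent and root nodes of the tree.

```python
import pandas as pd

# Toy multi-dimensional metric: revenue by region and category (hypothetical values).
df = pd.DataFrame({
    "region":   ["Karachi", "Karachi", "Seattle", "Seattle"],
    "category": ["A", "B", "A", "B"],
    "revenue":  [10.0, 20.0, 30.0, 40.0],
})

# A SUM roll-up adds an aggregated "SUM" value for each dimension.
by_region = df.groupby("region", as_index=False)["revenue"].sum().assign(category="SUM")
by_category = df.groupby("category", as_index=False)["revenue"].sum().assign(region="SUM")
total = pd.DataFrame({"region": ["SUM"], "category": ["SUM"], "revenue": [df["revenue"].sum()]})

# The row with region == "SUM" and category == "SUM" corresponds to the root node.
rolled_up = pd.concat([df, by_region, by_category, total], ignore_index=True)
print(rolled_up)
```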
+
+#### Legend of diagnostic tree
+
+There are three kinds of nodes in the diagnostic tree:
+- **Blue node**, which corresponds to a time series with real metric value.
+- **Gray node**, which corresponds to a virtual time series with no metric value; it's a logical node.
+- **Red node**, which corresponds to the top impacted time series of the current incident.
+
+For each node, the abnormal status is described by the color of the node border:
+- **Red border** means there's an anomaly captured on the time series corresponding to the incident timestamp.
+- **Non-red border** means there's no anomaly captured on the time series corresponding to the incident timestamp.
+
+#### Display mode
+
+There are two display modes for a diagnostic tree: only show anomaly series or show major proportions.
+
+- **Only show anomaly series mode** lets you focus on the current anomalies captured on different series, and diagnose the root cause of the top impacted series.
+- **Show major proportions** lets you check the abnormal status of the major proportions of the top impacted series. In this mode, the tree shows both series with anomalies detected and series with no anomalies, but focuses more on important series.
+
+#### Analyze options
+
+- **Show delta ratio**
+
+    "Delta ratio" is the percentage of the current node's delta compared to the parent node's delta. Here's the formula:
+
+ (real value of current node - expected value of current node) / (real value of parent node - expected value of parent node) * 100%
+
+    This is used to analyze the major contribution to the parent node's delta.
+
+- **Show value proportion**
+
+    "Value proportion" is the percentage of the current node's value compared to the parent node's value. Here's the formula:
+
+ (real value of current node / real value of parent node) * 100%
+
+    This is used to evaluate the proportion of the current node within the whole. A minimal sketch of both calculations follows this list.
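Here's a minimal sketch of both calculations, assuming the real and expected values of the current node and its parent node are known; the example values are hypothetical.

```python
def delta_ratio(node_real, node_expected, parent_real, parent_expected):
    """Percentage of the current node's delta relative to the parent node's delta."""
    return (node_real - node_expected) / (parent_real - parent_expected) * 100

def value_proportion(node_real, parent_real):
    """Percentage of the current node's value relative to the parent node's value."""
    return node_real / parent_real * 100

# Hypothetical values: the node explains 80% of the parent's delta
# while accounting for only 20% of its value.
print(delta_ratio(60, 20, 100, 50))   # 80.0
print(value_proportion(20, 100))      # 20.0
```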
+
+By using the "Diagnostic tree", you can locate the root cause of the current incident in a specific dimension. This significantly reduces the effort of viewing each individual anomaly or pivoting through different dimensions to find the major anomaly contribution.
+
+### Step 3. View cross-metrics diagnostic insights using "Metrics graph"
+
+Sometimes it's hard to analyze an issue by checking the abnormal status of a single metric; you need to correlate multiple metrics together. To do this, you can configure a **Metrics graph**, which indicates the relationships between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started.
+
+#### Check anomaly status on root cause dimension within "Metrics graph"
+
+By using the above cross-dimension diagnostic result, the root cause is limited to a specific dimension value. Then use the "Metrics graph" and filter by the analyzed root cause dimension to check anomaly status on other metrics.
+
+For example, suppose there's an incident captured on the "revenue" metric. The top impacted series is at the global region with "region" = "SUM". By using cross-dimension diagnostics, the root cause has been located at "region" = "Karachi". There's a pre-configured metrics graph that includes the metrics "revenue", "cost", "DAU", "PLT (page load time)", and "CHR (cache hit rate)".
+
+Metrics Advisor will automatically filter the metrics graph by the root cause dimension "region" = "Karachi" and display the anomaly status of each metric. By analyzing the relationships between metrics and their anomaly status, you can gain further insight into the final root cause.
++
+#### Auto related anomalies
+
+By applying the root cause dimension filter on the metrics graph, anomalies on each metric at the timestamp of the current incident will be automatically related. Those anomalies should be related to the identified root cause of the current incident.
++
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
cognitive-services Diagnose Incident https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/diagnose-incident.md
- Title: Diagnose incidents using Metrics Advisor-
-description: Learn how to diagnose an incidents using Metrics Advisor, and get detailed views of anomalies in your data.
------ Previously updated : 08/19/2020---
-# How-to: Diagnose an incident using Metrics Advisor
-
-Metrics Advisor provides several features for diagnostics, and gives an in-depth view of detected incidents, and provide root-cause analysis. When a group of anomalies detected on a metric, Metrics Advisor will group anomalies into a hierarchy and analyze on top of it.
-
-> [!NOTE]
-> Currently Metrics Advisor supports incident diagnostics for metrics with at least one dimension, and measure with the *numeric* type. Your metric needs to have an aggregated dimension value like SUM for each dimension, which is used to build the diagnostics hierarchy. Metrics Advisor offers [**Automatic roll up settings**](onboard-your-data.md#automatic-roll-up-settings) to help with generating aggregated values.
-
-Click on **Incident hub** in the left navigation window to see all incidents under a given metric. At the top of the page, you can select different metrics to see their detection configurations, and detection results, and change the time range.
-
-> [!TIP]
-> You can also get to the **Incident hub** by:
-> * Clicking on a data point in the visualization for your metric, and using the links at the bottom of the **Add feedback** window that appears.
-> * Clicking on one of the anomalies in the **incidents** tab for your metric.
-
-The **overview** section contains detection results, including counts of the anomalies and alerts within in the selected time range.
--
-Detected incidents within the selected metric and time range are listed in the **Incident list**. There are options to filter and order the incidents. For example, by severity. Click on one of the incidents to go to the **Incident** page for further diagnostics.
--
-The **Diagnostic** section lets you perform in-depth analysis on an incident, and tools to identify root-causes.
--
-## Root cause advice
-
-When a group of anomalies is detected in a metric and causes an incident, Metrics Advisor will try to analyze the root cause of the incident. **Root cause advice** provides automatic suggestions for likely causes of an incident. This feature is only available if there is an aggregated value within dimension. If the metric has no dimension, the root cause will be itself. Root causes are listed at right side panel and there might be several reasons listed. If there is no data in the table, it means your dimension doesn't satisfy the requirements to perform the analysis.
---
-When the root cause metric is provided with specific dimensions, you can click **go to metric** to view more details of the metric.
-
-## Incident tree
-
-Along with automated analysis on potential root causes, Metrics Advisor supports manual root cause analysis, using the **Incident Tree**. There are two kinds of incident tree in incident page: the **quick diagnose** tree, and the **interactive tree**.
-
-The quick diagnosis tree is for diagnosing a current incident, and the root node is limited to current incident root node. You can expand and collapse the tree nodes by clicking on it, and its series will be shown together with the current incident series in the chart above the tree.
-
-The interactive tree lets you diagnose current incidents as well as older incidents, and ones that are related. When using the interactive tree, right click on a node to open an action menu, where you can choose a dimension to drill up through the root nodes, and a dimension to drill down for each node. By clicking on the cancel button of the dimension list on the top, you can remove the drilling up or down from this dimension. left click a node to select it and show its series together with current incident series in the chart.
--
-## Anomaly drill down
-
-When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
-
-To use the drill down function, click on the **Metric drilling** tab in the **Incident hub**.
--
-The **Dimensions** setting is a list of dimensions for an incident, you can select other available dimension values for each one. After the dimension values are changed. The **Timestamp** setting lets you view the current incident at different moments in time.
-
-### Select drilling options and choose a dimension
-
-There are two types of drill down options: **Drill down** and **Horizontal comparison**.
-
-> [!Note]
-> 1. For drill down, you can explore the data from different dimension values, except the currenly selected dimensions.
-> 2. For horizontal comparison, you can explore the data from different dimension values, except the all-up dimensions.
--
-### Value comparison for different dimension values
-
-The second section of the drill down tab is a table with comparisons for different dimension values. It includes the value, baseline value, difference value, delta value and whether it is an anomaly.
-
--
-### Value and expected value comparisons for different dimension value
-
-The third section of the drill down tab is an histogram with the values and expected values, for different dimension values. The histogram is sorted by the difference between value and expected value. You can find the unexpected value with the biggest impact easily. For example, in the above picture, we can find that, except the all up value, **US7** contributes the most for the anomaly.
--
-### Raw value visualization
-The last part of drill down tab is a line chart of the raw values. With this chart provided, you don't need to navigate to the metric page to view details.
--
-## View similar anomalies using Time Series Clustering
-
-When viewing an incident, you can use the **Similar time-series-clustering** tab to see the various series associated with it. Series in one group are summarized together. From the above picture, we can know that there is at least two series groups. This feature is only available if the following requirements are met:
-
-1. Metrics must have one or more dimensions or dimension values.
-2. The series within one metric must have a similar trend.
-
-Available dimensions are listed on the top the the tab, and you can make a selection to specify the series.
--
-## Compare time series
-
-Sometimes when an anomaly is detected on a specific time series, it's helpful to compare it with multiple other series in a single visualization.
-Click on the **Compare tools** tab, and then click on the blue **+ Add** button.
--
-Select a series from your data feed. You can choose the same granularity or a different one. Select the target dimensions and load the series trend, then click **Ok** to compare it with a previous series. The series will be put together in one visualization. You can continue to add more series for comparison and get further insights. Click the drop down menu at the top of the **Compare tools** tab to compare the time series data over a time-shifted period.
-
-> [!Warning]
-> To make a comparison, time series data analysis may require shifts in data points so the granularity of your data must support it. For example, if your data is weekly and you use the **Day over day** comparison, you will get no results. In this example, you would use the **Month over month** comparison instead.
-
-After selecting a time-shifted comparison, you can select whether you want to compare the data values, the delta values, or the percentage delta.
-
-> [!Note]
-> * **Data value** is the raw data value.
-> * **Delta value** is the difference between raw value and compared value.
-> * **Percentage delta value** is the difference between raw value and compared value divided by compared value.
-
-## Related incidents across metrics
-
-Sometimes you may need to check the incidents of different metrics at the same time, or related incidents in other metrics. You can find a list of related incidents in the **Cross Metrics Analysis** section.
--
-Before you can see related incidents for current metric, you need to add a relationship between metrics. Click **Metrics Graph Settings** to add a relationship. Only metrics with same dimension names can be related. Use the following parameters.
--- Current Data feed & Metric: the data feed and metric of current incident-- Direction: the direction of relationship between two metrics. (not effect to related incidents list now)-- Another Data feed & Metric : the data feed and metric to connect with current metric--
-## Next steps
--- [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
cognitive-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/further-analysis.md
+
+ Title: Further analyze an incident and evaluate impact
+
+description: Learn how to leverage analysis tools to further analyze an incident.
++++++ Last updated : 04/15/2021+++
+# Further analyze an incident and evaluate impact
+
+## Metrics drill down by dimensions
+
+When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
+
+To use the drill down function, click on the **Metric drilling** tab in the **Incident hub**.
++
+The **Dimensions** setting is a list of dimensions for an incident; you can select other available dimension values for each one, and the view updates after the dimension values are changed. The **Timestamp** setting lets you view the current incident at different moments in time.
+
+### Select drilling options and choose a dimension
+
+There are two types of drill down options: **Drill down** and **Horizontal comparison**.
+
+> [!Note]
+> - For drill down, you can explore the data from different dimension values, except the currently selected dimensions.
+> - For horizontal comparison, you can explore the data from different dimension values, except the all-up dimensions.
++
+### Value comparison for different dimension values
+
+The second section of the drill down tab is a table with comparisons for different dimension values. It includes the value, baseline value, difference value, delta value, and whether it is an anomaly.
+
++
+### Value and expected value comparisons for different dimension value
+
+The third section of the drill down tab is a histogram with the values and expected values for different dimension values. The histogram is sorted by the difference between value and expected value, so you can easily find the unexpected value with the biggest impact. For example, in the above picture, we can see that, aside from the all-up value, **US7** contributes the most to the anomaly.
++
+### Raw value visualization
+The last part of the drill down tab is a line chart of the raw values. With this chart provided, you don't need to navigate to the metric page to view details.
++
+## Compare time series
+
+Sometimes when an anomaly is detected on a specific time series, it's helpful to compare it with multiple other series in a single visualization.
+Click on the **Compare tools** tab, and then click on the blue **+ Add** button.
++
+Select a series from your data feed. You can choose the same granularity or a different one. Select the target dimensions and load the series trend, then click **Ok** to compare it with a previous series. The series will be put together in one visualization. You can continue to add more series for comparison and get further insights. Click the drop down menu at the top of the **Compare tools** tab to compare the time series data over a time-shifted period.
+
+> [!Warning]
+> To make a comparison, time series data analysis may require shifts in data points so the granularity of your data must support it. For example, if your data is weekly and you use the **Day over day** comparison, you will get no results. In this example, you would use the **Month over month** comparison instead.
+
+After selecting a time-shifted comparison, you can select whether you want to compare the data values, the delta values, or the percentage delta.
+
+## View similar anomalies using Time Series Clustering
+
+When viewing an incident, you can use the **Similar time-series-clustering** tab to see the various series associated with it. Series in one group are summarized together. From the above picture, we can see that there are at least two series groups. This feature is only available if the following requirements are met:
+
+- Metrics must have one or more dimensions or dimension values.
+- The series within one metric must have a similar trend.
+
+Available dimensions are listed at the top of the tab, and you can make a selection to specify the series.
+
cognitive-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/manage-data-feeds.md
Workspace access is controlled by the Metrics Advisor resource, which uses Azure
Metrics Advisor lets you grant permissions to different groups of people on different data feeds. There are two types of roles: -- Administrator: Who has full permissions to manage a data feed, including modify and delete.-- Viewer: Who has access to a read-only view of the data feed.
+- Administrator: Has full permissions to manage a data feed, including modify and delete.
+- Viewer: Has access to a read-only view of the data feed.
## Advanced settings
Action link templates are used to predefine actionable HTTP urls, which consist
:::image type="content" source="../media/action-link-template.png" alt-text="Action link template" lightbox="../media/action-link-template.png":::
-Once you've filled in the action link, click **Go to action link** on the incident list's action option, and incident tree's right-click menu. Replace the placeholders in the action link template with the corresponding values of the anomaly or incident.
+Once you've filled in the action link, click **Go to action link** on the incident list's action option, and diagnostic tree's right-click menu. Replace the placeholders in the action link template with the corresponding values of the anomaly or incident.
| Placeholder | Examples | Comment | | - | -- | - |
To configure an alert, you need to [create a hook](alerts.md#create-a-hook) firs
* **Grace period**: The Grace period setting is used to determine when to send an alert if no data points are ingested. The reference point is the time of first ingestion. If an ingestion fails, Metrics Advisor will keep trying at a regular interval specified by the granularity. If it continues to fail past the grace period, an alert will be sent.
-* **Auto snooze**: When this option is set to zero, each timestamp with *Not Available* triggers an alert. When a setting other than zero is specified, continuous timestamps after the first timestamp with *not available* are not triggered according to the the setting specified.
+* **Auto snooze**: When this option is set to zero, each timestamp with *Not Available* triggers an alert. When a setting other than zero is specified, continuous timestamps after the first timestamp with *not available* are not triggered according to the setting specified.
## Next steps-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
- [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-incident.md).
+- [Diagnose an incident](diagnose-an-incident.md).
cognitive-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/metrics-graph.md
Each metric in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Each metric has a separate model that is applied to it. In some cases however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. The **Metrics Graph** helps with this.
-As an example, if you have different streams of telemetry in separate metrics, Metrics Advisor will monitor them separately. If anomalies in one metric cause anomalies in others, finding those relations and the root cause in your data can be helpful when addressing incidents. The metrics graph enables you to create a visual topology graph of found anomalies.
+As an example, if you have different streams of telemetry in separate metrics, Metrics Advisor will monitor them separately. If anomalies in one metric cause anomalies in other metrics, finding those relationships and the root cause in your data can be helpful when addressing incidents. The metrics graph enables you to create a visual topology graph of found anomalies.
## Select a metric to put the first node to the graph
Click into an incident within the graph and scroll down to **cross metrics analy
## Next steps - [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-incident.md).-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
cognitive-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/how-tos/onboard-your-data.md
Use this article to learn about onboarding your data to Metrics Advisor.
Partial data is caused by inconsistencies between the data stored in Metrics Advisor and the data source. This can happen when the data source is updated after Metrics Advisor has finished pulling data. Metrics Advisor only pulls data from a given data source once.
-For example, if a metric has been onboarded to Metrics Advisor for monitoring. Metrics Advisor successfully grabs metric data at timestamp A and performs anomaly detection on it. However, if the metric data of that particular timestamp A has been refreshed after the data been ingested. New data value won't be retrieved.
+For example, suppose a metric has been onboarded to Metrics Advisor for monitoring, and Metrics Advisor successfully grabs metric data at timestamp A and performs anomaly detection on it. If the metric data for that particular timestamp A is refreshed after the data has been ingested, the new data value won't be retrieved.
You can try to [backfill](manage-data-feeds.md#backfill-your-data-feed) historical data (described later) to mitigate inconsistencies but this won't trigger new anomaly alerts, if alerts for those time points have already been triggered. This process may add additional workload to the system, and is not automatic.
Next you'll input a set of parameters to connect your time-series data source.
* **Source Type**: The type of data source where your time series data is stored. * **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, and Custom. The lowest interval the customization option supports is 60 seconds. * **Seconds**: The number of seconds when *granularityName* is set to *Customize*.
-* **Ingest data since (UTC)**: The baseline start time for data ingestion. *startOffsetInSeconds* is often used to add an offset to help with data consistency.
+* **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
Next, you'll need to specify the connection information for the data source, and the custom queries used to convert the data into the required schema. For details on the other fields and connecting different types of data sources, see [Add data feeds from different data sources](../data-feeds-from-different-sources.md).
After the connection string and query string are set, select **Verify and get sc
Once the data schema is loaded, select the appropriate fields.
-If the timestamp of a data point is omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as a timestamp. If you get a message that a column cannot be specified as a timestamp, check your query or data source, and whether there are multiple timestamps in the query result - not only in the preview data. When performing data ingestion, Metrics Advisor can only consume only one chunk (for example one day, one hour - according to the granularity) of time-series data from the given source each time.
+If the timestamp of a data point is omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as a timestamp. If you get a message that a column cannot be specified as a timestamp, check your query or data source, and whether there are multiple timestamps in the query result - not only in the preview data. When performing data ingestion, Metrics Advisor can only consume one chunk (for example one day, one hour - according to the granularity) of time-series data from the given source each time.
|Selection |Description |Notes | ||||
If the timestamp of a data point is omitted, Metrics Advisor will use the timest
|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country, language, tenant. You can select zero or more columns as dimensions. Note: be cautious when selecting a non-string column as a dimension. | Optional. | |**Ignore** | Ignore the selected column. | Optional. See the below text. |
-If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
+If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
| Row ID | Timestamp | Country | Language | Income | | | | | | |
You can also reload the progress of an ingestion by clicking **Refresh Progress*
## Next steps - [Manage your data feeds](manage-data-feeds.md) - [Configurations for different data sources](../data-feeds-from-different-sources.md)-- [Configure metrics and fine tune detecting configuration](configure-metrics.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/overview.md
Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/on
* Automatically monitor every time series within [multi-dimensional metrics](glossary.md#multi-dimensional-metric). * Use [parameter tuning](how-tos/configure-metrics.md) and [interactive feedback](how-tos/anomaly-feedback.md) to customize the model applied on your data, and future anomaly detection results.
-## Real-time alerts through multiple channels
+## Real-time notification through multiple channels
-Whenever anomalies are detected, Metrics Advisor is able to [send real time alerts](how-tos/alerts.md) through multiple channels using hooks, such as: email hooks, web hooks, and Azure DevOps hooks. Flexible alert rules let you customize which alerts are sent, and their destination.
+Whenever anomalies are detected, Metrics Advisor is able to [send real-time notifications](how-tos/alerts.md) through multiple channels using hooks, such as email hooks, web hooks, Teams hooks, and Azure DevOps hooks. Flexible alert configuration lets you customize when and where to send a notification.
## Smart diagnostic insights by analyzing anomalies
-Analyze anomalies detected on multi-dimensional metrics, and generate [smart diagnostic insights](how-tos/diagnose-incident.md) including most the most likely root cause, diagnostic trees, metric drilling, and more. By configuring [Metrics graph](how-tos/metrics-graph.md), cross metrics analysis can be enabled to help you visualize incidents.
+### Analyze root cause into specific dimension
+Metrics Advisor combines anomalies detected on the same multi-dimensional metric into a diagnostic tree to help you analyze the root cause down to a specific dimension. There are also automated insights, generated by analyzing the greatest contribution of each dimension.
+
+### Cross-metrics analysis using Metrics graph
+
+A [Metrics graph](./how-tos/metrics-graph.md) indicates the relationships between metrics. Cross-metrics analysis can be enabled to help you catch abnormal status across all related metrics in a holistic view, and eventually locate the root cause.
+
+Refer to [how to diagnose an incident](./how-tos/diagnose-an-incident.md) for more detail.
## Typical workflow
The workflow is simple: after onboarding your data, you can fine-tune the anomal
1. [Create an Azure resource](https://go.microsoft.com/fwlink/?linkid=2142156) for Metrics Advisor. 2. Build your first monitor using the web portal.
- 1. Onboard your data
- 2. Fine-tune anomaly detection
- 3. Subscribe to alerts
- 4. View diagnostic insights
+ 1. [Onboard your data](./how-tos/onboard-your-data.md)
+ 2. [Fine-tune anomaly detection configuration](./how-tos/configure-metrics.md)
+ 3. [Subscribe anomalies for notification](./how-tos/alerts.md)
+ 4. [View diagnostic insights](./how-tos/diagnose-an-incident.md)
3. Use the REST API to customize your instance.
+## Video
+* [Introducing Metrics Advisor](https://www.youtube.com/watch?v=0Y26cJqZMIM)
+* [New to Cognitive Services](https://www.youtube.com/watch?v=7tCLJHdBZgM)
+ ## Next steps * Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
cognitive-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
zone_pivot_groups: programming-languages-metrics-monitor
# Quickstart: Use the client libraries or REST APIs to customize your solution
-Get started with the the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks.
+Get started with the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks.
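+
+For orientation, here is a minimal sketch of creating a client with the Python package (`azure-ai-metricsadvisor`, the library referenced in the SDK change logs); the endpoint and key values are placeholders that you replace with your own:
+
+```python
+# Minimal sketch only: construct a Metrics Advisor client.
+# Install the package first: pip install azure-ai-metricsadvisor
+from azure.ai.metricsadvisor import MetricsAdvisorClient, MetricsAdvisorKeyCredential
+
+service_endpoint = "https://<resource-name>.cognitiveservices.azure.com"  # placeholder endpoint
+subscription_key = "<subscription-key>"                                   # placeholder key
+api_key = "<api-key>"                                                     # placeholder key
+
+credential = MetricsAdvisorKeyCredential(subscription_key, api_key)
+client = MetricsAdvisorClient(service_endpoint, credential)
+```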
Use Metrics Advisor to perform:
If you want to clean up and remove a Cognitive Services subscription, you can de
- [Onboard your data feeds](../how-tos/onboard-your-data.md) - [Manage data feeds](../how-tos/manage-data-feeds.md) - [Configurations for different data sources](../data-feeds-from-different-sources.md)-- [Configure metrics and fine tune detecting configuration](../how-tos/configure-metrics.md)
+- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
- [Adjust anomaly detection using feedback](../how-tos/anomaly-feedback.md)
cognitive-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/quickstarts/web-portal.md
After the data feed is added, Metrics Advisor will attempt to ingest metric data
When detection is applied, click one of the metrics listed in data feed to find the **Metric detail page** to: - View visualizations of all time series slices under this metric-- Update detecting configuration to meet expected results
+- Update detection configuration to meet expected results
- Set up notification for detected anomalies :::image type="content" source="../media/metric-details.png" alt-text="Metric details" lightbox="../media/metric-details.png":::
To view the diagnostic insights, click on the red dots on time series visualizat
:::image type="content" source="../media/incident-link.png" alt-text="Incident link" lightbox="../media/incident-link.png":::
-After clicking the link, you will be pivoted to the incident analysis page which analyzes on corresponding anomaly, with a bunch of diagnostics insights. At the top, there will be statistics about the incident, such as **Severity**, **Anomalies involved**, and impacted **Start time** and **End time**.
+After clicking the link, you're taken to the incident analysis page, which analyzes the corresponding anomaly and provides a set of diagnostic insights. There are three sections on the incident detail page, which correspond to the three major steps of diagnosing an incident.
-Next you'll see the ancestor anomaly of the incident, and automated root-cause advice. This automated root cause advice is generated by analyzing the incident tree of all related anomalies, including: deviation, distribution and contribution to the parent anomalies.
-
+- The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
+ :::image type="content" source="../media/diagnostics/incident-summary.png" alt-text="Incident summary":::
+- After getting basic info and automated analysis insights, you can get more detailed info on the abnormal status across other dimensions within the same metric in a holistic way by using the **Diagnostic tree**.
+ :::image type="content" source="../media/diagnostics/cross-dimension-diagnostic.png" alt-text="Cross dimension diagnostic using diagnostic tree":::
+- Last, view cross-metrics diagnostic insights using the **Metrics graph**.
+ :::image type="content" source="../media/diagnostics/cross-metrics-analysis.png" alt-text="Cross metrics analysis":::
Based on these insights, you already get a straightforward view of what is happening, the impact of the incident, and the most likely root cause, so that immediate action can be taken to resolve the incident as soon as possible.
-But you can also pivot across more diagnostics insights leveraging additional features to drill down anomalies by dimension, view similar anomalies and do comparison across metrics. Please find more at [How to: diagnose an incident](../how-tos/diagnose-incident.md).
+But you can also pivot across more diagnostic insights by using additional features to drill down into anomalies by dimension, view similar anomalies, and compare across metrics. Find more at [How to: diagnose an incident](../how-tos/diagnose-an-incident.md).
## Get notified when new anomalies are found
After creating a hook, an alert setting determines how and which alert notificat
- [Manage data feeds](../how-tos/manage-data-feeds.md) - [Configurations for different data sources](../data-feeds-from-different-sources.md) - [Use the REST API or Client libraries](./rest-api-and-client-library.md)-- [Configure metrics and fine tune detecting configuration](../how-tos/configure-metrics.md)
+- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
cognitive-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/tutorials/enable-anomaly-notification.md
+
+ Title: Metrics Advisor anomaly notification e-mails with Azure Logic Apps
+description: Learn how to automate sending e-mail alerts in response to Metrics Advisor anomalies
++++ Last updated : 05/20/2021 ++
+# Tutorial: Enable anomaly notification in Metrics Advisor
+
++
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a hook in Metrics Advisor
+> * Send Notifications with Azure Logic Apps
+> * Send Notifications to Microsoft Teams
+> * Send Notifications via SMTP server
+
+
+## Prerequisites
+### Create a Metrics Advisor resource
+
+To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+### Create a hook in Metrics Advisor
+A hook in Metrics Advisor is a bridge that enables customers to subscribe to metric anomalies and send notifications through different channels. There are four types of hooks in Metrics Advisor:
+
+- Email hook
+- Webhook
+- Teams hook
+- Azure DevOps hook
+
+Each hook type corresponds to a specific channel through which anomaly notifications are sent.
+
+
+## Send notifications with Logic Apps, Teams, and SMTP
+
+#### [Logic Apps](#tab/logic)
+
+### Send email notification by using Azure Logic Apps
+
+There are two common options for sending email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts; the other is to set up an SMTP server and use it to send email alerts directly. This section focuses on the first option, which is easier for customers who don't have an available SMTP server.
+
+**Step 1.** Create a webhook in Metrics Advisor
+
+A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook.
+
+Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Create hook** button. Choose a hook type of **web hook**. Fill in the required parameters and select **OK**. For detailed steps, refer to [create a webhook](../how-tos/alerts.md#web-hook).
+
+There's one extra parameter, **Endpoint**, that needs to be filled out; this can be done after completing Step 3 below.
++
+**Step 2.** Create a Logic Apps resource
+
+In the [Azure portal](https://portal.azure.com), create an empty Logic App by following the instructions in [Create your logic app](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). When you see the **Logic Apps Designer**, return to this tutorial.
++
+**Step 3.** Add a trigger of **When an HTTP request is received**
+
+- Azure Logic Apps uses various triggers to start the workflows that you define. For this use case, it uses the trigger **When an HTTP request is received**.
+
+- In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
+
+ ![Screenshot that shows the When an HTTP request dialog box and the Use sample payload to generate schema option selected. ](../media/tutorial/logic-apps-generate-schema.png)
+
+ Copy the following sample JSON into the textbox and select **Done**.
+
+ ```json
+ {
+ "properties": {
+ "value": {
+ "items": {
+ "properties": {
+ "alertInfo": {
+ "properties": {
+ "alertId": {
+ "type": "string"
+ },
+ "anomalyAlertingConfigurationId": {
+ "type": "string"
+ },
+ "createdTime": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "alertType": {
+ "type": "string"
+ },
+ "callBackUrl": {
+ "type": "string"
+ },
+ "hookId": {
+ "type": "string"
+ }
+ },
+ "required": [
+ "hookId",
+ "alertType",
+ "alertInfo",
+ "callBackUrl"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ }
+ ```
+
+- Choose 'POST' as the method and select **Save**. You can now see the URL of your HTTP request trigger. Select the copy icon to copy it and paste it into the **Endpoint** field in Step 1.
+
+ ![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png)
+
+**Step 4.** Add a next step using 'HTTP' action
+
+Signals that are pushed through the webhook contain only limited information, like the timestamp, alertID, and configurationID. Detailed information needs to be queried using the callback URL provided in the signal. This step queries the detailed alert info; a sketch of the equivalent request is shown after the list below.
+
+- Choose a method of 'GET'
+- Select 'callBackURL' from 'Dynamic content' list in 'URI'.
+- Enter a key of 'Content-Type' in 'Headers' and input a value of 'application/json'
+- Enter a key of 'x-api-key' in 'Headers' and get this value by clicking the **'API keys'** tab in your Metrics Advisor workspace. This step ensures the workflow has sufficient permissions for API calls.
+
+ ![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png)
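+
+For reference, the HTTP action performs a GET against the callback URL with those two headers. A minimal Python sketch of the equivalent request (outside Logic Apps, with placeholder values) is shown below:
+
+```python
+# Sketch only: query detailed alert info from the callback URL in the webhook signal.
+import requests
+
+callback_url = "<callBackUrl from the webhook payload>"  # placeholder
+headers = {
+    "Content-Type": "application/json",
+    "x-api-key": "<API key from the Metrics Advisor workspace>",  # placeholder
+}
+
+response = requests.get(callback_url, headers=headers)
+response.raise_for_status()
+alert_details = response.json()  # this is the payload parsed in the next step
+```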
+
+**Step 5.** Add a next step to 'Parse JSON'
+
+You need to parse the response of the API for easier formatting of email content.
+
+> [!NOTE]
+> This tutorial only shares a quick example; the final email format needs to be further designed.
+
+- Select 'Body' from 'Dynamic content' list in 'Content'
+- Select **Use sample payload to generate schema**. Copy the following sample JSON into the textbox and select **Done**.
+
+```json
+{
+ "properties": {
+ "@@nextLink": {},
+ "value": {
+ "items": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "IncidentSeverity": {
+ "type": "string"
+ },
+ "IncidentStatus": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "rootNode": {
+ "properties": {
+ "createdTime": {
+ "type": "string"
+ },
+ "detectConfigGuid": {
+ "type": "string"
+ },
+ "dimensions": {
+ "properties": {
+ },
+ "type": "object"
+ },
+ "metricGuid": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "properties": {
+ "properties": {
+ "AnomalySeverity": {
+ "type": "string"
+ },
+ "ExpectedValue": {}
+ },
+ "type": "object"
+ },
+ "seriesId": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ },
+ "value": {
+ "type": "number"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "required": [
+ "rootNode",
+ "properties"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+}
+```
+
+**Step 6.** Add a next step to 'Create HTML table'
+
+A large amount of information is returned from the API call; however, depending on your scenario, not all of it may be useful. Choose the items that you care about and would like included in the alert email. (A Python sketch of the equivalent transformation follows the screenshot below.)
+
+Below is an example of an HTML table that chooses 'timestamp', 'metricGUID' and 'dimension' to be included in the alert email.
+
+![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png)
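+
+Outside of Logic Apps, this shaping step amounts to turning the chosen fields into table rows. A hedged Python sketch (the keys mirror the parsed JSON schema above and are illustrative only) could look like this:
+
+```python
+# Sketch only: build a simple HTML table from selected alert fields.
+def build_html_table(anomalies):
+    header = "<tr><th>timestamp</th><th>metricGuid</th><th>dimensions</th></tr>"
+    rows = []
+    for item in anomalies:
+        node = item.get("rootNode", {})  # field names follow the sample schema above
+        rows.append(
+            "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
+                node.get("timestamp", ""),
+                node.get("metricGuid", ""),
+                node.get("dimensions", {}),
+            )
+        )
+    return "<table>" + header + "".join(rows) + "</table>"
+```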
+
+**Step 7.** Add the final step to 'send an email'
+
+There are several options to send email, both Microsoft-hosted and third-party offerings. You may need a tenant/account for your chosen option. For example, when choosing 'Office 365 Outlook' as the server, a sign-in process is prompted to build the connection and authorization. An API connection is then established to use the email server to send alerts.
+
+Fill in the content that you'd like to include to 'Body', 'Subject' in the email and fill in an email address in 'To'.
+
+![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
+
+#### [Teams Channel](#tab/teams)
+
+### Send anomaly notification through a Microsoft Teams channel
+This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
+
++
+**Step 1.** Add an 'Incoming Webhook' connector to your Teams channel
+
+- Navigate to the Teams channel that you'd like to send notifications to, and select '•••' (More options).
+- In the dropdown list, select 'Connectors'. Within the new dialog, search for 'Incoming Webhook' and click 'Add'.
+
+ ![Screenshot to create an incoming webhook](../media/tutorial/add-webhook.png)
+
+- If you are not able to view the 'Connectors' option, please contact your Teams group owners. Select 'Manage team', then select the 'Settings' tab at the top and check whether the setting of 'Allow members to create, update and remove connectors' is checked.
+
+ ![Screenshot to check teams settings](../media/tutorial/teams-settings.png)
+
+- Input a name for the connector. You can also upload an image to use as its avatar. Select 'Create', and the Incoming Webhook connector is added to your channel. A URL is generated at the bottom of the dialog; **be sure to select 'Copy'**, then select 'Done'.
+
+ ![Screenshot to copy URL](../media/tutorial/webhook-url.png)
+
+**Step 2.** Create a new 'Teams hook' in Metrics Advisor
+
+- Select 'Hooks' tab in left navigation bar, and select the 'Create hook' button at top right of the page.
+- Choose hook type of 'Teams', then input a name and paste the URL that you copied from the above step.
+- Select 'Save'.
+
+ ![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png)
+
+**Step 3.** Apply the Teams hook to an alert configuration
+
+Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metric detail page. You can create an 'alerting configuration' to subscribe to anomalies that are detected and be notified through a Teams channel.
+
+Select the '+' button and choose the hook that you created, fill in the other fields, and select 'Save'. You've now applied a Teams hook to an alert configuration, and any new anomalies will be sent as notifications through the Teams channel.
+
+![Screenshot that applies a Teams hook to an alert configuration](../media/tutorial/teams-hook-in-alert.png)
++
+#### [SMTP E-mail](#tab/smtp)
+
+### Send email notification by configuring an SMTP server
+
+This section will share the practice of using an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and have sufficient permission to get parameters like account name and password.
+
+**Step 1.** Assign your account the 'Cognitive Services Metrics Advisor Administrator' role
+
+- A user with subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal and select the Access control (IAM) tab.
+- Select 'Add role assignments'.
+- Pick a role of 'Cognitive Services Metrics Advisor Administrator', select your account as in the image below.
+- Select the 'Save' button; you've now been successfully added as an administrator of the Metrics Advisor resource. All the above actions need to be performed by a subscription administrator or resource group administrator. It might take up to one minute for the permissions to propagate.
+
+![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png)
+
+**Step 2.** Configure SMTP server in Metrics Advisor workspace
+
+After you've completed the above steps and have been successfully added as an administrator of the Metrics Advisor resource, wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace. You should be able to view a new tab named 'Email setting' on the left navigation panel. Select it to continue configuration.
+
+Parameters to be filled out:
+
+- SMTP server name (**required**): Fill in the name of your SMTP server provider. Most server names are written in the form "smtp.domain.com" or "mail.domain.com". Taking Office 365 as an example, it should be set to 'smtp.office365.com'.
+- SMTP server port (**required**): Port 587 is the default port for SMTP submission on the modern web. While you can use other ports for submission (more on those next), you should always start with port 587 as the default and only use a different port if circumstances dictate (like your host blocking port 587 for some reason).
+- Email sender(s) (**required**): This is the real email account that is responsible for sending emails. You may need to fill in the account name and password of the sender. You can set a quota threshold for the maximum number of alert emails to be sent within one minute per account. You can set multiple senders if there's a possibility of a large volume of alerts being sent in one minute, but at least one account should be set.
+- Send on behalf of (optional): If you have multiple senders configured but would like alert emails to appear to be sent from one account, you can use this field to align them. Note that you may need to grant the senders permission to send emails on behalf of that account.
+- Default CC (optional): Set a default email address that will be cc'd on all email alerts.
+
+Below is an example of a configured SMTP server:
+
+![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png)
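+
+If you want to verify these SMTP values (server name, port 587, sender account) outside of Metrics Advisor first, a minimal standalone test in Python might look like the following; the server, accounts, and password are placeholders, and this is only a connectivity check rather than anything Metrics Advisor runs:
+
+```python
+# Sketch only: verify that the SMTP server name, port, and sender credentials work.
+import smtplib
+from email.message import EmailMessage
+
+msg = EmailMessage()
+msg["Subject"] = "SMTP settings test"
+msg["From"] = "sender@contoso.com"    # placeholder sender account
+msg["To"] = "recipient@contoso.com"   # placeholder recipient
+msg.set_content("Test message to confirm the SMTP settings.")
+
+with smtplib.SMTP("smtp.office365.com", 587) as server:  # server name and port from the settings above
+    server.starttls()                                    # port 587 uses STARTTLS for submission
+    server.login("sender@contoso.com", "<password>")     # placeholder credentials
+    server.send_message(msg)
+```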
+
+**Step 3.** Create an email hook in Metrics Advisor
+
+After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion.
+
+**Step 4.** Apply the email hook to an alert configuration
+
+ Go and select one of the data feeds that you onboarded, select a metric within the feed, and open the metric detail page. You can create an 'alerting configuration' to subscribe to the anomalies that have been detected and receive them through email.
+
+Select the '+' button and choose the hook that you created, fill in the other fields, and select 'Save'. You have now successfully set up an email hook with a custom alert configuration, and any new anomalies will be escalated through the hook using the SMTP server.
+
+![Screenshot that applies an email hook to an alert configuration](../media/tutorial/apply-hook.png)
+++
+## Next steps
+
+Advance to the next article to learn how to write a valid query.
+> [!div class="nextstepaction"]
+> [Write a valid query](write-a-valid-query.md)
+
cognitive-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/tutorials/write-a-valid-query.md
+
+ Title: Write a query for Metrics Advisor data ingestion
+description: Learn how to onboard your data to Metrics Advisor.
++++ Last updated : 05/20/2021 ++
+
+
+
+# Tutorial: Write a valid query to onboard metrics data
+
++
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Write a valid data onboarding query
+> * Avoid common errors during onboarding
+
+
+## Prerequisites
+
+### Create a Metrics Advisor resource
+
+To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+
+## Data schema requirements
+++
+## <span id="ingestion-work">How does data ingestion work in Metrics Advisor?</span>
+
+When onboarding your metrics to Metrics Advisor, generally there are two ways:
+- Pre-aggregate your metrics into the expected schema and store data into certain files. Fill in the path template during onboarding, and Metrics Advisor will continuously grab new files from the path and perform detection on the metrics. This is a common practice for a data source like Azure Data Lake and Azure Blob Storage.
+- If you're ingesting data from data sources like Azure SQL Server, Azure Data Explorer, or other sources that support using a query script, you need to make sure you're properly constructing your query. This article teaches you how to write a valid query to onboard metric data as expected.
++
+### What is an interval?
+
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity. However, service performance metrics are often monitored at minute or hourly granularity. So the frequency of collecting metric data from sources differs.
+
+Metrics Advisor continuously grabs metrics data at each time interval, and **the interval is equal to the granularity of the metrics.** Every time Metrics Advisor runs, the query you have written ingests data for this specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
+
+![Illustration that describes what is an interval](../media/tutorial/what-is-interval.png)
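+
+As a concrete illustration (a sketch only, assuming a hypothetical daily-granularity metric), each ingestion run covers exactly one interval, and the interval end is the interval start plus one granularity unit:
+
+```python
+# Sketch only: how one ingestion interval relates to the metric granularity.
+from datetime import datetime, timedelta
+
+granularity = timedelta(days=1)              # assumed: a daily-granularity metric
+interval_start = datetime(2020, 9, 18)       # hypothetical @IntervalStart value
+interval_end = interval_start + granularity  # @IntervalEnd = @IntervalStart + one granularity
+
+# The query for this run should only return rows where
+# interval_start <= timestamp < interval_end.
+print(interval_start.isoformat(), interval_end.isoformat())
+```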
+
+## How to write a valid query?
+### <span id="use-parameters"> Use @IntervalStart and @IntervalEnd to limit query results</span>
+
+ To help achieve this, two parameters are provided for use within the query: **@IntervalStart** and **@IntervalEnd**.
+
+Every time the query runs, @IntervalStart and @IntervalEnd are automatically updated to the latest interval timestamp to get the corresponding metrics data. @IntervalEnd is always assigned as @IntervalStart plus one granularity unit.
+
+Here's an example of proper use of these two parameters with Azure SQL Server:
+
+```SQL
+SELECT [timestampColumnName] AS timestamp, [dimensionColumnName], [metricColumnName] FROM [sampleTable] WHERE [timestampColumnName] >= @IntervalStart and [timestampColumnName] < @IntervalEnd;
+```
+
+By writing the query script in this way, the timestamps of metrics should fall in the same interval for each query result. Metrics Advisor will automatically align the timestamps with the metrics' granularity.
+
+### <span id="use-aggregation"> Use aggregation functions to aggregate metrics</span>
+
+It's a common case that there are many columns within customers' data sources; however, not all of them make sense to be monitored or included as a dimension. Customers can use aggregation functions to aggregate metrics and only include meaningful columns as dimensions.
+
+Below is an example where there are more than 10 columns in a customer's data source, but only a few of them are meaningful and need to be included and aggregated into a metric to be monitored.
+
+| TS | Market | Device OS | Category | ... | Measure1 | Measure2 | Measure3 |
+| -|--|--|-|--|-|-|-|
+| 2020-09-18T12:23:22Z | New York | iOS | Sunglasses | ...| 43242 | 322 | 54546|
+| 2020-09-18T12:27:34Z | Beijing | Android | Bags | ...| 3333 | 126 | 67677 |
+| ...
+
+If a customer would like to monitor **'Measure1'** at **hourly granularity** and choose **'Market'** and **'Category'** as dimensions, below are examples of how to properly use the aggregation functions to achieve this:
+
+- SQL sample:
+
+ ```sql
+ SELECT dateadd(hour, datediff(hour, 0, TS),0) as NewTS
+ ,Market
+ ,Category
+ ,sum(Measure1) as M1
+ FROM [dbo].[SampleTable] where TS >= @IntervalStart and TS < @IntervalEnd
+ group by Market, Category, dateadd(hour, datediff(hour, 0, TS),0)
+ ```
+- Azure Data Explorer sample:
+
+ ```kusto
+ SampleTable
+ | where TS >= @IntervalStart and TS < @IntervalEnd
+ | summarize M1 = sum(Measure1) by Market, Category, NewTS = startofhour(TS)
+ ```
+
+> [!Note]
+> In the above case, the customer would like to monitor metrics at an hourly granularity, but the raw timestamp (TS) is not aligned. Within the aggregation statement, **the timestamp must be processed** to align it to the hour and generate a new timestamp column named 'NewTS'.
++
+## Common errors during onboarding
+
+- **Error:** Multiple timestamp values are found in query results
+
+ This is a common error if you haven't limited your query results to one interval. For example, if you're monitoring a metric at a daily granularity, you will get this error if your query returns results like this:
+
+ ![Screenshot that shows multiple timestamp values returned](../media/tutorial/multiple-timestamps.png)
+
+ There are multiple timestamp values, and they're not in the same metric interval (one day). Check [How does data ingestion work in Metrics Advisor?](#ingestion-work) to understand that Metrics Advisor grabs metrics data at each metric interval. Then make sure to use **@IntervalStart** and **@IntervalEnd** in your query to limit results to one interval. Check [Use @IntervalStart and @IntervalEnd to limit query results](#use-parameters) for detailed guidance and samples.
++
+- **Error:** Duplicate metric values are found on the same dimension combination within one metric interval
+
+ Within one interval, Metrics Advisor expects only one metric value for the same dimension combination. For example, if you're monitoring a metric at a daily granularity, you will get this error if your query returns results like this:
+
+ ![Screenshot that shows duplicate values returned](../media/tutorial/duplicate-values.png)
+
+ Refer to [Use aggregation functions to aggregate metrics](#use-aggregation) for detailed guidance and samples.
+
+
+## Next steps
+
+Advance to the next article to learn how to enable anomaly notifications.
+> [!div class="nextstepaction"]
+> [Enable anomaly notifications](enable-anomaly-notification.md)
+
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/whats-new.md
If you want to learn about the latest updates to Metrics Advisor client SDKs see
* [Python SDK change log](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/metricsadvisor/azure-ai-metricsadvisor/CHANGELOG.md) * [JavaScript SDK change log](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/metricsadvisor/ai-metrics-advisor/CHANGELOG.md)
+## June 2021
+
+### New articles
+
+* [Tutorial: Write a valid query to onboard metrics data](tutorials/write-a-valid-query.md)
+* [Tutorial: Enable anomaly notification in Metrics Advisor](tutorials/enable-anomaly-notification.md)
+
+### Updated articles
+
+* [Updated metrics onboarding flow](how-tos/onboard-your-data.md#add-a-data-feed-using-the-web-based-workspace)
+* [Enriched guidance when adding data feeds from different sources](data-feeds-from-different-sources.md)
+* [Updated new notification channel using Microsoft Teams](how-tos/alerts.md#teams-hook)
+* [Updated incident diagnostic experience](how-tos/diagnose-an-incident.md)
+ ## October 2020 ### New articles
If you want to learn about the latest updates to Metrics Advisor client SDKs see
### Updated articles
-* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](/azure/cognitive-services/metrics-advisor/faq#how-does-metric-advisor-build-an-incident-tree-for-multi-dimensional-metrics)
+* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](/azure/cognitive-services/metrics-advisor/faq#how-does-metric-advisor-build-a-diagnostic-tree-for-multi-dimensional-metrics)
cosmos-db Monitor Cosmos Db Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-cosmos-db-reference.md
For a list of all Azure Monitor log categories and links to associated schemas,
## Azure Monitor Logs tables
-Azure Cosmos DB uses Kusto tables from Azure Monitor Logs. You can query these tables with Log analytics. For a list of Kusto bales uses, see the [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-cosmos-db) article.
+Azure Cosmos DB uses Kusto tables from Azure Monitor Logs. You can query these tables with Log analytics. For a list of Kusto tables Cosmos DB uses, see the [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-cosmos-db) article.
## See Also - See [Monitoring Azure Cosmos DB](monitor-cosmos-db.md) for a description of monitoring Azure Cosmos DB.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
Previously updated : 07/14/2020 Last updated : 06/16/2021 # Integration runtime in Azure Data Factory
Selecting the right location for your Azure-SSIS IR is essential to achieve high
The following diagram shows location settings of Data Factory and its integration run times:
-![Integration runtime location](media/concepts-integration-runtime/integration-runtime-location.png)
## Determining which IR to use If one data factory activity associates with more than one type of integration runtime, it will resolve to one of them. The self-hosted integration runtime takes precedence over Azure integration runtime in Azure Data Factory managed virtual network. And the latter takes precedence over public Azure integration runtime.
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 03/17/2021 Last updated : 06/17/2021 # Copy and transform data in Azure Blob storage by using Azure Data Factory
The following properties are supported for Azure Blob storage under `storeSettin
| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file or blob name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No | | blockSizeInMB | Specify the block size, in megabytes, used to write data to block blobs. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is *between 4 MB and 100 MB*. <br/>By default, Data Factory automatically determines the block size based on your source store type and data. For nonbinary copy into Blob storage, the default block size is 100 MB so it can fit in (at most) 4.95 TB of data. It might be not optimal when your data is not large, especially when you use the self-hosted integration runtime with poor network connections that result in operation timeout or performance issues. You can explicitly specify a block size, while ensuring that `blockSizeInMB*50000` is big enough to store the data. Otherwise, the Copy activity run will fail. | No | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| metadata | Set custom metadata when copying to the sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](/azure/data-factory/copy-activity-preserve-metadata#preserve-metadata) is used, the specified metadata will union with or overwrite the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates to store the source files' last modified time. Applies to file-based sources with binary format only.<br/>- <b>Expression</b><br/>- <b>Static value</b> | No |
**Example:**
The following properties are supported for Azure Blob storage under `storeSettin
"type": "ParquetSink", "storeSettings":{ "type": "AzureBlobStorageWriteSettings",
- "copyBehavior": "PreserveHierarchy"
+ "copyBehavior": "PreserveHierarchy",
+ "metadata": [
+ {
+ "name": "testKey1",
+ "value": "value1"
+ },
+ {
+ "name": "testKey2",
+ "value": "value2"
+ },
+ {
+ "name": "lastModifiedKey",
+ "value": "$$LASTMODIFIED"
+ }
+ ]
} } }
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 03/17/2021 Last updated : 06/17/2021 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory
The following properties are supported for Data Lake Storage Gen2 under `storeSe
| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No | | blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB so as to fit in at most 4.95-TB data. It may be not optimal when your data is not large, especially when you use Self-hosted Integration Runtime with poor network resulting in operation timeout or performance issue. You can explicitly specify a block size, while ensure blockSizeInMB*50000 is big enough to store the data, otherwise copy activity run will fail. | No | | maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| metadata | Set custom metadata when copying to the sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](/azure/data-factory/copy-activity-preserve-metadata#preserve-metadata) is used, the specified metadata will union with or overwrite the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates to store the source files' last modified time. Applies to file-based sources with binary format only.<br/>- <b>Expression</b><br/>- <b>Static value</b> | No |
**Example:**
The following properties are supported for Data Lake Storage Gen2 under `storeSe
"type": "ParquetSink", "storeSettings":{ "type": "AzureBlobFSWriteSettings",
- "copyBehavior": "PreserveHierarchy"
+ "copyBehavior": "PreserveHierarchy",
+ "metadata": [
+ {
+ "name": "testKey1",
+ "value": "value1"
+ },
+ {
+ "name": "testKey2",
+ "value": "value2"
+ },
+ {
+ "name": "lastModifiedKey",
+ "value": "$$LASTMODIFIED"
+ }
+ ]
} } }
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Previously updated : 02/10/2021 Last updated : 06/16/2021
To create and set up a self-hosted integration runtime, use the following proced
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $selfHostedIntegrationRuntimeName ```-
+> [!NOTE]
+> Run PowerShell command in Azure government, please see [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md).
### Create a self-hosted IR via Azure Data Factory UI Use the following steps to create a self-hosted IR using Azure Data Factory UI. 1. On the **Let's get started** page of Azure Data Factory UI, select the [Manage tab](./author-management-hub.md) from the leftmost pane.
- ![The home page Manage button](media/doc-common-process/get-started-page-manage-button.png)
+ :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="The home page Manage button":::
1. Select **Integration runtimes** on the left pane, and then select **+New**.
- ![Create an integration runtime](media/doc-common-process/manage-new-integration-runtime.png)
+ :::image type="content" source="media/doc-common-process/manage-new-integration-runtime.png" alt-text="Create an integration runtime":::
1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**. 1. On the following page, select **Self-Hosted** to create a Self-Hosted IR, and then select **Continue**.
- ![Create a selfhosted IR](media/create-self-hosted-integration-runtime/new-selfhosted-integration-runtime.png)
+ :::image type="content" source="media/create-self-hosted-integration-runtime/new-selfhosted-integration-runtime.png" alt-text="Create a selfhosted IR":::
1. Enter a name for your IR, and select **Create**. 1. On the **Integration runtime setup** page, select the link under **Option 1** to open the express setup on your computer. Or follow the steps under **Option 2** to set up manually. The following instructions are based on manual setup:
- ![Integration runtime setup](media/create-self-hosted-integration-runtime/integration-runtime-setting-up.png)
+ :::image type="content" source="media/create-self-hosted-integration-runtime/integration-runtime-setting-up.png" alt-text="Integration runtime setup":::
1. Copy and paste the authentication key. Select **Download and install integration runtime**.
Use the following steps to create a self-hosted IR using Azure Data Factory UI.
1. On the **Register Integration Runtime (Self-hosted)** page, paste the key you saved earlier, and select **Register**.
- ![Register the integration runtime](media/create-self-hosted-integration-runtime/register-integration-runtime.png)
+ :::image type="content" source="media/create-self-hosted-integration-runtime/register-integration-runtime.png" alt-text="Register the integration runtime":::
1. On the **New Integration Runtime (Self-hosted) Node** page, select **Finish**. 1. After the self-hosted integration runtime is registered successfully, you see the following window:
- ![Successful registration](media/create-self-hosted-integration-runtime/registered-successfully.png)
+ :::image type="content" source="media/create-self-hosted-integration-runtime/registered-successfully.png" alt-text="Successful registration":::
### Set up a self-hosted IR on an Azure VM via an Azure Resource Manager template
When processor usage is high and available memory is low on the self-hosted IR,
When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run. You might also want to scale up when activities time out because the self-hosted IR is overloaded. As shown in the following image, you can increase the maximum capacity for a node:
-![Increase the number of concurrent jobs that can run on a node](media/create-self-hosted-integration-runtime/scale-up-self-hosted-IR.png)
### TLS/SSL certificate requirements
Here are the requirements for the TLS/SSL certificate that you use to secure com
> > Data movement in transit from a self-hosted IR to other data stores always happens within an encrypted channel, regardless of whether or not this certificate is set.
+### Credential Sync
+If you don't store credentials or secret values in an Azure Key Vault, the credentials or secret values will be stored on the machines where your self-hosted integration runtime is located. Each node will have a copy of the credential with a certain version. To make all nodes work together, the version number should be the same for all nodes.
+ ## Proxy server considerations If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase.
-![Specify the proxy](media/create-self-hosted-integration-runtime/specify-proxy.png)
When configured, the self-hosted integration runtime uses the proxy server to connect to the cloud service's source and destination (which use the HTTP or HTTPS protocol). This is why you select **Change link** during initial setup.
-![Set the proxy](media/create-self-hosted-integration-runtime/set-http-proxy.png)
There are three configuration options:
After you register the self-hosted integration runtime, if you want to view or u
You can use the configuration manager tool to view and update the HTTP proxy.
-![View and update the proxy](media/create-self-hosted-integration-runtime/view-proxy.png)
> [!NOTE] > If you set up a proxy server with NTLM authentication, the integration runtime host service runs under the domain account. If you later change the password for the domain account, remember to update the configuration settings for the service and restart the service. Because of this requirement, we suggest that you access the proxy server by using a dedicated domain account that doesn't require you to update the password frequently.
There are two firewalls to consider:
- The *corporate firewall* that runs on the central router of the organization - The *Windows firewall* that is configured as a daemon on the local machine where the self-hosted integration runtime is installed
-![The firewalls](media/create-self-hosted-integration-runtime/firewall.png)
At the corporate firewall level, you need to configure the following domains and outbound ports:
One required domain and port that need to be put in the allowlist of your firewa
2. In Edit page, select **Nodes**. 3. Select **View Service URLs** to get all FQDNs.
- ![Azure Relay URLs](media/create-self-hosted-integration-runtime/Azure-relay-url.png)
+ :::image type="content" source="media/create-self-hosted-integration-runtime/Azure-relay-url.png" alt-text="Azure Relay URLs":::
4. You can add these FQDNs in the allowlist of firewall rules.
+> [!NOTE]
+> For details about the Azure Relay connections protocol, see [Azure Relay Hybrid Connections protocol](../azure-relay/relay-hybrid-connections-protocol.md).
+ ### Copy data from a source to a sink Ensure that you properly enable firewall rules on the corporate firewall, the Windows firewall of the self-hosted integration runtime machine, and the data store itself. Enabling these rules lets the self-hosted integration runtime successfully connect to both source and sink. Enable rules for each data store that is involved in the copy operation.
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
Previously updated : 06/10/2021 Last updated : 06/16/2021 # Azure Private Link for Azure Data Factory
With the support of Private Link for Azure Data Factory, you can:
The communications to Azure Data Factory service go through Private Link and help provide secure private connectivity.
-![Diagram of Private Link for Azure Data Factory architecture.](./media/data-factory-private-link/private-link-architecture.png)
Enabling the Private Link service for each of the preceding communication channels offers the following functionality: - **Supported**:
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 07/15/2020 Last updated : 06/16/2021 # Azure Data Factory Managed Virtual Network (preview)
Benefits of using Managed Virtual Network:
>Existing public Azure integration runtime can't switch to Azure integration runtime in Azure Data Factory managed virtual network and vice versa.
-![ADF Managed Virtual Network architecture](./media/managed-vnet/managed-vnet-architecture-diagram.png)
## Managed private endpoints Managed private endpoints are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. Azure Data Factory manages these private endpoints on your behalf.
-![New Managed private endpoint](./media/tutorial-copy-data-portal-private/new-managed-private-endpoint.png)
Azure Data Factory supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
Private endpoint uses a private IP address in the managed Virtual Network to eff
A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Data Factory. An approval workflow is initiated. The private link resource owner is responsible to approve or reject the connection.
-![Manage private endpoint](./media/tutorial-copy-data-portal-private/manage-private-endpoint.png)
If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection.
-![Approve Managed private endpoint](./media/tutorial-copy-data-portal-private/approve-private-endpoint.png)
Only a Managed private endpoint in an approved state can send traffic to a given private link resource. ## Interactive Authoring Interactive authoring capabilities is used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime which is in ADF-managed virtual network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed which will take more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled after 60 minutes of the last interactive authoring operation.
-![Interactive authoring](./media/managed-vnet/interactive-authoring.png)
## Activity execution time using managed virtual network By design, Azure integration runtime in managed virtual network takes longer queue time than public Azure integration runtime as we are not reserving one compute node per data factory, so there is a warm up for each activity to start, and it occurs primarily on virtual network join rather than Azure integration runtime. For non-copy activities including pipeline activity and external activity, there is a 60 minutes Time To Live (TTL) when you trigger them at the first time. Within TTL, the queue time is shorter because the node is already warmed up.
$privateEndpointResourceId = "subscriptions/${subscriptionId}/resourceGroups/${r
$integrationRuntimeResourceId = "subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/integrationRuntimes/${integrationRuntimeName}" # Create managed Virtual Network resource
-New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${vnetResourceId}"
+New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${vnetResourceId}" -Properties
# Create managed private endpoint resource New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${privateEndpointResourceId}" -Properties @{
To access on premises data sources from managed Virtual Network using Private En
- When you create a Linked Service for Azure Key Vault, there is no Azure Integration Runtime reference. So you can't create Private Endpoint during Linked Service creation of Azure Key Vault. But when you create Linked Service for data stores which references Azure Key Vault Linked Service and this Linked Service references Azure Integration Runtime with Managed Virtual Network enabled, then you are able to create a Private Endpoint for the Azure Key Vault Linked Service during the creation. - **Test connection** operation for Linked Service of Azure Key Vault only validates the URL format, but doesn't do any network operation. - The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for Azure Key Vault.
-![Private Endpoint for AKV](./media/managed-vnet/akv-pe.png)
+ ## Next steps
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-auto-update.md
Previously updated : 12/25/2020 Last updated : 06/16/2021 # Self-hosted integration runtime auto-update and expire notification
Last updated 12/25/2020
This article will describe how to let self-hosted integration runtime auto-update to the latest version and how ADF manages the versions of self-hosted integration runtime. ## Self-hosted Integration Runtime Auto-update
-Generally, when you install a self-hosted integration runtime in your local machine or an Azure VM, you have two options to manage the version of self-hosted integration runtime: auto-update or maintain manually. Typically, ADF releases two new versions of self-hosted integration runtime every month which includes new feature release, bug fix or enhancement. So we recommend users to update to the latest version in order to get the newest feature and enhancement.
+Generally, when you install a self-hosted integration runtime on your local machine or an Azure VM, you have two options to manage the version of the self-hosted integration runtime: auto-update or manual maintenance. Typically, ADF releases two new versions of the self-hosted integration runtime every month, which include new features, bug fixes, and enhancements. So we recommend that you update to a newer version to get the latest features and enhancements.
-The most convenient way is to enable auto-update when you create or edit self-hosted integration runtime. Then it will be automatically update to the latest version. You can also schedule the update at the most suitable time slot as you wish.
+The most convenient way is to enable auto-update when you create or edit the self-hosted integration runtime. The self-hosted integration runtime will then be automatically updated to a newer version. You can also schedule the update for the most suitable time slot as you wish.
-![Enable auto-update](media/create-self-hosted-integration-runtime/shir-auto-update.png)
You can check the last update datetime in your self-hosted integration runtime client.
-![Screenshot of checking the update time](media/create-self-hosted-integration-runtime/shir-auto-update-2.png)
> [!NOTE]
-> To ensure the stability of self-hosted integration runtime, although we release two versions, we will only update it automatically once every month. So sometimes you will find that the auto-updated version is the previous version of the actual latest version. If you want to get the latest version, you can go to [download center](https://www.microsoft.com/download/details.aspx?id=39717).
+> If you have multiple self-hosted integration runtime nodes, there is no downtime during auto-update. The auto-update happens on one node first while the others are working on tasks. When the first node finishes the update, it takes over the remaining tasks while the other nodes are updating. If you only have one self-hosted integration runtime node, there is some downtime during the auto-update.
+
+## Auto-update version vs latest version
+To ensure the stability of the self-hosted integration runtime, although we release two versions each month, we only push one version for auto-update every month. So sometimes you will find that the auto-update version is one version behind the actual latest version. If you want the latest version, you can get it from the [download center](https://www.microsoft.com/download/details.aspx?id=39717).
+
+The self-hosted integration runtime **Auto update** page in the ADF portal shows the newer version if the current version is not the latest. When your self-hosted integration runtime is online, this version is the auto-update version, and it will automatically update your self-hosted integration runtime at the scheduled time. But if your self-hosted integration runtime is offline, the page only shows the latest version.
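To check which version a self-hosted integration runtime is currently running without opening the portal, the runtime status can typically be queried from the command line. A minimal sketch, assuming the `datafactory` Azure CLI extension; the exact shape of the returned status object isn't confirmed by this article.

```azurecli
# Sketch: query the status of a self-hosted integration runtime; the response
# typically includes version details (assumes the "datafactory" extension).
az datafactory integration-runtime get-status \
    --resource-group "<RESOURCE GROUP NAME>" \
    --factory-name "<DATA FACTORY NAME>" \
    --name "<SELF-HOSTED IR NAME>"
```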
## Self-hosted Integration Runtime Expire Notification If you want to manually control which version of the self-hosted integration runtime to use, you can disable auto-update and install it manually. Each version of the self-hosted integration runtime expires after one year. The expiration message is shown in the ADF portal and the self-hosted integration runtime client **90 days** before expiration.
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-functions.md
Last updated 04/16/2021
Data Wrangling in Azure Data Factory allows you to do code-free agile data preparation and wrangling at cloud scale by translating Power Query ```M``` scripts into Data Flow script. ADF integrates with [Power Query Online](/powerquery-m/power-query-m-reference) and makes Power Query ```M``` functions available for data wrangling via Spark execution using the data flow Spark infrastructure. > [!NOTE]
-> Power Query in ADF is currently avilable in public preview
+> Power Query in ADF is currently available in public preview
Currently not all Power Query M functions are supported for data wrangling despite being available during authoring. While building your mash-ups, you'll be prompted with the following error message if a function isn't supported:
event-grid Create Topic Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/create-topic-subscription.md
description: This article describes how to create an event grid topic on a Kuber
Previously updated : 05/25/2021 Last updated : 06/17/2021
In this quickstart, you'll create a topic in Event Grid on Kubernetes, create a
1. [Connect your Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md). 1. [Install Event Grid extension on Kubernetes cluster](install-k8s-extension.md). This extension deploys Event Grid to a Kubernetes cluster.
-1. [Create a custom location](../../azure-arc/kubernetes/custom-locations.md). A custom location represents a namespace in the cluster and it's the place where topics and event subscriptions are deployed.
++
+## Create a custom location
+As an Azure location extension, a custom location lets you use your Azure Arc-enabled Kubernetes cluster as a target location for deploying resources such as Event Grid topics. A custom location represents a namespace in the cluster and it's the place where topics and event subscriptions are deployed. In this section, you'll create a custom location.
+
+1. Declare the following variables to hold values of the Azure Arc cluster, resource group, and custom location names. Copy these statements to an editor, replace the values, and then copy/paste to the bash window.
+
+ ```azurecli-interactive
+ resourcegroupname="<AZURE RESOURCE GROUP NAME>"
+ arcclustername="<AZURE ARC CLUSTER NAME>"
+ customlocationname="<CUSTOM LOCATION NAME>"
+ ```
+1. Get the resource ID of the Azure Arc connected cluster. Update values for the Azure Arc cluster name and resource group parameters before running the command.
+
+ ```azurecli-interactive
+ hostresourceid=$(az connectedk8s show -n $arcclustername -g $resourcegroupname --query id -o tsv)
+ ```
+1. Get the Event Grid extension resource ID. This step assumes that the name you gave for the Event Grid extension is **eventgrid-ext**. Update Azure Arc cluster and resource group names before running the command.
+
+ ```azurecli-interactive
+ clusterextensionid=$(az k8s-extension show --name eventgrid-ext --cluster-type connectedClusters -c $arcclustername -g $resourcegroupname --query id -o tsv)
+ ```
+1. Create a custom location using the above two values. Update custom location and resource group names before running the command.
+
+ ```azurecli-interactive
+ az customlocation create -n $customlocationname -g $resourcegroupname --namespace arc --host-resource-id $hostresourceid --cluster-extension-ids $clusterextensionid
+ ```
+1. Get the resource ID of the custom location. Update the custom location name before running the command.
+
+ ```azurecli-interactive
+ customlocationid=$(az customlocation show -n $customlocationname -g $resourcegroupname --query id -o tsv)
+ ```
+
+ For more information on creating custom locations, see [Create and manage custom locations on Azure Arc enabled Kubernetes](../../azure-arc/kubernetes/custom-locations.md).
## Create a topic
+In this section, you'll create a topic in the custom location you created in the previous step. Update resource group and event grid topic names before running the command. Update the location if you are using a location other than East US.
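If you haven't already declared a `$region` variable elsewhere, the topic-creation command later in this section expects one. A minimal declaration is shown below, assuming the default East US region used in this quickstart.

```azurecli
region="eastus"
```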
-### Azure CLI
-Run the following Azure CLI command to create a topic:
+1. Declare a variable to hold the topic name.
-```azurecli-interactive
-az eventgrid topic create --name <EVENT GRID TOPIC NAME> \
- --resource-group <RESOURCE GROUP NAME> \
- --location <REGION> \
- --kind azurearc \
- --extended-location-name /subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ExtendedLocation/customLocations/<CUSTOM LOCATION NAME> \
- --extended-location-type customlocation \
- --input-schema CloudEventSchemaV1_0
-```
-Specify values for the place holders before running the command:
-- Name of the Azure resource group in which you want the event grid topic to created. -- Name for the topic. -- Region for the topic.-- In the resource ID of the custom location, specify the following values:
- - ID of the Azure subscription in which the custom location exists.
- - Name of the resource group that contains the custom location.
- - Name of the custom location
+ ```azurecli-interactive
+ topicname="<TOPIC NAME>"
+ ```
+1. Run the following command to create a topic.
-For more information about the CLI command, see [`az eventgrid topic create`](/cli/azure/eventgrid/topic#az_eventgrid_topic_create).
+ ```azurecli-interactive
+ az eventgrid topic create -g $resourcegroupname --name $topicname --kind azurearc --extended-location-name $customlocationid --extended-location-type customlocation --input-schema CloudEventSchemaV1_0 --location $region
+ ```
+
+ For more information about the CLI command, see [`az eventgrid topic create`](/cli/azure/eventgrid/topic#az_eventgrid_topic_create).
## Create a message endpoint+ Before you create a subscription for the custom topic, create an endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub. 1. In the article page, select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters.
Before you create a subscription for the custom topic, create an endpoint for th
## Create a subscription Subscribers can register for events published to a topic. To receive any event, you'll need to create an Event Grid subscription for a topic of interest. An event subscription defines the destination to which those events are sent. To learn about all the destinations or handlers supported, see [Event handlers](event-handlers.md). -
-### Azure CLI
-To create an event subscription with a WebHook (HTTPS endpoint) destination, run the following Azure CLI command:
+To create an event subscription with a WebHook (HTTPS endpoint) destination, enter a name for the event subscription, update the name of the web site, and run the following command.
```azurecli-interactive
-az eventgrid event-subscription create --name <EVENT SUBSCRIPTION NAME> \
- --source-resource-id /subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<TOPIC'S RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<TOPIC NAme> \
- --endpoint https://<SITE NAME>.azurewebsites.net/api/updates
+topicid=$(az eventgrid topic show --name $topicname --resource-group $resourcegroupname --query id -o tsv)
+az eventgrid event-subscription create --name <EVENT SUBSCRIPTION NAME> --source-resource-id $topicid --endpoint https://<SITE NAME>.azurewebsites.net/api/updates
```
-Specify values for the place holders before running the command:
-- Name of the event subscription to be created. -- In the **resource ID of the topic**, specify the following values:
- - ID of the Azure subscription in which you want the subscription to be created.
- - Name of the resource group that contains the topic.
- - Name of the topic.
-- For the endpoint, specify the name of the Event Grid Viewer web site.
-
For more information about the CLI command, see [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create). - ## Send events to the topic 1. Run the following command to get the **endpoint** for the topic: After you copy and paste the command, update the **topic name** and **resource group name** before you run the command. You'll publish sample events to this topic endpoint. ```azurecli
- az eventgrid topic show --name <topic name> -g <resource group name> --query "endpoint" --output tsv
+ az eventgrid topic show --name $topicname -g $resourcegroupname --query "endpoint" --output tsv
``` 2. Run the following command to get the **key** for the custom topic: After you copy and paste the command, update the **topic name** and **resource group** name before you run the command. It's the primary key of the topic. To get this key from the Azure portal, switch to the **Access keys** tab of the **Event Grid Topic** page. To be able post an event to a custom topic, you need the access key. ```azurecli
- az eventgrid topic key list --name <topic name> -g <resource group name> --query "key1" --output tsv
+ az eventgrid topic key list --name $topicname -g $resourcegroupname --query "key1" --output tsv
``` 1. Run the following **curl** command to post the event. Specify the endpoint URL and key from steps 1 and 2 before running the command.
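The curl command itself isn't shown in this excerpt. Below is a minimal sketch of posting a single CloudEvents 1.0 formatted event, assuming `$endpoint` and `$key` hold the endpoint URL and key retrieved in steps 1 and 2; the event fields are illustrative.

```bash
# Illustrative event payload in CloudEvents 1.0 (batch) format.
event='[{"specversion":"1.0","type":"com.example.testevent","source":"/example/source","id":"event-id-1","time":"2021-06-18T00:00:00Z","subject":"example","datacontenttype":"application/json","data":{"message":"Hello from curl"}}]'

# Post the event to the topic endpoint, authenticating with the topic key.
curl -X POST -H "Content-Type: application/cloudevents-batch+json" -H "aeg-sas-key: $key" -d "$event" "$endpoint"
```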
For more information about the CLI command, see [`az eventgrid event-subscriptio
```yml apiVersion: v1
- dnsPolicy: ClusterFirstWithHostNet
- hostNetwork: true
kind: Pod
- metadata:
- name: test-pod
- spec:
- containers:
- -
- name: nginx
- emptyDir: {}
- image: nginx
- volumeMounts:
- -
- mountPath: /usr/share/nginx/html
- name: shared-data
- volumes:
- -
- name: shared-data
+ metadata:
+ name: test-pod2
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+ hostNetwork: true
+ dnsPolicy: ClusterFirstWithHostNet
``` 1. Create the pod. ```bash
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/resize-images-on-storage-blob-upload-event.md
You must have completed the previous Blob storage tutorial: [Upload image data i
You need an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). This tutorial doesn't work with the **free** subscription.
+If you've not previously registered the Event Grid resource provider in your subscription, make sure it's registered.
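As a quick check, you can query the current registration state before registering; a minimal Azure CLI sketch is shown below.

```azurecli
az provider show --namespace Microsoft.EventGrid --query "registrationState" --output tsv
```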
+
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.EventGrid
+```
+
+# [Azure CLI](#tab/azure-cli)
+ [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)] If you choose to install and use the CLI locally, this tutorial requires the Azure CLI version 2.0.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you are not using Cloud Shell, you must first sign in using `az login`.
-If you've not previously registered the Event Grid resource provider in your subscription, make sure it's registered.
-
-```bash
+```azurecli
az provider register --namespace Microsoft.EventGrid ```
-```powershell
-az provider register --namespace Microsoft.EventGrid
-```
+ ## Create an Azure Storage account
-Azure Functions requires a general storage account. In addition to the Blob storage account you created in the previous tutorial, create a separate general storage account in the resource group by using the [az storage account create](/cli/azure/storage/account) command. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
-
-1. Set a variable to hold the name of the resource group that you created in the previous tutorial.
-
- ```bash
- resourceGroupName="myResourceGroup"
- ```
+Azure Functions requires a general storage account. In addition to the Blob storage account you created in the previous tutorial, create a separate general storage account in the resource group. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
- ```powershell
- $resourceGroupName="myResourceGroup"
- ```
+Set variables to hold the name of the resource group that you created in the previous tutorial, the location for resources to be created, and the name of the new storage account that Azure Functions requires. Then, create the storage account for the Azure function.
-1. Set a variable to hold the location for resources to be created.
+# [PowerShell](#tab/azure-powershell)
- ```bash
- location="eastus"
- ```
+Use the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command.
- ```powershell
- $location="eastus"
- ```
+```powershell
+$resourceGroupName="myResourceGroup"
+$location="eastus"
+$functionstorage="<name of the storage account to be used by the function>"
-1. Set a variable for the name of the new storage account that Azure Functions requires.
+New-AzStorageAccount -ResourceGroupName $resourceGroupName -AccountName $functionstorage -Location $location -SkuName Standard_LRS -Kind StorageV2
+```
- ```bash
- functionstorage="<name of the storage account to be used by the function>"
- ```
+# [Azure CLI](#tab/azure-cli)
- ```powershell
- $functionstorage="<name of the storage account to be used by the function>"
- ```
+Use the [az storage account create](/cli/azure/storage/account) command.
-1. Create the storage account for the Azure function.
+```azurecli
+resourceGroupName="myResourceGroup"
+location="eastus"
+functionstorage="<name of the storage account to be used by the function>"
- ```bash
- az storage account create --name $functionstorage --location $location \
- --resource-group $resourceGroupName --sku Standard_LRS --kind StorageV2
- ```
+az storage account create --name $functionstorage --location $location --resource-group $resourceGroupName --sku Standard_LRS --kind StorageV2
+```
- ```powershell
- az storage account create --name $functionstorage --location $location `
- --resource-group $resourceGroupName --sku Standard_LRS --kind StorageV2
- ```
+ ## Create a function app
-You must have a function app to host the execution of your function. The function app provides an environment for serverless execution of your function code. Create a function app by using the [az functionapp create](/cli/azure/functionapp) command.
+You must have a function app to host the execution of your function. The function app provides an environment for serverless execution of your function code.
In the following command, provide your own unique function app name. The function app name is used as the default DNS domain for the function app, and so the name needs to be unique across all apps in Azure.
-1. Specify a name for the function app that's to be created.
+Specify a name for the function app that's to be created, then create the Azure function.
- ```bash
- functionapp="<name of the function app>"
- ```
+# [PowerShell](#tab/azure-powershell)
- ```powershell
- $functionapp="<name of the function app>"
- ```
+Create a function app by using the [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) command.
-1. Create the Azure function.
+```powershell
+$functionapp="<name of the function app>"
+
+New-AzFunctionApp `
+  -Location $location `
+  -Name $functionapp `
+  -ResourceGroupName $resourceGroupName `
+  -Runtime PowerShell `
+  -StorageAccountName $functionstorage
+```
+
+# [Azure CLI](#tab/azure-cli)
- ```bash
- az functionapp create --name $functionapp --storage-account $functionstorage \
- --resource-group $resourceGroupName --consumption-plan-location $location \
- --functions-version 2
- ```
+Create a function app by using the [az functionapp create](/cli/azure/functionapp) command.
- ```powershell
- az functionapp create --name $functionapp --storage-account $functionstorage `
- --resource-group $resourceGroupName --consumption-plan-location $location `
- --functions-version 2
- ```
+```azurecli
+functionapp="<name of the function app>"
+
+az functionapp create --name $functionapp --storage-account $functionstorage \
+ --resource-group $resourceGroupName --consumption-plan-location $location \
+ --functions-version 2
+```
++ Now configure the function app to connect to the Blob storage account you created in the [previous tutorial][previous-tutorial]. ## Configure the function app
-The function needs credentials for the Blob storage account, which are added to the application settings of the function app using the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command.
+The function needs credentials for the Blob storage account, which are added to the application settings of the function app using either the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) or [Update-AzFunctionAppSetting](/powershell/module/az.functions/update-azfunctionappsetting) command.
# [\.NET v12 SDK](#tab/dotnet)
-```bash
+```azurecli
storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName \ --name $blobStorageAccount --query connectionString --output tsv)
az functionapp config appsettings set --name $functionapp --resource-group $reso
$storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName ` --name $blobStorageAccount --query connectionString --output tsv)
-az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName `
- --settings AzureWebJobsStorage=$storageConnectionString THUMBNAIL_CONTAINER_NAME=thumbnails `
- THUMBNAIL_WIDTH=100 FUNCTIONS_EXTENSION_VERSION=~2
+Update-AzFunctionAppSetting -Name $functionapp -ResourceGroupName $resourceGroupName -AppSetting @{ 'AzureWebJobsStorage' = $storageConnectionString; 'THUMBNAIL_CONTAINER_NAME' = 'thumbnails'; 'THUMBNAIL_WIDTH' = '100'; 'FUNCTIONS_EXTENSION_VERSION' = '~2' }
``` # [Node.js v10 SDK](#tab/nodejsv10)
-```bash
+```azurecli
blobStorageAccountKey=$(az storage account keys list -g $resourceGroupName \ -n $blobStorageAccount --query [0].value --output tsv)
az functionapp config appsettings set --name $functionapp --resource-group $reso
AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString ```
-```powershell
-$blobStorageAccountKey=$(az storage account keys list -g $resourceGroupName `
- -n $blobStorageAccount --query [0].value --output tsv)
-
-$storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName `
- --name $blobStorageAccount --query connectionString --output tsv)
-
-az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName `
- --settings FUNCTIONS_EXTENSION_VERSION=~2 BLOB_CONTAINER_NAME=thumbnails `
- AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount `
- AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey `
- AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString
-```
- The `FUNCTIONS_EXTENSION_VERSION=~2` setting makes the function app run on version 2.x of the Azure Functions runtime.
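To verify that the settings were applied, you can list the function app's application settings; a minimal sketch with the Azure CLI follows (the JMESPath filter is illustrative).

```azurecli
az functionapp config appsettings list --name $functionapp --resource-group $resourceGroupName \
    --query "[?name=='FUNCTIONS_EXTENSION_VERSION']" --output table
```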
You can now deploy a function code project to this function app.
The sample C# resize function is available on [GitHub](https://github.com/Azure-Samples/function-image-upload-resize). Deploy this code project to the function app by using the [az functionapp deployment source config](/cli/azure/functionapp/deployment/source) command.
-```bash
+```azurecli
az functionapp deployment source config --name $functionapp --resource-group $resourceGroupName \ --branch master --manual-integration \ --repo-url https://github.com/Azure-Samples/function-image-upload-resize
az functionapp deployment source config --name $functionapp --resource-group $re
The sample Node.js resize function is available on [GitHub](https://github.com/Azure-Samples/storage-blob-resize-function-node-v10). Deploy this Functions code project to the function app by using the [az functionapp deployment source config](/cli/azure/functionapp/deployment/source) command.
-```bash
+```azurecli
az functionapp deployment source config --name $functionapp \ --resource-group $resourceGroupName --branch master --manual-integration \ --repo-url https://github.com/Azure-Samples/storage-blob-resize-function-node-v10
Advance to part three of the Storage tutorial series to learn how to secure acce
+ To try another tutorial that features Azure Functions, see [Create a function that integrates with Azure Logic Apps](../azure-functions/functions-twitter-email.md). [previous-tutorial]: ../storage/blobs/storage-upload-process-images.md-
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Each virtual network can have only one virtual network gateway per gateway type.
[!INCLUDE [expressroute-gwsku-include](../../includes/expressroute-gwsku-include.md)] If you want to upgrade your gateway to a more powerful gateway SKU, in most cases you can use the 'Resize-AzVirtualNetworkGateway' PowerShell cmdlet. This will work for upgrades to Standard and HighPerformance SKUs. However, to upgrade to the UltraPerformance SKU, you will need to recreate the gateway. Recreating a gateway incurs downtime.
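The cmdlet named above is PowerShell; as a hedged alternative sketch, a comparable in-place resize can usually be done with the Azure CLI, assuming `az network vnet-gateway update` accepts the target ExpressRoute gateway SKU.

```azurecli
# Sketch: resize an existing ExpressRoute virtual network gateway to a larger SKU.
az network vnet-gateway update --name "<GATEWAY NAME>" --resource-group "<RESOURCE GROUP NAME>" --sku HighPerformance
```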
+### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU
+The following table shows the features supported across each gateway type.
-### <a name="aggthroughput"></a>Estimated performances by gateway SKU
-The following table shows the gateway types and the estimated performances. This table applies to both the Resource Manager and classic deployment models.
+|**Gateway SKU**|**VPN Gateway and ExpressRoute coexistence**|**FastPath**|**Max Number of Circuit Connections**|
+| | | | |
+|**Standard SKU/ERGw1Az**|No|No|4|
|**High Perf SKU/ERGw2Az**|Yes|No|8|
|**Ultra Performance SKU/ErGw3Az**|Yes|Yes|16|
+### <a name="aggthroughput"></a>Estimated performances by gateway SKU
+The following table shows the gateway types and the estimated performance scale numbers. These numbers are derived from the following testing conditions and represent the maximum supported limits. Actual performance may vary, depending on how closely traffic replicates the testing conditions.
+
+### Testing conditions
+##### **Standard** #####
+
+- Circuit bandwidth: 1Gbps
+- Number of routes advertised by the Gateway: 500
+- Number of routes learned: 4,000
+##### **High Performance** #####
+
+- Circuit bandwidth: 1Gbps
+- Number of routes advertised by the Gateway: 500
+- Number of routes learned: 9,500
+##### **Ultra Performance** #####
+
+- Circuit bandwidth: 1Gbps
+- Number of routes advertised by the Gateway: 500
+- Number of routes learned: 9,500
+
+ This table applies to both the Resource Manager and classic deployment models.
+
+|**Gateway SKU**|**Connections per second**|**Mega-Bits per second**|**Packets per second**|**Supported number of VMs in the Virtual Network**|
+| | | | | |
+|**Standard**|7,000|1,000|100,000|2,000|
+|**High Performance**|14,000|2,000|250,000|4,500|
+|**Ultra Performance**|16,000|10,000|1,000,000|11,000|
> [!IMPORTANT] > Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
->
->
-> [!NOTE]
+>[!NOTE]
> The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network is 4 for all gateways. >
->
-- ## <a name="gwsub"></a>Gateway subnet
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-caching.md
Azure Front Door delivers large files without a cap on file size. Front Door use
After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
-For more information on the byte-range request, read [RFC 7233](https://web.archive.org/web/20171009165003/http://www.rfc-base.org/rfc-7233.html).
+For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
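To observe byte-range behavior from a client, you can send a `Range` header and inspect the `206 Partial Content` response; a minimal sketch follows (the host and path are placeholders).

```bash
# Request only the first 1 MB of a large file and print the response headers.
curl -s -D - -o /dev/null -H "Range: bytes=0-1048575" "https://<your-front-door-hostname>/<path-to-large-file>"
```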
Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the backend's ability to support byte-range requests. If the backend doesn't support byte-range requests, this optimization isn't effective. ## File compression
frontdoor Concept Caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-caching.md
Front Door Standard/Premium (Preview) delivers large files without a cap on file
After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
-For more information on the byte-range request, read [RFC 7233](https://web.archive.org/web/20171009165003/http://www.rfc-base.org/rfc-7233.html).
+For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, this optimization isn't effective. ## File compression
Cache duration can be configured in Rule Set. The cache duration set via Rules S
## Next steps * Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md)
-* Learn more about [Rule Set Actions](concept-rule-set-actions.md)
+* Learn more about [Rule Set Actions](concept-rule-set-actions.md)
hdinsight Apache Hadoop Etl At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md
Sqoop uses MapReduce to import and export the data, to provide parallel operatio
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Its flexible architecture is based on streaming data flows. Flume is robust and fault-tolerant with tunable reliability mechanisms. It has many failover and recovery mechanisms. Flume uses a simple extensible data model that allows for online, analytic application.
-Apache Flume can't be used with Azure HDInsight. But, an on-premises Hadoop installation can use Flume to send data to either Azure Blob storage or Azure Data Lake Storage. For more information, see [Using Apache Flume with HDInsight](https://web.archive.org/web/20190217104751/https://blogs.msdn.microsoft.com/bigdatasupport/2014/03/18/using-apache-flume-with-hdinsight/).
+Apache Flume can't be used with Azure HDInsight. But, an on-premises Hadoop installation can use Flume to send data to either Azure Blob storage or Azure Data Lake Storage. For more information, see [Using Apache Flume with HDInsight](/archive/blogs/bigdatasupport/using-apache-flume-with-hdinsight).
## Transform
hdinsight Hdinsight Use Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/hdinsight-use-hive.md
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
STORED AS TEXTFILE LOCATION '/example/data/'; ```
-Hive also supports custom **serializer/deserializers (SerDe)** for complex or irregularly structured data. For more information, see the [How to use a custom JSON SerDe with HDInsight](https://web.archive.org/web/20190217104719/https://blogs.msdn.microsoft.com/bigdatasupport/2014/06/18/how-to-use-a-custom-json-serde-with-microsoft-azure-hdinsight/) document.
+Hive also supports custom **serializer/deserializers (SerDe)** for complex or irregularly structured data. For more information, see the [How to use a custom JSON SerDe with HDInsight](/archive/blogs/bigdatasupport/how-to-use-a-custom-json-serde-with-microsoft-azure-hdinsight) document.
For more information on file formats supported by Hive, see the [Language manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual).
Now that you've learned what Hive is and how to use it with Hadoop in HDInsight,
* [Upload data to HDInsight](../hdinsight-upload-data.md) * [Use Python User Defined Functions (UDF) with Apache Hive and Apache Pig in HDInsight](./python-udf-hdinsight.md)
-* [Use MapReduce jobs with HDInsight](hdinsight-use-mapreduce.md)
+* [Use MapReduce jobs with HDInsight](hdinsight-use-mapreduce.md)
hdinsight Using Json In Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/using-json-in-hive.md
The `json_tuple` UDF uses the [lateral view](https://cwiki.apache.org/confluence
### Use a custom SerDe
-SerDe is the best choice for parsing nested JSON documents. It lets you define the JSON schema, and then you can use the schema to parse the documents. For instructions, see [How to use a custom JSON SerDe with Microsoft Azure HDInsight](https://web.archive.org/web/20190217104719/https://blogs.msdn.microsoft.com/bigdatasupport/2014/06/18/how-to-use-a-custom-json-serde-with-microsoft-azure-hdinsight/).
+SerDe is the best choice for parsing nested JSON documents. It lets you define the JSON schema, and then you can use the schema to parse the documents. For instructions, see [How to use a custom JSON SerDe with Microsoft Azure HDInsight](/archive/blogs/bigdatasupport/how-to-use-a-custom-json-serde-with-microsoft-azure-hdinsight).
## Summary
The type of JSON operator in Hive that you choose depends on your scenario. With
For related articles, see: * [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache log4j file](./hdinsight-use-hive.md)
-* [Analyze flight delay data by using Interactive Query in HDInsight](../interactive-query/interactive-query-tutorial-analyze-flight-data.md)
+* [Analyze flight delay data by using Interactive Query in HDInsight](../interactive-query/interactive-query-tutorial-analyze-flight-data.md)
hdinsight Hbase Troubleshoot Bindexception Address Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/hbase-troubleshoot-bindexception-address-use.md
Restarting Apache HBase Region Servers during heavy workload activity. Below is
## Resolution
-Reduce the load on the HBase region servers before initiating a restart. Also, it's a good idea to first flush all the tables. For a reference on how to flush tables, see [HDInsight HBase: How to improve the Apache HBase cluster restart time by flushing tables](https://web.archive.org/web/20190112153155/https://blogs.msdn.microsoft.com/azuredatalake/2016/09/19/hdinsight-hbase-how-to-improve-hbase-cluster-restart-time-by-flushing-tables/).
+Reduce the load on the HBase region servers before initiating a restart. Also, it's a good idea to first flush all the tables. For a reference on how to flush tables, see [HDInsight HBase: How to improve the Apache HBase cluster restart time by flushing tables](/archive/blogs/azuredatalake/hdinsight-hbase-how-to-improve-hbase-cluster-restart-time-by-flushing-tables).
Alternatively, try to manually restart region servers on the worker nodes using the following commands:
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-business-continuity.md
It doesn't always take a catastrophic event to impact business functionality. Se
### HDInsight metastore
-HDInsight uses [Azure SQL Database](https://azure.microsoft.com/support/legal/sl).
+HDInsight uses [Azure SQL Database](https://azure.microsoft.com/support/legal/sl).
### HDInsight Storage
To learn more about the items discussed in this article, see:
* [Azure HDInsight business continuity architectures](./hdinsight-business-continuity-architecture.md) * [Azure HDInsight highly available solution architecture case study](./hdinsight-high-availability-case-study.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
hdinsight Hdinsight Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-troubleshoot-guide.md
Last updated 08/14/2019
| For information about | See these articles | | | |
-| HDInsight on Linux and optimization | - [Information about using HDInsight on Linux](hdinsight-hadoop-linux-information.md)<br>- [Apache Hadoop memory and performance troubleshooting](hdinsight-hadoop-stack-trace-error-messages.md)<br>- [Apache Hive query performance](https://web.archive.org/web/20190217214250/https://blogs.msdn.microsoft.com/bigdatasupport/2015/08/13/troubleshooting-hive-query-performance-in-hdinsight-hadoop-cluster/) |
+| HDInsight on Linux and optimization | - [Information about using HDInsight on Linux](hdinsight-hadoop-linux-information.md)<br>- [Apache Hadoop memory and performance troubleshooting](hdinsight-hadoop-stack-trace-error-messages.md) |
| Logs and dumps | - [Access Apache Hadoop YARN application logs on Linux](hdinsight-hadoop-access-yarn-app-logs-linux.md)<br>- [Enable heap dumps for Apache Hadoop services on Linux](hdinsight-hadoop-collect-debug-heap-dump-linux.md)| | Errors | - [Understand and resolve WebHCat errors](hdinsight-hadoop-templeton-webhcat-debug-errors.md)<br>- [Apache Hive settings to fix OutofMemory error](hdinsight-hadoop-hive-out-of-memory-error-oom.md) | | Tools | - [Optimize Apache Hive queries](hdinsight-hadoop-optimize-hive-query.md)<br>- [HDInsight IntelliJ tool](./spark/apache-spark-intellij-tool-plugin.md)<br>- [HDInsight Eclipse tool](./spark/apache-spark-eclipse-tool-plugin.md)<br>- [HDInsight VSCode tool](hdinsight-for-vscode.md)<br>- [HDInsight Visual Studio tool](./hadoop/apache-hadoop-visual-studio-tools-get-started.md) |
hdinsight Apache Spark Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-known-issues.md
HDInsight Spark clusters do not support the Spark-Phoenix connector.
**Mitigation:**
-You must use the Spark-HBase connector instead. For the instructions, see [How to use Spark-HBase connector](https://web.archive.org/web/20190112153146/https://blogs.msdn.microsoft.com/azuredatalake/2016/07/25/hdinsight-how-to-use-spark-hbase-connector/).
+You must use the Spark-HBase connector instead. For the instructions, see [How to use Spark-HBase connector](/archive/blogs/azuredatalake/hdinsight-how-to-use-spark-hbase-connector).
## Issues related to Jupyter Notebooks
hdinsight Apache Troubleshoot Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-troubleshoot-spark.md
spark-submit --master yarn-cluster --class com.microsoft.spark.application --num
### Additional reading
-[Apache Spark job submission on HDInsight clusters](https://web.archive.org/web/20190112152841/https://blogs.msdn.microsoft.com/azuredatalake/2017/01/06/spark-job-submission-on-hdinsight-101/)
+[Apache Spark job submission on HDInsight clusters](/archive/blogs/azuredatalake/spark-job-submission-on-hdinsight-101)
## Next steps
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Below is a table outlining the available parameters, defaults, and recommended r
| | - | | - | | QueryDelayIntervalInMilliseconds | This is the delay between each batch of resources being kicked off during the reindex job. | 500 MS (.5 seconds) | 50 to 5000: 50 will speed up the reindex job and 5000 will slow it down from the default. | | MaximumResourcesPerQuery | This is the maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-500 |
-| MaximumConcurreny | This is the number of batches done at a time. | 1 | 1-5 |
-| targetDataStoreUsagePercentrage | This allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Cosmos DB. | No present, which means that up to 100% can be used. | 1-100 |
+| MaximumConcurrency | This is the number of batches done at a time. | 1 | 1-5 |
+| targetDataStoreUsagePercentage | This allows you to specify what percentage of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Cosmos DB. | Not present, which means that up to 100% can be used. | 1-100 |
If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
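A hedged sketch of starting a reindex job with these parameters follows; the FHIR service URL, the bearer token, and the exact value types in the Parameters resource are assumptions based on the FHIR `Parameters` pattern rather than details confirmed in this excerpt.

```bash
# Hypothetical sketch: write a FHIR Parameters resource with reindex tuning values...
cat > reindex-parameters.json <<'EOF'
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "maximumConcurrency", "valueInteger": 3 },
    { "name": "targetDataStoreUsagePercentage", "valueInteger": 50 }
  ]
}
EOF

# ...then POST it to the $reindex operation on the FHIR service.
curl -X POST "https://<your-fhir-service>.azurehealthcareapis.com/\$reindex" \
    -H "Authorization: Bearer <ACCESS TOKEN>" \
    -H "Content-Type: application/fhir+json" \
    -d @reindex-parameters.json
```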
In this article, you've learned how to start a reindex job. To learn how to de
>[Defining custom search parameters](how-to-do-custom-search.md)
-
+
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-collect-and-transport-metrics.md
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in me
| Note | Description | |-|-| | 1 | All modules must emit metrics using the [Prometheus data model](https://prometheus.io/docs/concepts/data_model/). While [built-in metrics](how-to-access-built-in-metrics.md) enable broad workload visibility by default, custom modules can also be used to emit scenario-specific metrics to enhance the monitoring solution. Learn how to instrument custom modules using open-source libraries in the [Add custom metrics](how-to-add-custom-metrics.md) article. |
-| 2️ | The [metrics-collector module](https://aka.ms/edgemon-metric-collector) is a Microsoft-supplied IoT Edge module that collects workload module metrics and transports them off-device. Metrics collection uses a *pull* model. Collection frequency, endpoints, and filters can be configured to control the data egressed from the module. For more information, see [metrics collector configuration section](#metrics-collector-configuration) later in this article. |
+| 2️ | The [metrics-collector module](https://aka.ms/edgemon-metrics-collector) is a Microsoft-supplied IoT Edge module that collects workload module metrics and transports them off-device. Metrics collection uses a *pull* model. Collection frequency, endpoints, and filters can be configured to control the data egressed from the module. For more information, see [metrics collector configuration section](#metrics-collector-configuration) later in this article. |
| 3️ | You have two options for sending metrics from the metrics-collector module to the cloud. *Option 1* sends the metrics to Log Analytics.<sup>1</sup> The collected metrics are ingested into the specified Log Analytics workspace using a fixed, native table called `InsightsMetrics`. This table's schema is compatible with the Prometheus metrics data model.<br><br> This option requires access to the workspace on outbound port 443. The Log Analytics workspace ID and key must be specified as part of the module configuration. To enable in restricted networks, see [Enable in restricted network access scenarios](#enable-in-restricted-network-access-scenarios) later in this article. | 4️ | Each metric entry contains the `ResourceId` that was specified as part of [module configuration](#metrics-collector-configuration). This association automatically links the metric with the specified resource (for example, IoT Hub). As a result, the [curated IoT Edge workbook templates](how-to-explore-curated-visualizations.md) can retrieve metrics by issuing queries against the resource. <br><br> This approach also allows multiple IoT hubs to safely share a single Log Analytics workspace as a metrics database. | | 5️ | *Option 2* sends the metrics to IoT Hub.<sup>1</sup> The collector module can be configured to send the collected metrics as UTF-8 encoded JSON [device-to-cloud messages](../iot-hub/iot-hub-devguide-messages-d2c.md) via the `edgeHub` module. This option unlocks monitoring of locked-down IoT Edge devices that are allowed external access to only the IoT Hub endpoint. It also enables monitoring of child IoT Edge devices in a nested configuration where child devices can only access their parent device. |
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-schema.md
If you want to import an update into Device Update for IoT Hub, be sure you've r
| Name | Type | Description | Restrictions | | | | | |
-| Filename | string | Name of file | Must be unique within an update |
+| Filename | string | Name of file | Must be no more than 255 characters. Must be unique within an update |
| SizeInBytes | Int64 | Size of file in bytes. | Maximum of 800 MB per individual file, or 800 MB collectively per update | | Hashes | `Hashes` object | JSON object containing hash(es) of the file |
machine-learning Dsvm Samples And Walkthroughs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-samples-and-walkthroughs.md
In order to run these samples, you must have provisioned an [Ubuntu Data Science
## Available samples | Samples category | Description | Locations | | - | - | - |
-| R language | Samples illustrate scenarios such as how to connect with Azure-based cloud data stores and how to compare open-source R and Microsoft Machine Learning Server. They also explain how to operationalize models on Microsoft Machine Learning Server and SQL Server. <br/> [R language](#r-language) | <br/>`~notebooks` <br/> <br/> `~samples/MicrosoftR` <br/> <br/> `~samples/RSqlDemo` <br/> <br/> `~samples/SQLRServices`<br/> <br/>|
| Python language | Samples explain scenarios like how to connect with Azure-based cloud data stores and how to work with Azure Machine Learning. <br/> [Python language](#python-language) | <br/>`~notebooks` <br/><br/>| | Julia language | Provides a detailed description of plotting and deep learning in Julia. Also explains how to call C and Python from Julia. <br/> [Julia language](#julia-language) |<br/> Windows:<br/> `~notebooks/Julia_notebooks`<br/><br/> Linux:<br/> `~notebooks/julia`<br/><br/> | | Azure Machine Learning | Illustrates how to build machine-learning and deep-learning models with Machine Learning. Deploy models anywhere. Use automated machine learning and intelligent hyperparameter tuning. Also use model management and distributed training. <br/> [Machine Learning](#azure-machine-learning) | <br/>`~notebooks/AzureML`<br/> <br/>|
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
Learn about the specific definitions of these metrics in [Understand automated m
|Classification | Regression | Time Series Forecasting |--|--|--
-|`accuracy`| `spearman_correlation` | `spearman_correlation`
-|`AUC_weighted` | `normalized_root_mean_squared_error` | `normalized_root_mean_squared_error`
-|`average_precision_score_weighted` | `r2_score` | `r2_score`
-|`norm_macro_recall` | `normalized_mean_absolute_error` | `normalized_mean_absolute_error`
+|`accuracy`| `spearman_correlation` | `normalized_root_mean_squared_error`
+|`AUC_weighted` | `normalized_root_mean_squared_error` | `r2_score`
+|`average_precision_score_weighted` | `r2_score` | `normalized_mean_absolute_error`
+|`norm_macro_recall` | `normalized_mean_absolute_error` |
|`precision_score_weighted` | ### Primary metrics for classification scenarios
See regression notes, above.
| Metric | Example use case(s) | | | - |
-| `spearman_correlation` | |
-| `normalized_root_mean_squared_error` | Price prediction (forecasting), Inventory optimization, Demand forecasting |
+| `normalized_root_mean_squared_error` | Price prediction (forecasting), Inventory optimization, Demand forecasting |
| `r2_score` | Price prediction (forecasting), Inventory optimization, Demand forecasting | | `normalized_mean_absolute_error` | |
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Title: Using Azure Migrate with private endpoints
-description: Use Azure Migrate private link support to discover, assess, and migrate using private link.
+ Title: Use Azure Migrate with private endpoints
+description: Use Azure Migrate private link support to discover, assess, and migrate by using Azure Private Link.
ms.
Last updated 05/10/2020
-# Using Azure Migrate with private endpoints
+# Use Azure Migrate with private endpoints
-This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network using [Azure Private Link](../private-link/private-endpoint-overview.md).
+This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md).
-You can use the [Azure Migrate: Discovery and Assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to the Azure Migrate service over an ExpressRoute private peering or a site to site VPN connection, using Azure Private Link.
+You can use the [Azure Migrate: Discovery and assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
-The private endpoint connectivity method is recommended when there is an organizational requirement to access the Azure Migrate service and other Azure resources without traversing public networks. Using the Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
+We recommend the private endpoint connectivity method when there's an organizational requirement to access Azure Migrate and other Azure resources without traversing public networks. By using Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
## Support requirements
+Review the following required permissions and the supported scenarios and tools.
+ ### Required permissions
-**Contributor + User Access Administrator** or **Owner** permissions on the subscription.
+You must have Contributor + User Access Administrator or Owner permissions on the subscription.
### Supported scenarios and tools **Deployment** | **Details** | **Tools** | |
-**Discovery and Assessment** | Perform an agentless, at-scale discovery and assessment of your servers running on any platform ΓÇô hypervisor platforms such as [VMware vSphere](./tutorial-discover-vmware.md) or [Microsoft Hyper-V](./tutorial-discover-hyper-v.md), public clouds such as [AWS](./tutorial-discover-aws.md) or [GCP](./tutorial-discover-gcp.md), or even [bare metal servers](./tutorial-discover-physical.md). | Azure Migrate: Discovery and Assessment <br/>
-**Software inventory** | Discover apps, roles, and features running on VMware VMs. | Azure Migrate: Discovery and Assessment
-**Dependency visualization** | Use the dependency analysis capability to identify and understand dependencies across servers. <br/> [Agentless dependency visualization](./how-to-create-group-machine-dependencies-agentless.md) is supported natively with Azure Migrate private link support. <br/>[Agent-based dependency visualization](./how-to-create-group-machine-dependencies.md) requires Internet connectivity. [learn how](../azure-monitor/logs/private-link-security.md) to use private endpoints for agent-based dependency visualization. | Azure Migrate: Discovery and Assessment |
+**Discovery and assessment** | Perform an agentless, at-scale discovery and assessment of your servers running on any platform. Examples include hypervisor platforms such as [VMware vSphere](./tutorial-discover-vmware.md) or [Microsoft Hyper-V](./tutorial-discover-hyper-v.md), public clouds such as [AWS](./tutorial-discover-aws.md) or [GCP](./tutorial-discover-gcp.md), or even [bare metal servers](./tutorial-discover-physical.md). | Azure Migrate: Discovery and assessment <br/>
+**Software inventory** | Discover apps, roles, and features running on VMware VMs. | Azure Migrate: Discovery and assessment
+**Dependency visualization** | Use the dependency analysis capability to identify and understand dependencies across servers. <br/> [Agentless dependency visualization](./how-to-create-group-machine-dependencies-agentless.md) is supported natively with Azure Migrate private link support. <br/>[Agent-based dependency visualization](./how-to-create-group-machine-dependencies.md) requires internet connectivity. Learn how to use [private endpoints for agent-based dependency visualization](../azure-monitor/logs/private-link-security.md). | Azure Migrate: Discovery and assessment |
**Migration** | Perform [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) or use the agent-based approach to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider. | Azure Migrate: Server Migration >[!Note]
->
-> [Agentless VMware migrations](./tutorial-migrate-vmware.md) require Internet access or connectivity via ExpressRoute Microsoft peering. <br/> [Learn how](./replicate-using-expressroute.md) to use private endpoints to perform replications over ExpressRoute private peering or a site-to-site (S2S) VPN connection. <br/><br/>
+> [Agentless VMware migrations](./tutorial-migrate-vmware.md) require internet access or connectivity via ExpressRoute Microsoft peering. Learn how to use [private endpoints to perform replications over ExpressRoute private peering or a S2S VPN connection](./replicate-using-expressroute.md).
#### Other integrated tools
-Other migration tools may not be able to upload usage data to the Azure Migrate project if the public network access is disabled. The Azure Migrate project should be configured to allow traffic from all networks to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings.
-
+Other migration tools might not be able to upload usage data to the Azure Migrate project if the public network access is disabled. The Azure Migrate project should be configured to allow traffic from all networks to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings.
-To enable public network access for the Azure Migrate project, Sign in to Azure portal, Navigate to **Azure Migrate properties** page on the Azure portal, select **No** > **Save**.
+To enable public network access for the Azure Migrate project, sign in to the Azure portal, go to the **Azure Migrate Properties** page in the portal, and select **No** > **Save**.
-![Diagram that shows how to change the network access mode.](./media/how-to-use-azure-migrate-with-private-endpoints/migration-project-properties.png)
+![Screenshot that shows how to change the network access mode.](./media/how-to-use-azure-migrate-with-private-endpoints/migration-project-properties.png)
-### Other considerations
+### Other considerations
**Considerations** | **Details**
--- | ---
-**Pricing** | For pricing information, see [Azure blob pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-**Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You may need ~15 IP addresses in the virtual network.
+**Pricing** | For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+**Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You might need about 15 IP addresses in the virtual network.
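
If you need to carve out address space for these private endpoints, the following Azure PowerShell sketch adds a dedicated subnet to an existing virtual network. It's illustrative only: the resource group, virtual network, and address prefix are hypothetical placeholders.

```
# Hedged sketch: add a dedicated subnet for the Azure Migrate private endpoints.
# "ContosoMigrateRG", "ContosoVnet", and the /27 prefix are placeholders; adjust to your environment.
# A /27 leaves enough usable addresses for the roughly 15 noted above.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "ContosoMigrateRG" -Name "ContosoVnet"
Add-AzVirtualNetworkSubnetConfig -Name "MigratePrivateEndpoints" -AddressPrefix "10.0.2.0/27" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork
```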
## Create a project with private endpoint connectivity
-Use this [article](./create-manage-projects.md#create-a-project-for-the-first-time) to set up a new Azure Migrate project.
+To set up a new Azure Migrate project, see [Create and manage projects](./create-manage-projects.md#create-a-project-for-the-first-time).
> [!Note]
-> You cannot change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
+> You can't change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
-In the **Advanced** configuration section, provide the below details to create a private endpoint for your Azure Migrate project.
+In the **Advanced** configuration section, provide the following details to create a private endpoint for your Azure Migrate project.
1. In **Connectivity method**, choose **Private endpoint**.
-2. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools may not be able to upload usage data to the Azure Migrate project if public network access is disabled. [Learn more.](#other-integrated-tools)
-3. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
-4. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
-5. In **Subnet**, select the subnet for the private endpoint.
+1. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools might not be able to upload usage data to the Azure Migrate project if public network access is disabled. Learn more about [other integrated tools](#other-integrated-tools).
+1. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
+1. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
+1. In **Subnet**, select the subnet for the private endpoint.
- ![Create project](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
+ ![Screenshot that shows the Advanced section on the Create project page.](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
-6. Select **Create**. to create a migrate project and attach a Private Endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Do not close this page while the project creation is in progress.
+1. Select **Create** to create a migration project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Don't close this page while the project creation is in progress.
-## Discover and assess servers for migration using Azure Private Link
+## Discover and assess servers for migration by using Private Link
+
+This section describes how to set up the Azure Migrate appliance. Then you'll use it to discover and assess servers for migration.
### Set up the Azure Migrate appliance

1. In **Discover machines** > **Are your machines virtualized?**, select the server type.
-2. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
-3. Select **Generate key** to create the required Azure resources.
+1. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
+1. Select **Generate key** to create the required Azure resources.
> [!Important]
- > Do not close the Discover machines page during the creation of resources.
- - At this step, Azure Migrate creates a key vault, storage account, Recovery Services vault (only for agentless VMware migrations), and a few internal resources and attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
- - Once the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This enables the Azure Migrate appliance and other software components residing in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
- - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project, and grants permissions to the managed identity to securely access the storage account.
+ > Don't close the **Discover machines** page during the creation of resources.
+ - At this step, Azure Migrate creates a key vault, a storage account, a Recovery Services vault (only for agentless VMware migrations), and a few internal resources. Azure Migrate attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
+ - After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
+ - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and grants permissions to the managed identity to securely access the storage account.
-4. After the key is successfully generated, copy the key details to configure and register the appliance.
+1. After the key is successfully generated, copy the key details to configure and register the appliance.
-#### Download the appliance installer file
+#### Download the appliance installer file
-Azure Migrate: Discovery and assessment use a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
+Azure Migrate: Discovery and assessment uses a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
> [!Note]
-> The option to deploy an appliance using a template (OVA for servers on VMware environment and VHD Hyper-V environment) isn't supported for Azure Migrate projects with private endpoint connectivity.
+> The option to deploy an appliance by using a template (OVA for servers in a VMware environment and VHD for servers in a Hyper-V environment) isn't supported for Azure Migrate projects with private endpoint connectivity.
To set up the appliance:
- 1. Download the zipped file containing the installer script from the portal.
- 2. Copy the zipped file on the server that will host the appliance.
- 3. After downloading the zipped file, verify the file security
- 4. Run the installer script to deploy the appliance.
+ 1. Download the zipped file that contains the installer script from the portal.
+ 1. Copy the zipped file to the server that will host the appliance.
+ 1. After you download the zipped file, verify the file security.
+ 1. Run the installer script to deploy the appliance.
-Here are the download links for each of the scenario:
+Here are the download links for each of the scenarios.
Scenario | Download link | Hash value
--- | --- | ---
VMware scale-out | [AzureMigrateInstaller-VMware-Public-Scaleout-PrivateLink.zip
#### Verify security
-Check that the zipped file is secure, before you deploy it.
+Check that the zipped file is secure before you deploy it.
1. Open an administrator command window on the server to which you downloaded the file.
-2. Run the following command to generate the hash for the zipped file
+1. To generate the hash for the zipped file, run the following command:
    - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
    - Example usage for public cloud: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink.zip SHA256 ```
-3. Verify the latest version of the appliance by comparing the hash values from the table above.
-
-Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario (VMware/Hyper-V/Physical or other) and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
+1. Verify the latest version of the appliance by comparing the hash values from the preceding table.
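
If you prefer PowerShell over CertUtil, the built-in `Get-FileHash` cmdlet produces the same SHA256 value. This is a hedged alternative; the path below reuses the example path from the CertUtil command above, so adjust it to wherever you saved the download.

```
# Hedged alternative to the CertUtil command above; compare the output hash with the value in the table.
Get-FileHash -Path "C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink.zip" -Algorithm SHA256
```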
+Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
#### Run the script

1. Extract the zipped file to a folder on the server that will host the appliance.
-2. Launch PowerShell on the machine, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
+1. Open PowerShell on the machine, with administrator (elevated) privileges.
+1. Change the PowerShell directory to the folder that contains the contents extracted from the downloaded zipped file.
+1. Run the script **AzureMigrateInstaller.ps1**, as follows:
    ```
    PS C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink> .\AzureMigrateInstaller.ps1
    ```
-5. After the script runs successfully, it launches the appliance configuration manager so that you can configure the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+1. After the script runs successfully, it launches the appliance configuration manager so that you can configure the appliance. If you come across any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
### Configure the appliance and start continuous discovery
-Open a browser on any machine that can connect to the appliance server, and open the URL of the appliance configuration
+Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager: `https://<appliance name or IP address>:44368`. Or, you can open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
#### Set up prerequisites
-1. Read the third-party information and accept the **license terms**.
+1. Read the third-party information, and accept the **license terms**.
-2. In the configuration manager > **Set up prerequisites**, do the following:
+1. In the configuration manager under **Set up prerequisites**, do the following:
   - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
     - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port.
     - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
- - You can add a list of URLs/IP addresses that should bypass the proxy server.
- - Select **Save** to register the configuration if you have updated the proxy server details or added URLs/IP addresses to bypass proxy.
+ - You can add a list of URLs or IP addresses that should bypass the proxy server.
+ - Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy.
> [!Note]
- > If you get an error with aka.ms/* link during connectivity check and you do not want the appliance to access this URL over the internet, you need to disable the auto update service on the appliance by following the steps [**here**](./migrate-appliance.md#turn-off-auto-update). After the auto-update has been disabled, the aka.ms/* URL connectivity check will be skipped.
+ > If you get an error with the aka.ms/* link during the connectivity check and you don't want the appliance to access this URL over the internet, disable the auto-update service on the appliance. Follow the steps in [Turn off auto-update](./migrate-appliance.md#turn-off-auto-update). After you've disabled auto-update, the aka.ms/* URL connectivity check will be skipped.
   - **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
   - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
     > [!Note]
- > If you have chosen to disable auto update service on the appliance, you can update the appliance services manually to get the latest versions of the services by following the steps [**here**](./migrate-appliance.md#manually-update-an-older-version).
- - **Install VDDK**: (_Needed only for VMware appliance)_ The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware, and extract the downloaded zipped contents to the specified location on the appliance, as provided in the **Installation instructions**.
+ > If you disabled auto-update on the appliance, you can update the appliance services manually to get the latest versions of the services. Follow the steps in [Manually update an older version](./migrate-appliance.md#manually-update-an-older-version).
+ - **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions.
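
Before you continue, you can optionally confirm from the appliance server that a required URL is reachable through the proxy configured in the **Connectivity** step above. This is a hedged sketch: the proxy address and port are placeholders, and `https://aka.ms` is used only because the appliance performs its own aka.ms check; pick any URL from the required URLs list.

```
# Hedged sketch: test outbound access through the proxy configured above.
# Replace the proxy placeholder with your own address and port.
# Add -ProxyCredential (Get-Credential) if the proxy requires authentication.
Invoke-WebRequest -Uri "https://aka.ms" -Proxy "http://ProxyIPAddress:8080" -UseBasicParsing
```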
#### Register the appliance and start continuous discovery
-After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for respective scenarios:
+After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for the respective scenarios:
- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)
- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)
-- [Physical Servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
+- [Physical servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)
- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)

>[!Note]
-> If you get a DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step on portal are reachable from the on-premises server hosting the Azure Migrate appliance. [Learn more on how to verify network connectivity](./troubleshoot-network-connectivity.md).
+> If you get DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server that hosts the Azure Migrate appliance. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
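
One way to check this from the appliance server is to resolve one of the resource FQDNs and confirm that it returns a private IP address. The storage account name below is hypothetical; use the names of the resources that the **Generate key** step actually created.

```
# Hedged sketch: a private IP (in your virtual network's address space) indicates the private DNS zone resolves correctly;
# a public IP suggests the appliance isn't resolving through the private endpoint.
Resolve-DnsName -Name "contosomigratesa.blob.core.windows.net" | Select-Object Name, Type, IPAddress
```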
### Assess your servers for migration to Azure
-After the discovery is complete, assess your servers ([VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-vmware-azure-vm.md), [AWS VMs](./tutorial-assess-aws.md), [GCP VMs](./tutorial-assess-gcp.md)) for migration to Azure VMs or Azure VMware Solution (AVS), using the Azure Migrate: Discovery and Assessment tool.
+After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-physical.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
-You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and Assessment tool using an imported comma-separated values (CSV) file.
+You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
-## Migrate servers to Azure using Azure Private Link
+## Migrate servers to Azure by using Private Link
-The following sections describe the steps required to use Azure Migrate with [private endpoints](../private-link/private-endpoint-overview.md) for migrations using ExpressRoute private peering or VPN connections.
+The following sections describe the steps required to use Azure Migrate with [private endpoints](../private-link/private-endpoint-overview.md) for migrations by using ExpressRoute private peering or VPN connections.
-This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider using Azure private endpoints. You can use a similar approach for performing [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) using private link.
+This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider by using Azure private endpoints. You can use a similar approach for performing [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) by using Private Link.
>[!Note]
->[Agentless VMware migrations](./tutorial-assess-physical.md) require Internet access or connectivity via ExpressRoute Microsoft peering.
+>[Agentless VMware migrations](./tutorial-migrate-vmware.md) require internet access or connectivity via ExpressRoute Microsoft peering.
### Set up a replication appliance for migration
-The following diagram illustrates the agent-based replication workflow with private endpoints using the Azure Migrate: Server Migration tool.
+The following diagram illustrates the agent-based replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
-![Replication architecture](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
+![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
-The tool uses a replication appliance to replicate your servers to Azure. See this article to [prepare and set up a machine for the replication appliance. ](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance)
+The tool uses a replication appliance to replicate your servers to Azure. Learn more about how to [prepare and set up a machine for the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance).
-After you set up the replication appliance, use the following instructions to create the required resources for migration.
+After you set up the replication appliance, follow these steps to create the required resources for migration.
1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
-2. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
-3. Select **Create resources** to create the required Azure resources. Do not close the page during the creation of resources.
- - This creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
- - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This adds five fully qualified private names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
- - The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone links to the private endpoint virtual network and allows the on-premises replication appliance to resolve the fully qualified domain names to their private IP addresses.
+1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
+1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
+ - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
+ - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
+ - The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone links to the private endpoint virtual network and allows the on-premises replication appliance to resolve the FQDNs to their private IP addresses.
-4. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine hosting the replication appliance. [Learn more on how to verify network connectivity.](./troubleshoot-network-connectivity.md)
+1. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md).
-5. Once you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Review the [detailed steps here](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
+1. After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
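
A quick way to perform this check from the replication appliance is sketched below. The FQDN is only the pattern shown above; replace it with an actual FQDN from the vault's private endpoint before running it.

```
# Hedged sketch: confirm name resolution and HTTPS reachability for one of the vault's private-link FQDNs.
$fqdn = "{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com"   # placeholder pattern; use a real FQDN
Resolve-DnsName -Name $fqdn
Test-NetConnection -ComputerName $fqdn -Port 443
```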
-### Replicate servers to Azure using Azure Private Link
+### Replicate servers to Azure by using Private Link
-Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
+Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
-In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the drop-down to select a storage account to replicate over a private link.
+In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
-If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
+If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
-Additionally, to enable replications over a private link, [create a private endpoint for the storage account.](#create-a-private-endpoint-for-the-storage-account-optional)
+To enable replications over a private link, [create a private endpoint for the storage account](#create-a-private-endpoint-for-the-storage-account-optional).
#### Grant access permissions to the Recovery Services vault
-You must grant the permissions to the recovery Services vault for authenticated access to the cache/replication storage account.
+You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
-To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps:
+To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
-**_Identify the recovery services vault and the managed identity object ID_**
+**Identify the Recovery Services vault and the managed identity object ID**
-You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **properties** page.
+You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **Properties** page.
-1. Go to the **Azure Migrate hub**, select **Overview** on the Azure Migrate: Server Migration tile.
+1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
- ![Overview page on the Azure Migrate hub](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
+ ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
-2. On the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have _Private endpoint_ as the **connectivity type** and _Other_ as the **replication type**. You will need this information while providing access to the vault.
+1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
- ![Azure Migrate: Server Migration properties page](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
+ ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
-**_Permissions to access the storage account_**
+**Permissions to access the storage account**
- To the managed identity of the vault you must be grant the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
+ To the managed identity of the vault, you must grant the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
>[!Note]
-> For migrating Hyper-V VMs to Azure using private link, you must grant access to both the replication storage account and cache storage account.
+> When you migrate Hyper-V VMs to Azure by using Private Link, you must grant access to both the replication storage account and the cache storage account.
-The role permissions for the Resource Manager vary depending on the type of the storage account.
+The role permissions for the Azure Resource Manager vary depending on the type of storage account.
-|**Storage Account Type** | **Role Permissions**|
+|**Storage account type** | **Role permissions**|
| | |
-|Standard Type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
-|Premium Type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
-
-1. Go to the replication/cache storage account selected for replication. Select **Access control (IAM)** in the left pane.
-
-1. In the **Add a role assignment** section, select **Add**:
+|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
+|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
- ![Add a role assignment](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
+1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
+1. Select **+ Add**, and select **Add role assignment**.
-1. On the **Add role assignment** page, in the **Role**
- field, select the appropriate role from the permissions list mentioned above. Enter the name of the vault noted previously and select **Save**.
+ ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
- ![Provide role based access](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
+1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously, and select **Save**.
-4. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, select **Allow trusted Microsoft services to access this storage account** in **Exceptions** section in the **Networking** tab.
+ ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
-![Allow trusted Microsoft services for storage account](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
+1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
+ ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
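
If you prefer to script the preceding permission steps, here's a hedged Azure PowerShell sketch (Az.Resources and Az.Storage modules). The object ID, resource group, and storage account names are hypothetical placeholders; swap **Storage Blob Data Contributor** for **Storage Blob Data Owner** if the account is Premium.

```
# Hedged sketch of the permission steps above; all names and IDs are placeholders.
$principalId = "<managed-identity-object-ID-from-the-vault-Properties-page>"
$storage     = Get-AzStorageAccount -ResourceGroupName "ContosoMigrateRG" -Name "contosoreplicationsa"

# Standard storage account: Contributor + Storage Blob Data Contributor (use Storage Blob Data Owner for Premium).
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -Scope $storage.Id
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Blob Data Contributor" -Scope $storage.Id

# Allow trusted Microsoft services when access is restricted to selected networks.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "ContosoMigrateRG" -Name "contosoreplicationsa" -Bypass AzureServices
```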
### Create a private endpoint for the storage account (optional)
-To replicate using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: **_blob_**).
+To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
>[!Note]
->
-> - You can create private endpoints only on a General Purpose v2 (GPv2) storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/)
+> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-Create The private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
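
If you want to script this step instead of using the portal, the following Azure PowerShell sketch creates a blob private endpoint for the storage account. It's illustrative only; the resource group, virtual network, subnet, and storage account names are hypothetical.

```
# Hedged sketch: create a private endpoint (target subresource: blob) for the cache/replication storage account.
$storage = Get-AzStorageAccount -ResourceGroupName "ContosoMigrateRG" -Name "contosoreplicationsa"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "ContosoMigrateRG" -Name "ContosoVnet"
$subnet  = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "MigratePrivateEndpoints"

$connection = New-AzPrivateLinkServiceConnection -Name "replication-blob-plsc" -PrivateLinkServiceId $storage.Id -GroupId "blob"
New-AzPrivateEndpoint -ResourceGroupName "ContosoMigrateRG" -Name "replication-blob-pe" -Location $vnet.Location `
    -Subnet $subnet -PrivateLinkServiceConnection $connection
```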
-Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network and adds the DNS records for the resolution of new IPs and fully qualified domain names created. Learn more about [private DNS zones.](../dns/private-dns-overview.md)
+Select **Yes**, and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
-If the user creating the private endpoint is also the storage account owner, the private endpoint creation will be auto approved. Otherwise, the owner of the storage account must approve the private endpoint for usage. To approve or reject a requested private endpoint connection, go to **Private endpoint connections** under **Networking** on the storage account page.
+If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
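
As the storage account owner, you can also review and approve pending connections with Azure PowerShell instead of the portal. A hedged sketch, with hypothetical resource names:

```
# Hedged sketch: list pending private endpoint connections on the storage account and approve them.
$storage = Get-AzStorageAccount -ResourceGroupName "ContosoMigrateRG" -Name "contosoreplicationsa"
Get-AzPrivateEndpointConnection -PrivateLinkResourceId $storage.Id |
    Where-Object { $_.PrivateLinkServiceConnectionState.Status -eq "Pending" } |
    ForEach-Object { Approve-AzPrivateEndpointConnection -ResourceId $_.Id }
```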
-Review the status of the private endpoint connection state before proceeding.
+Review the status of the private endpoint connection state before you continue.
-![Private Endpoint approval status](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
+![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
-After you've created the private endpoint, use the drop-down in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
+After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
-Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. [Learn more on how to verify network connectivity.](./troubleshoot-network-connectivity.md)
+Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
>[!Note]
->
-> - For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
+> For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
-Next, follow these instructions to [review and start replication](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) and [perform migrations](./tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
+Next, follow the instructions to [review and start replication](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) and [perform migrations](./tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
## Next steps

-- [Complete the migration process](./tutorial-migrate-physical-virtual-machines.md#complete-the-migration) and review the [post-migration best practices](./tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
+- Complete the [migration process](./tutorial-migrate-physical-virtual-machines.md#complete-the-migration).
+- Review the [post-migration best practices](./tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
| Switzerland West | 51.107.152.0 | | |
| UAE Central | 20.37.72.64 | | |
| UAE North | 65.52.248.0 | | |
-| UK South | 51.140.184.11 | | |
+| UK South | 51.140.144.32 | 51.140.184.11 | | |
| UK West | 51.141.8.11 | | |
| West Central US | 13.78.145.25 | | |
| West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
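
If you need to confirm that a client can reach its regional gateway, a hedged connectivity check from PowerShell is shown below. The UK South gateway IP from the table above is used only as an example; substitute your region's gateway IP.

```
# Hedged sketch: test TCP connectivity to the regional gateway on the MySQL port.
Test-NetConnection -ComputerName "51.140.144.32" -Port 3306
```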
mysql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-compute-storage.md
You can create an Azure Database for MySQL Flexible Server in one of three diffe
| VM series| B-series | Ddsv4-series | Edsv4-series|
| vCores | 1, 2 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64 |
| Memory per vCore | Variable | 4 GiB | 8 GiB * |
-| Storage size | 5 GiB to 16 TiB | 5 GiB to 16 TiB | 5 GiB to 16 TiB |
+| Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB |
| Database backup retention period | 1 to 35 days | 1 to 35 days | 1 to 35 days |

\* With the exception of E64ds_v4 (Memory Optimized) SKU, which has 504 GB of memory
To get more details about the compute series available, refer to Azure VM docume
## Storage
-The storage you provision is the amount of storage capacity available to your flexible server. Storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. In all compute tiers, the minimum storage supported is 5 GiB and maximum is 16 TiB. Storage is scaled in 1 GiB increments and can be scaled up after the server is created.
+The storage you provision is the amount of storage capacity available to your flexible server. Storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. In all compute tiers, the minimum storage supported is 20 GiB and maximum is 16 TiB. Storage is scaled in 1 GiB increments and can be scaled up after the server is created.
>[!NOTE] > Storage can only be scaled up, not down.
We recommend that you <!--turn on storage auto-grow or to--> set up an alert to
### Storage auto-grow
-Storage auto-grow is not yet available for Azure Database for MySQL Flexible Server.
+Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage grows automatically without impacting the workload. Storage auto-grow is enabled by default for all newly created servers. For servers with less than or equal to 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB. The maximum storage limits specified above apply.
+
+For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
+
+Remember that storage can only be scaled up, not down.
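
The growth rule above can be expressed as a small calculation. The following PowerShell sketch doesn't call any Azure API; it simply mirrors the documented thresholds so you can sanity-check the examples.

```
# Hedged sketch of the documented auto-grow rule; inputs and outputs are in GB.
function Get-AutoGrowTarget {
    param([int]$ProvisionedGB, [double]$FreeGB)
    if ($ProvisionedGB -le 100) {
        # 100 GB or less: grow by 5 GB when free space drops below 10% of provisioned storage.
        if ($FreeGB -lt ($ProvisionedGB * 0.1)) { return $ProvisionedGB + 5 }
    }
    elseif ($FreeGB -lt 10) {
        # More than 100 GB: grow by 5% when free space drops below 10 GB.
        return [int]($ProvisionedGB * 1.05)
    }
    return $ProvisionedGB
}

Get-AutoGrowTarget -ProvisionedGB 1000 -FreeGB 9    # 1050, matching the example above
Get-AutoGrowTarget -ProvisionedGB 10 -FreeGB 0.5    # 15
```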
## IOPS

Azure Database for MySQL – Flexible Server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
-The minimum IOPS is 100 across all compute sizes and the maximum IOPS is determined by the selected compute size. In preview, the maximum IOPS supported is 20,000 IOPS.
+The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. In preview, the maximum IOPS supported is 20,000 IOPS.
The maximum IOPS per compute size is shown below:
The maximum IOPS is dependent on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), and [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md) documentation.

> [!Important]
-> **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, storage provisioned in GiB * 3)<br>
-> **Minimum IOPS** is 100 across all compute sizes<br>
+> **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, 300 + storage provisioned in GiB * 3)<br>
+> **Minimum IOPS** is 360 across all compute sizes<br>
+> **Maximum IOPS** is determined by the selected compute size. In preview, the maximum IOPS supported is 20,000 IOPS.

You can monitor your I/O consumption in the Azure portal (with Azure Monitor) by using the [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the maximum available for your compute size, you need to scale up your server's compute.
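
To make the complimentary IOPS formula in the note above concrete, here's a hedged PowerShell sketch. The input values are illustrative only and aren't taken from any specific compute size; look up the real *Max uncached disk throughput: IOPS* value for your SKU.

```
# Hedged sketch of the complimentary IOPS formula above.
$maxUncachedIops = 3200          # illustrative "Max uncached disk throughput: IOPS" for a hypothetical compute size
$storageGiB      = 200           # provisioned storage in GiB
$complimentaryIops = [Math]::Min($maxUncachedIops, 300 + ($storageGiB * 3))
$complimentaryIops               # 900 for these inputs: min(3200, 300 + 600)
```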
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
+
+ Title: What's new in Azure Database for MySQL Flexible Server
+description: Learn about recent updates to Azure Database for MySQL - Flexible server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
+++++ Last updated : 06/18/2021+
+# What's new in Azure Database for MySQL - Flexible Server?
+
+[Azure Database for MySQL - Flexible Server](./overview.md#azure-database-for-mysqlflexible-server-preview) is a deployment mode that's designed to provide more granular control and flexibility over database management functions and configuration settings than the Single Server deployment mode. The service currently supports community versions of MySQL 5.7 and 8.0.
+
+This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+
+## June 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Improved performance on smaller storage servers**
+
+ Beginning June 21, 2021, the minimum allowed provisioned storage size for all newly created servers increases from 5 GB to 20 GB. In addition, the available free IOPS increases from 100 to 300. These changes are summarized in the following table:
+
+ | **Current** | **As of June 21, 2021** |
+ |:-|:-|
+ | Minimum allowed storage size: 5 GB | Minimum allowed storage size: 20 GB |
+ | IOPS available: Max(100, 3 * [Storage provisioned in GB]) | IOPS available: (300 + 3 * [Storage provisioned in GB]) |
+
+- **Free 12-month offer**
+
+ Beginning June 15, 2021, new Azure users can take advantage of our 12-month [Azure free account](https://azure.microsoft.com/free/), which provides up to 750 hours of Azure Database for MySQL – Flexible Server and 32 GB of storage per month. Customers can use this offer to develop and deploy applications that use Azure Database for MySQL – Flexible Server (Preview).
+
+- **Storage auto-grow**
+
+ Storage auto-grow prevents a server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage grows automatically without impacting the workload. Beginning June 21, 2021, all newly created servers will have storage auto-grow enabled by default. [Learn more](concepts-compute-storage.md#storage-auto-grow).
+
+- **Data-in Replication**
+
+ Flexible Server now supports [Data-in Replication](concepts-data-in-replication.md). Use this feature to synchronize and migrate data from a MySQL server running on-premises, in virtual machines, on Azure Database for MySQL Single Server, or on database services outside Azure to Azure Database for MySQL – Flexible Server. Learn more about [How to configure Data-in Replication](how-to-data-in-replication.md).
+
+- **GitHub Actions support with the Azure CLI**
+
+  The Flexible Server CLI now allows you to automate your workflow to deploy updates with GitHub Actions. Use this feature to set up and deploy your database updates with a MySQL GitHub Actions workflow. These CLI commands help you set up the repository to enable continuous deployment for ease of development. [Learn more](/cli/azure/mysql/flexible-server/deploy?view=azure-cli-latest&preserve-view=true).
+
+- **Zone redundant HA forced failover fixes**
+
+ This release includes fixes for known issues related to forced failover to ensure that server parameters and additional IOPS changes are persisted across failovers.
+
+- **Known issue**
+
+ If a client application trying to connect to an instance of Flexible Server is in a peered virtual network (VNet), the application may not be able to connect using the Flexible Server *servername* because it cannot resolve the DNS name for the Flexible Server instance from a peered VNet. [Learn more](concepts-networking.md#connecting-from-peered-vnets-in-same-azure-region).
+
+## May 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Extended regional availability (France Central, Brazil South, and Switzerland North)**
+
+ The public preview of Azure Database for MySQL - Flexible Server is now available in the France Central, Brazil South, and Switzerland North regions. [Learn more](overview.md#azure-regions).
+
+- **SSL/TLS 1.2 enforcement can be disabled**
+
+ This release provides the enhanced flexibility to customize enforcement of SSL and minimum TLS version. To learn more, see [Connect to Azure Database for MySQL - Flexible Server with encrypted connections](how-to-connect-tls-ssl.md).
+
+- **Zone redundant HA available in UK South and Japan East region**
+
+ Azure Database for MySQL - Flexible Server now offers zone redundant high availability in two additional regions: UK South and Japan East. [Learn more](overview.md#azure-regions).
+
+- **Known issues**
+
+  - Additional IOPS changes don't take effect in zone redundant HA enabled servers. Customers can work around the issue by disabling HA, scaling IOPS, and then re-enabling zone redundant HA.
+ - After force failover, the standby availability zone is inaccurately reflected in the portal. (No workaround)
+ - Server parameter changes don't take effect in zone redundant HA enabled server after forced failover. (No workaround)
+
+## April 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Ability to force failover to standby server with zone redundant high availability released**
+
+ Customers can now manually force a failover to test functionality with their application scenarios, which can help them to prepare in case of any outages. [Learn more](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/forced-failover-for-azure-database-for-mysql-flexible-server/ba-p/2280671).
+
+- **PowerShell module for Flexible Server released**
+
+ Developers can now use PowerShell to provision, manage, operate, and support MySQL Flexible Servers and dependent resources. [Learn more](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-the-mysql-flexible-server-powershell-module/ba-p/2203383).
+
+- **Connect, test, and execute queries using Azure CLI**
+
+  Azure Database for MySQL Flexible Server now provides an improved developer experience that allows customers to connect to their servers and execute queries by using the Azure CLI with the `az mysql flexible-server connect` and `az mysql flexible-server execute` commands. [Learn more](connect-azure-cli.md#view-all-the-arguments).
+
+- **Fixes for provisioning failures for server creates in virtual network with private access**
+
+ All the provisioning failures caused when creating a server in virtual network are fixed. With this release, users can successfully create flexible servers with private access every time.
+
+## March 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **MySQL 8.0.21 released**
+
+ MySQL 8.0.21 is now available in Flexible Server in all major [Azure regions](overview.md#azure-regions). Customers can use the Azure portal, the Azure CLI, or Azure Resource Manager templates to provision the MySQL 8.0.21 release. [Learn more](quickstart-create-server-portal.md#create-an-azure-database-for-mysql-flexible-server).
+
+- **Support for Availability zone placement during server creation released**
+
+ Customers can now specify their preferred Availability zone at the time of server creation. This functionality allows customers to collocate their applications hosted on Azure VM, virtual machine scale set, or AKS and database in the same Availability zones to minimize database latency and improve performance. [Learn more](quickstart-create-server-portal.md#create-an-azure-database-for-mysql-flexible-server).
+
+- **Performance fixes for issues when running flexible server in virtual network with private access**
+
+ Before this release, the performance of flexible server degraded significantly when running in virtual network configuration. This release includes the fixes for the issue, which will allow users to see improved performance on flexible server in virtual network.
+
+- **Known issues**
+
+  - SSL/TLS 1.2 is enforced and cannot be disabled. (No workarounds)
+ - There are intermittent provisioning failures for servers provisioned in a VNet. The workaround is to retry the server provisioning until it succeeds.
+
+## February 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Additional IOPS feature released**
+
+ Azure Database for MySQL - Flexible Server supports provisioning additional [IOPS](concepts-compute-storage.md#iops) independent of the storage provisioned. Customers can use this feature to increase or decrease the number of IOPS anytime based on their workload requirements.
+
+- **Known issues**
+
+  The performance of Azure Database for MySQL – Flexible Server degrades with private access virtual network isolation (No workaround).
+
+## January 2021
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Up to 10 read replicas for MySQL - Flexible Server**
+
+  Flexible Server now supports asynchronous replication of data from one Azure Database for MySQL server (the 'source') to up to 10 Azure Database for MySQL servers (the 'replicas') in the same region. This functionality enables read-heavy workloads to scale out and be balanced across replica servers according to a user's preferences. [Learn more](concepts-read-replicas.md).
+
+## Contacts
+
+If you have questions about or suggestions for working with Azure Database for MySQL, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597982-azure-database-for-mysql).
+
+## Next steps
+
+- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).
+- Browse the [public documentation](index.yml) for Azure Database for MySQL – Flexible Server.
+- Review details on [troubleshooting common migration errors](../howto-troubleshoot-common-errors.md).
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/select-right-deployment-type.md
The main differences between these options are listed in the following table:
|:-|:-|:-|:-|
| MySQL Version Support | 5.6, 5.7 & 8.0| 5.7 & 8.0 | Any version|
| Compute scaling | Supported (Scaling from and to Basic tier is not supported)| Supported | Supported|
-| Storage size | 5 GiB to 16 TiB| 5 GiB to 16 TiB | 32 GiB to 32,767 GiB|
+| Storage size | 5 GiB to 16 TiB| 20 GiB to 16 TiB | 32 GiB to 32,767 GiB|
| Online Storage scaling | Supported| Supported| Not supported|
-| Auto storage scaling | Supported| Not supported in preview| Not supported|
+| Auto storage scaling | Supported| Supported| Not supported|
| Additional IOPs scaling | Not Supported| Supported| Not supported| | Network Connectivity | - Public endpoints with server firewall.<br/> - Private access with Private Link support.|- Public endpoints with server firewall.<br/> - Private access with Virtual Network integration.| - Public endpoints with server firewall.<br/> - Private access with Private Link support.| | Service-level agreement (SLA) | 99.99% availability SLA |No SLA in preview| 99.99% using Availability Zones|
The main differences between these options are listed in the following table:
| High availability | Built-in HA within single availability zone| Built-in HA within and across availability zones | Custom managed using clustering, replication, etc.| | Zone redundancy | Not supported | Supported | Supported| | Zone placement | Not supported | Supported | Supported|
-| Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Not available in preview | Managed by end users |
+| Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Supported with [Data-in Replication](./flexible-server/concepts-data-in-replication.md) | Managed by end users |
| Read replicas | Supported (up to 5 replicas)| Supported (up to 10 replicas)| Managed by end users | | Backup | Automated with 7-35 days retention | Automated with 1-35 days retention | Managed by end users | | Monitoring database operations | Supported | Supported | Managed by end users |
-| Disaster recovery | Supported with geo-redundant backup storage and cross region read replicas | Not supported in preview| Custom Managed with replication technologies |
+| Disaster recovery | Supported with geo-redundant backup storage and cross region read replicas | Coming soon| Custom Managed with replication technologies |
| Query Performance Insights | Supported | Not available in preview| Managed by end users |
-| Reserved Instance Pricing | Supported | Not available in preview | Supported |
+| Reserved Instance Pricing | Supported | Coming soon | Supported |
| Azure AD Authentication | Supported | Not available in preview | Not Supported| | Data Encryption at rest | Supported with customer managed keys | Supported with service managed keys | Not Supported|
-| SSL/TLS | Enabled by default with support for TLS v1.2, 1.1 and 1.0 | Enforced with TLS v1.2 | Supported with TLS v1.2, 1.1 and 1.0 |
+| SSL/TLS | Enabled by default with support for TLS v1.2, 1.1 and 1.0 | Enabled by default with support for TLS v1.2, 1.1 and 1.0| Supported with TLS v1.2, 1.1 and 1.0 |
| Fleet Management | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported with Azure CLI, PowerShell, REST, and Azure Resource Manager | Supported for VMs with Azure CLI, PowerShell, REST, and Azure Resource Manager | ## Business motivations for choosing PaaS or IaaS
openshift Configure Azure Ad Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/configure-azure-ad-cli.md
az ad app permission add \
Applications registered in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who authenticate successfully. Azure AD allows tenant administrators and developers to restrict an app to a specific set of users or security groups in the tenant.
-Follow the instructions on the Azure Active Directory documentation to [assign users and groups to the app](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md#app-registration).
+Follow the instructions on the Azure Active Directory documentation to [assign users and groups to the app](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md).
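If you script this instead of using the portal, requiring assignment is typically a matter of setting the `appRoleAssignmentRequired` property on the app's service principal. The following Azure CLI sketch is an assumption, not part of the linked procedure; the client ID is a placeholder and the generic `--set` argument must be supported by your CLI version:

```azurecli
# Sketch: require user/group assignment on the app's service principal (placeholder client ID).
az ad sp update --id <application-client-id> --set appRoleAssignmentRequired=true
```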
## Configure OpenShift OpenID authentication
openshift Configure Azure Ad Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/configure-azure-ad-ui.md
Navigate to **Token configuration (preview)** and click on **Add optional claim*
Applications registered in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who authenticate successfully. Azure AD allows tenant administrators and developers to restrict an app to a specific set of users or security groups in the tenant.
-Follow the instructions on the Azure Active Directory documentation to [assign users and groups to the app](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md#app-registration).
+Follow the instructions on the Azure Active Directory documentation to [assign users and groups to the app](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md).
## Configure OpenShift OpenID authentication
remote-rendering Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/security/security.md
Since the User Credentials aren't stored on the device (or in this case even ent
In the Unity Editor, when AAD Auth is active, you will need to authenticate every time you launch the application. On device, the authentication step will happen the first time and only be required again when the token expires or is invalidated.
-1. Add the **AADAuthentication** component to the **RemoteRenderingCoordinator** GameObject.
+1. Add the **AAD Authentication** component to the **RemoteRenderingCoordinator** GameObject.
![AAD auth component](./media/azure-active-directory-auth-component.png)
+> [!NOTE]
+> If you are using the completed project from the [ARR samples repository](https://github.com/Azure/azure-remote-rendering), make sure to enable the **AAD Authentication** component by clicking the checkbox next to its title.
+ 1. Fill in your values for the Client ID and the Tenant ID. These values can be found in your App Registration's Overview Page: * **Active Directory Application Client ID** is the *Application (client) ID* found in your AAD app registration (see image below).
In the Unity Editor, when AAD Auth is active, you will need to authenticate ever
![Screenshot that highlights the Application (client) ID and Directory (tenant) ID.](./media/app-overview-data.png) 1. Press Play in the Unity Editor and consent to running a session.
- Since the **AADAuthentication** component has a view controller, its automatically hooked up to display a prompt after the session authorization modal panel.
+   Since the **AAD Authentication** component has a view controller, it's automatically hooked up to display a prompt after the session authorization modal panel.
1. Follow the instructions found in the panel to the right of the **AppMenu**. You should see something similar to this: ![Illustration that shows the instruction panel that appears to the right of the AppMenu.](./media/device-flow-instructions.png)
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-dotnet-sdk-migration-version-11.md
Field definitions are streamlined: [SearchableField](/dotnet/api/azure.search.do
| Version 10 | Version 11 equivalent | ||--|
-| [DocumentsOperationsExtensions.SearchAsync](/dotnet/api/microsoft.azure.search.documentsoperationsextensions.searchasync) | [SearchClient.SearchAsync](/dotnet/api/azure.search.documents.searchclient.searchasyn) |
+| [DocumentsOperationsExtensions.SearchAsync](/dotnet/api/microsoft.azure.search.documentsoperationsextensions.searchasync) | [SearchClient.SearchAsync](/dotnet/api/azure.search.documents.searchclient.searchasync) |
| [DocumentSearchResult](/dotnet/api/microsoft.azure.search.models.documentsearchresult-1) | [SearchResult](/dotnet/api/azure.search.documents.models.searchresult-1) or [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1), depending on whether the result is a single document or multiple. | | [DocumentSuggestResult](/dotnet/api/microsoft.azure.search.models.documentsuggestresult-1) | [SuggestResults](/dotnet/api/azure.search.documents.models.suggestresults-1) | | [SearchParameters](/dotnet/api/microsoft.azure.search.models.searchparameters) | [SearchOptions](/dotnet/api/azure.search.documents.searchoptions) |
-| [SuggestParameters](/dotnet/api/microsoft.azure.search.models.suggestparametersparameters) | [SuggestOptions](/dotnet/api/azure.search.documents.suggestoptions) |
+| [SuggestParameters](/dotnet/api/microsoft.azure.search.models.suggestparameters) | [SuggestOptions](/dotnet/api/azure.search.documents.suggestoptions) |
| [SearchParameters.Filter](/dotnet/api/microsoft.azure.search.models.searchparameters.filter) | [SearchFilter](/dotnet/api/azure.search.documents.searchfilter) (a new class for constructing OData filter expressions) | ### JSON serialization
In terms of service version updates, where code changes in version 11 relate to
+ [Tutorial: Add search to web apps](tutorial-csharp-overview.md) + [Azure.Search.Documents package](https://www.nuget.org/packages/Azure.Search.Documents/) + [Samples on GitHub](https://github.com/azure/azure-sdk-for-net/tree/Azure.Search.Documents_11.0.0/sdk/search/Azure.Search.Documents/samples)
-+ [Azure.Search.Document API reference](/dotnet/api/overview/azure/search.documents-readme)
++ [Azure.Search.Document API reference](/dotnet/api/overview/azure/search.documents-readme)
security-center Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-databases-usage.md
-# Respond to alerts from Azure Defender for open-source relational databases
+# Enable Azure Defender for open-source relational databases and respond to alerts
Azure Defender detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases for the following
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/release-notes.md
We are excited to announce that 8.0 release of the Service Fabric runtime has st
| Release date | Release | More info | |||| | April 08, 2021 | [Azure Service Fabric 8.0](https://techcommunity.microsoft.com/t5/azure-service-fabric/azure-service-fabric-8-0-release/ba-p/2260016) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_80.md)|
+| May 17, 2021 | [Azure Service Fabric 8.0 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric/azure-service-fabric-8-0-first-refresh-release/ba-p/2362556) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_80CU1.md) |
+| June 17, 2021 | [Azure Service Fabric 8.0 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric/azure-service-fabric-8-0-second-refresh-release/ba-p/2462979) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_80CU2.md) |
## Previous versions
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
+| 8.0 CU2 | 7.1 CU10 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 8.0 CU1 | 7.1 CU10 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 8.0 RTO | 7.1 CU10 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 7.2 CU7 | 7.0 CU9 | 7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2021 |
spring-cloud How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-permissions.md
The developer role includes permissions to restart apps and see their log stream
Follow these steps to start defining a role.
-1. In the Azure portal, open the subscription and resource group where you want the custom role to be assignable.
+1. In the Azure portal, open the subscription where you want the custom role to be assignable.
2. Open **Access control (IAM)**. 3. Click **+ Add**. 4. Click **Add custom role**.
+#### [Portal](#tab/Azure-portal)
5. Click **Next**. ![Create custom role](media/spring-cloud-permissions/create-custom-role.png)
From **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
9. Click **Add**.
+#### [JSON](#tab/JSON)
+5. Click **Next**.
+
+6. Click the **JSON** tab.
+
+7. Click **Edit**, and delete the default text.
+
+ ![Edit custom role](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+
+8. Paste the following JSON to define the Developer role.
+
+ ![Create custom role](media/spring-cloud-permissions/create-custom-role-json.png)
+
+```json
+{
+ "properties": {
+ "roleName": "Developer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/domains/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/certificates/read",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+9. Click **Save**.
++ 10. Review the permissions. 11. Click **Review and create**.
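If you prefer to manage roles from scripts rather than the portal, the same definition can be registered with the Azure CLI. This is a sketch only: `developer-role.json` is a placeholder file name, and the CLI expects the role properties (name, actions, assignable scopes) at the top level of the JSON rather than wrapped in `properties` as shown above, so adapt the file accordingly:

```azurecli
# Sketch: register the custom role from a local JSON file (placeholder file name),
# then confirm it exists.
az role definition create --role-definition @developer-role.json
az role definition list --custom-role-only true --query "[?roleName=='Developer']"
```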
From **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
## Define DevOps engineer role This procedure defines a role with permissions to deploy, test, and restart Azure Spring Cloud apps.
-1. Repeat the procedure to navigate subscription, resource group,and access Access control (IAM).
+1. Repeat the procedure to navigate to the subscription and open **Access control (IAM)**.
+
+#### [Portal](#tab/Azure-portal)
+ 2. Select the permissions for the DevOps engineer role: From **Microsoft.AppPlatform/Spring**, select:
From **Microsoft.AppPlatform/skus**, select:
5. Click **Review and create**.
+#### [JSON](#tab/JSON)
+
+2. Click **Next**.
+
+3. Click the **JSON** tab.
+
+4. Click **Edit**, and delete the default text.
+
+ ![Edit custom role](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+
+5. Paste the following JSON to define the DevOps engineer role.
+
+```json
+{
+ "properties": {
+ "roleName": "DevOps engineer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read",
+ "Microsoft.AppPlatform/skus/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+6. Review the permissions.
+
+7. Click **Review and create**.
+ ## Define Ops - Site Reliability Engineering role This procedure defines a role with permissions to deploy, test, and restart Azure Spring Cloud apps.
-1. Repeat the procedure to navigate subscription, resource group,and access Access control (IAM).
+1. Repeat the procedure to navigate to the subscription and open **Access control (IAM)**.
+#### [Portal](#tab/Azure-portal)
2. Select the permissions for the Ops - Site Reliability Engineering role:
From **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
5. Click **Review and create**.
+#### [JSON](#tab/JSON)
+
+2. Click **Next**.
+
+3. Click the **JSON** tab.
+
+4. Click **Edit**, and delete the default text.
+
+ ![Edit custom role](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+
+5. Paste the following JSON to define the Ops - Site Reliability Engineering role.
+
+```json
+{
+ "properties": {
+ "roleName": "Ops - Site Reliability Engineering",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+6. Review the permissions.
+
+7. Click **Review and create**.
+ ## Define Azure Pipelines/Provisioning role
-This Jenkins/GitHub Actions role can create and configure everything in Azure Spring Cloud and apps with a service instance. This role is for releasing or deploying code.
-1. Repeat the procedure to navigate subscription, resource group, and access Access control (IAM).
+This Jenkins/GitHub Actions role can create and configure everything in Azure Spring Cloud and apps within a service instance. This role is for releasing or deploying code.
+
+1. Repeat the procedure to navigate to the subscription and open **Access control (IAM)**.
+#### [Portal](#tab/Azure-portal)
2. Open the **Permissions** options.
From **Microsoft.AppPlatform/skus**, select:
5. Review the permissions. 6. Click **Review and create**.--
+#### [JSON](#tab/JSON)
+
+2. Click **Next**.
+
+3. Click the **JSON** tab.
+
+4. Click **Edit**, and delete the default text.
+
+ ![Edit custom role](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+
+5. Paste the following JSON to define the Azure Pipelines/Provisioning role.
+
+```json
+{
+ "properties": {
+ "roleName": "Azure Pipelines/Provisioning",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+6. Click **Add**.
+
+7. Review the permissions.
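Once a custom role such as **Azure Pipelines/Provisioning** exists, it is typically granted to the pipeline's service principal. A hedged Azure CLI sketch, with placeholder principal ID and scope:

```azurecli
# Sketch: assign the custom role to the pipeline's service principal (placeholder values).
az role assignment create \
  --assignee <service-principal-id> \
  --role "Azure Pipelines/Provisioning" \
  --scope /subscriptions/<subscription-id>
```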
+ ## See also * [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md)
static-web-apps Publish Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-devops.md
In this tutorial, you learn to:
app_location: '/' api_location: 'api' output_location: ''
- env:
azure_static_web_apps_api_token: $(deployment_token) ```
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-manage.md
Previously updated : 03/27/2021 Last updated : 06/07/2021
storage Storage Secure Access Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-secure-access-application.md
To complete this tutorial you must have completed the previous Storage tutorial:
In this part of the tutorial series, SAS tokens are used for accessing the thumbnails. In this step, you set the public access of the *thumbnails* container to `off`.
-```bash
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+$blobStorageAccount="<blob_storage_account>"
+
+# Disallow anonymous public access to blob data in the storage account.
+Set-AzStorageAccount -ResourceGroupName myResourceGroup -Name $blobStorageAccount -AllowBlobPublicAccess $false
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
blobStorageAccount="<blob_storage_account>" blobStorageAccountKey=$(az storage account keys list -g myResourceGroup \
az storage container set-permission \
--public-access off ```
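To confirm the change before continuing, you can check the container's access level. A quick sketch that reuses the variables defined above:

```azurecli
# Sketch: confirm anonymous public access is disabled on the thumbnails container.
az storage container show-permission \
    --account-name $blobStorageAccount \
    --account-key $blobStorageAccountKey \
    --name thumbnails
```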
-```powershell
-$blobStorageAccount="<blob_storage_account>"
-
-blobStorageAccountKey=$(az storage account keys list -g myResourceGroup `
- --account-name $blobStorageAccount --query [0].value --output tsv)
-
-az storage container set-permission `
- --account-name $blobStorageAccount `
- --account-key $blobStorageAccountKey `
- --name thumbnails `
- --public-access off
-```
+ ## Configure SAS tokens for thumbnails
In this example, the source code repository uses the `sasTokens` branch, which h
In the following command, `<web-app>` is the name of your web app.
-```bash
+```azurecli
az webapp deployment source delete --name <web-app> --resource-group myResourceGroup az webapp deployment source config --name <web_app> \
storage Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-upload-process-images.md
To install and use the CLI locally, run Azure CLI version 2.0.4 or later. Run `a
## Create a resource group
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
- The following example creates a resource group named `myResourceGroup`.
-```azurecli
-az group create --name myResourceGroup --location southeastasia
-```
+# [PowerShell](#tab/azure-powershell)
+
+Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
```powershell
+New-AzResourceGroup -Name myResourceGroup -Location southeastasia
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+```azurecli
az group create --name myResourceGroup --location southeastasia ``` ++ ## Create a storage account
-The sample uploads images to a blob container in an Azure storage account. A storage account provides a unique namespace to store and access your Azure storage data objects. Create a storage account in the resource group you created by using the [az storage account create](/cli/azure/storage/account) command.
+The sample uploads images to a blob container in an Azure storage account. A storage account provides a unique namespace to store and access your Azure storage data objects.
> [!IMPORTANT] > In part 2 of the tutorial, you use Azure Event Grid with Blob storage. Make sure to create your storage account in an Azure region that supports Event Grid. For a list of supported regions, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all). In the following command, replace your own globally unique name for the Blob storage account where you see the `<blob_storage_account>` placeholder.
+# [PowerShell](#tab/azure-powershell)
+
+Create a storage account in the resource group you created by using the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command.
+
+```powershell
+$blobStorageAccount="<blob_storage_account>"
+
+New-AzStorageAccount -ResourceGroupName myResourceGroup -Name $blobStorageAccount -SkuName Standard_LRS -Location southeastasia -Kind StorageV2 -AccessTier Hot
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a storage account in the resource group you created by using the [az storage account create](/cli/azure/storage/account) command.
+ ```azurecli blobStorageAccount="<blob_storage_account>"
az storage account create --name $blobStorageAccount --location southeastasia \
--resource-group myResourceGroup --sku Standard_LRS --kind StorageV2 --access-tier hot ``` ++
+## Create Blob storage containers
+
+The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The *images* container is where the app uploads full-resolution images. In a later part of the series, an Azure function app uploads resized image thumbnails to the *thumbnails* container.
+
+The *images* container's public access is set to `off`. The *thumbnails* container's public access is set to `container`. The `container` public access setting permits users who visit the web page to view the thumbnails.
+
+# [PowerShell](#tab/azure-powershell)
+
+Get the storage account key by using the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) command. Then, use this key to create two containers with the [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) command.
+ ```powershell
-$blobStorageAccount="<blob_storage_account>"
+$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -Name $blobStorageAccount)[0].Value
+$blobStorageContext = New-AzStorageContext -StorageAccountName $blobStorageAccount -StorageAccountKey $blobStorageAccountKey
-az storage account create --name $blobStorageAccount --location southeastasia `
- --resource-group myResourceGroup --sku Standard_LRS --kind StorageV2 --access-tier hot
+New-AzStorageContainer -Name images -Context $blobStorageContext
+New-AzStorageContainer -Name thumbnails -Permission Container -Context $blobStorageContext
```
-## Create Blob storage containers
-
-The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The *images* container is where the app uploads full-resolution images. In a later part of the series, an Azure function app uploads resized image thumbnails to the *thumbnails* container.
+# [Azure CLI](#tab/azure-cli)
Get the storage account key by using the [az storage account keys list](/cli/azure/storage/account/keys) command. Then, use this key to create two containers with the [az storage container create](/cli/azure/storage/container) command.
-The *images* container's public access is set to `off`. The *thumbnails* container's public access is set to `container`. The `container` public access setting permits users who visit the web page to view the thumbnails.
-
-```bash
+```azurecli
blobStorageAccountKey=$(az storage account keys list -g myResourceGroup \ -n $blobStorageAccount --query "[0].value" --output tsv)
az storage container create --name thumbnails \
--account-key $blobStorageAccountKey --public-access container ```
-```powershell
-$blobStorageAccountKey=$(az storage account keys list -g myResourceGroup `
- -n $blobStorageAccount --query "[0].value" --output tsv)
-
-az storage container create --name images `
- --account-name $blobStorageAccount `
- --account-key $blobStorageAccountKey
-
-az storage container create --name thumbnails `
- --account-name $blobStorageAccount `
- --account-key $blobStorageAccountKey --public-access container
-```
+ Make a note of your Blob storage account name and key. The sample app uses these settings to connect to the storage account to upload the images.
Make a note of your Blob storage account name and key. The sample app uses these
An [App Service plan](../../app-service/overview-hosting-plans.md) specifies the location, size, and features of the web server farm that hosts your app.
-Create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan) command.
- The following example creates an App Service plan named `myAppServicePlan` in the **Free** pricing tier:
-```azurecli
-az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku Free
-```
+# [PowerShell](#tab/azure-powershell)
+
+Create an App Service plan with the [New-AzAppServicePlan](/powershell/module/az.websites/new-azappserviceplan) command.
```powershell
+New-AzAppServicePlan -ResourceGroupName myResourceGroup -Name myAppServicePlan -Location southeastasia -Tier "Free"
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan) command.
+
+```azurecli
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku Free ``` ++ ## Create a web app
-The web app provides a hosting space for the sample app code that's deployed from the GitHub sample repository. Create a [web app](../../app-service/overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp) command.
+The web app provides a hosting space for the sample app code that's deployed from the GitHub sample repository.
In the following command, replace `<web_app>` with a unique name. Valid characters are `a-z`, `0-9`, and `-`. If `<web_app>` isn't unique, you get the error message: *Website with given name `<web_app>` already exists.* The default URL of the web app is `https://<web_app>.azurewebsites.net`.
-```azurecli
-webapp="<web_app>"
+# [PowerShell](#tab/azure-powershell)
-az webapp create --name $webapp --resource-group myResourceGroup --plan myAppServicePlan
-```
+Create a [web app](../../app-service/overview.md) in the `myAppServicePlan` App Service plan with the [New-AzWebApp](/powershell/module/az.websites/new-azwebapp) command.
```powershell $webapp="<web_app>"
+New-AzWebApp -ResourceGroupName myResourceGroup -Name $webapp -AppServicePlan myAppServicePlan
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a [web app](../../app-service/overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp) command.
+
+```azurecli
+webapp="<web_app>"
+ az webapp create --name $webapp --resource-group myResourceGroup --plan myAppServicePlan ``` ++ ## Deploy the sample app from the GitHub repository # [.NET v12 SDK](#tab/dotnet)
az webapp deployment source config --name $webapp --resource-group myResourceGro
# [.NET v12 SDK](#tab/dotnet)
-The sample web app uses the [Azure Storage APIs for .NET](/dotnet/api/overview/azure/storage) to upload images. Storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) command.
+The sample web app uses the [Azure Storage APIs for .NET](/dotnet/api/overview/azure/storage) to upload images. Storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) or [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp) command.
```azurecli az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
az webapp config appsettings set --name $webapp --resource-group myResourceGroup
# [JavaScript v12 SDK](#tab/javascript)
-The sample web app uses the [Azure Storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage) to upload images. The storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) command.
+The sample web app uses the [Azure Storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage) to upload images. The storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) or [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp) command.
```azurecli az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/versioning-enable.md
Previously updated : 02/09/2021 Last updated : 06/07/2021
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-create-workspace.md
We are going to use a small 100K row sample dataset of NYX Taxi Cab data for man
* Select the container named **users (Primary)**. * Select **Upload** and select the `NYCTripSmall.parquet` file you downloaded.
-One the parquet file is uploaded it is available through two equivalent URIs:
+Once the parquet file is uploaded, it is available through two equivalent URIs:
* `https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet` * `abfss://users@contosolake.dfs.core.windows.net/NYCTripSmall.parquet`
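If you prefer to script the upload instead of using the portal, the following Azure CLI sketch assumes the `contosolake` account and `users` container from this walkthrough, a local copy of the file, and that your signed-in identity has data-plane access to the container:

```azurecli
# Sketch: upload the sample parquet file to the workspace's ADLS Gen2 container.
az storage fs file upload \
    --account-name contosolake \
    --file-system users \
    --source ./NYCTripSmall.parquet \
    --path NYCTripSmall.parquet \
    --auth-mode login
```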
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
You can use the following combinations of authorization and Azure Storage types:
| Authorization type | Blob Storage | ADLS Gen1 | ADLS Gen2 | | - | | -- | -- |
-| [SAS](?tabs=shared-access-signature#supported-storage-authorization-types) | Supported\* | Not supported | Supported\* |
+| [SAS](?tabs=shared-access-signature#supported-storage-authorization-types) | Supported | Not supported | Supported |
| [Managed Identity](?tabs=managed-identity#supported-storage-authorization-types) | Supported | Supported | Supported |
-| [User Identity](?tabs=user-identity#supported-storage-authorization-types) | Supported\* | Supported\* | Supported\* |
-
-\* SAS token and Azure AD Identity can be used to access storage that is not protected with firewall.
+| [User Identity](?tabs=user-identity#supported-storage-authorization-types) | Supported | Supported | Supported |
## Firewall protected storage
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-data-types.md
In this article, you'll find recommendations for defining table data types in Sy
## Data types
-Synapse SQL Dedicated Pool supports the most commonly used data types. For a list of the supported data types, see [data types](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#DataTypes&preserve-view=true) in the CREATE TABLE statement. For Synapse SQL Serverless please refer to article [Query storage files with serverless SQL pool in Azure Synapse Analytics](/sql/query-data-storage) and [How to use OPENROWSET using serverless SQL pool in Azure Synapse Analytics](/sql/develop-openrowset)
+Synapse SQL Dedicated Pool supports the most commonly used data types. For a list of the supported data types, see [data types](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#DataTypes&preserve-view=true) in the CREATE TABLE statement. For Synapse SQL Serverless, see [Query storage files with serverless SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql/query-data-storage) and [How to use OPENROWSET using serverless SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql/develop-openrowset).
## Minimize row length
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Synapse SQL will return `NULL` instead of the values that you see in the transac
The value specified in the `WITH` clause doesn't match the underlying Cosmos DB types in analytical storage and cannot be implicitly converted. Use the `VARCHAR` type in the schema.
+### Performance issues
+
+If you are experiencing unexpected performance issues, make sure that you have applied best practices, such as:
+- Make sure that you have placed the client application, serverless pool, and Cosmos DB analytical storage in [the same region](best-practices-serverless-sql-pool.md#colocate-your-cosmosdb-analytical-storage-and-serverless-sql-pool).
+- Make sure that you are using [Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) when you filter your data using string predicates.
+- If you have repeating queries that might be cached, try to use [CETAS to store query results in Azure Data Lake Storage](best-practices-serverless-sql-pool.md#use-cetas-to-enhance-query-performance-and-joins).
+ ## Delta Lake Delta Lake support is currently in public preview in serverless SQL pools. There are some known issues that you might see during the preview.
Resolving Delta logs on path 'https://....core.windows.net/.../' failed with err
Try to update your Delta Lake data set using Apache Spark pools and use some value (empty string or `"null"`) instead of `null` in the partitioning column.
+## Constraints
+
+There are some general system constraints that may affect your workload:
+
+| Property | Limitation |
+|||
+| Max number of Synapse workspaces per subscription | 20 |
+| Max number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool) |
+| Max number of databases synchronized from Apache Spark pool | Not limited |
+| Max number of database objects per database | The sum of the number of all objects in a database cannot exceed 2,147,483,647 (see [limitations in SQL Server database engine](https://docs.microsoft.com/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
+| Max identifier length (in characters) | 128 (see [limitations in SQL Server database engine](https://docs.microsoft.com/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
+| Max query duration | 30 min |
+| Max size of the result set | 80 GB (shared between all currently executing concurrent queries) |
+| Max concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. |
+ ## Next steps Review the following articles to learn more about how to use serverless SQL pool:
time-series-insights How To Monitor Tsi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-monitor-tsi.md
Title: 'Monitoring Time Series Insights | Microsoft Docs' description: Monitor Time Series Insights for availability, performance, and operation.---++++ Last updated 12/10/2020- # Monitoring Time Series Insights
time-series-insights How To Plan Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-plan-your-environment.md
 Title: 'Plan your Gen2 environment - Azure Time Series Insights | Microsoft Docs' description: Best practices to configure, manage, plan, and deploy your Azure Time Series Insights Gen2 environment.---++++
time-series-insights Overview Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/overview-use-cases.md
 Title: 'Gen2 use cases - Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about Azure Time Series Insights Gen2 use cases.---++++
time-series-insights Time Series Insights Diagnose And Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-diagnose-and-solve-problems.md
Title: 'Diagnose, troubleshoot, and solve issues - Azure Time Series Insights'
description: This article describes how to diagnose, troubleshoot, and solve common issues in your Azure Time Series Insights environment. ----++++ Last updated 09/29/2020
time-series-insights Time Series Insights Environment Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-environment-mitigate-latency.md
Title: 'How to monitor and reduce throttling - Azure Time Series Insights | Micr
description: Learn how to monitor, diagnose, and mitigate performance issues that cause latency and throttling in Azure Time Series Insights. ----++++ ms.devlang: csharp
Alerts can help you to diagnose and mitigate latency issues occurring in your en
|Metric |Description | |||
- |**Ingress Received Bytes** | Count of raw bytes read from event sources. Raw count usually includes the property name and value. |
+ |**Ingress Received Bytes** | Count of raw bytes read from event sources. Raw count usually includes the property name and value. |
|**Ingress Received Invalid Messages** | Count of invalid messages read from all Azure Event Hubs or Azure IoT Hub event sources. | |**Ingress Received Messages** | Count of messages read from all Event Hubs or IoT Hubs event sources. | |**Ingress Stored Bytes** | Total size of events stored and available for query. Size is computed only on the property value. |
Alerts can help you to diagnose and mitigate latency issues occurring in your en
## Throttling and ingress management -- If you're being throttled, a value for the *Ingress Received Message Time Lag* will be registered informing you about how many seconds behind your Azure Time Series Insights environment are from the actual time the message hits the event source (excluding indexing time of appx. 30-60 seconds).
+- If you're being throttled, a value for the *Ingress Received Message Time Lag* will be registered, informing you how many seconds your Azure Time Series Insights environment is behind the actual time the message hits the event source (excluding indexing time of approximately 30-60 seconds).
- *Ingress Received Message Count Lag* should also have a value, allowing you to determine how many messages behind you are. The easiest way to get caught up is to increase your environment's capacity to a size that will enable you to overcome the difference.
+    *Ingress Received Message Count Lag* should also have a value, allowing you to determine how many messages you are behind. The easiest way to get caught up is to increase your environment's capacity to a size that will enable you to overcome the difference.
For example, if your S1 environment is demonstrating lag of 5,000,000 messages, you might increase the size of your environment to six units for around a day to get caught up. You could increase even further to catch up faster. The catch-up period is a common occurrence when initially provisioning an environment, particularly when you connect it to an event source that already has events in it or when you bulk upload lots of historical data. - Another technique is to set an **Ingress Stored Events** alert >= a threshold slightly below your total environment capacity for a period of 2 hours. This alert can help you understand if you are constantly at capacity, which indicates a high likelihood of latency.
- For example, if you have three S1 units provisioned (or 2100 events per minute ingress capacity), you can set an **Ingress Stored Events** alert for >= 1900 events for 2 hours. If you are constantly exceeding this threshold, and therefore, triggering your alert, you are likely under-provisioned.
+ For example, if you have three S1 units provisioned (or 2100 events per minute ingress capacity), you can set an **Ingress Stored Events** alert for >= 1900 events for 2 hours. If you are constantly exceeding this threshold, and therefore, triggering your alert, you are likely under-provisioned.
- If you suspect you are being throttled, you can compare your **Ingress Received Messages** with your event sourceΓÇÖs egressed messages. If ingress into your Event Hub is greater than your **Ingress Received Messages**, your Azure Time Series Insights are likely being throttled.
time-series-insights Time Series Insights Environment Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-environment-planning.md
Title: 'Plan your Gen1 environment - Azure Time Series Insights | Microsoft Docs
description: Best practices for preparing, configuring, and deploying your Azure Time Series Insights Gen1 environment. ---++++ ms.devlang: csharp
This article describes how to plan your Azure Time Series Insights Gen1 environm
## Best practices
-To get started with Azure Time Series Insights, it's best if you know how much data you expect to push by the minute and how long you need to store your data.
+To get started with Azure Time Series Insights, it's best if you know how much data you expect to push by the minute and how long you need to store your data.
For more information about capacity and retention for both Azure Time Series Insights SKUs, read [Azure Time Series Insights pricing](https://azure.microsoft.com/pricing/details/time-series-insights/).
It's important to ensure that the way you send events to Azure Time Series Insig
A *reference dataset* is a collection of items that augment the events from your event source. The Azure Time Series Insights ingress engine joins each event from your event source with the corresponding data row in your reference dataset. The augmented event is then available for query. The join is based on the **Primary Key** columns that are defined in your reference dataset. > [!NOTE]
-> Reference data isn't joined retroactively. Only current and future ingress data is matched and joined to the reference dataset after it's configured and uploaded. If you plan to send a large amount of historical data to Azure Time Series Insights and don't first upload or create reference data in Azure Time Series Insights, you might have to redo your work (hint: not fun).
+> Reference data isn't joined retroactively. Only current and future ingress data is matched and joined to the reference dataset after it's configured and uploaded. If you plan to send a large amount of historical data to Azure Time Series Insights and don't first upload or create reference data in Azure Time Series Insights, you might have to redo your work (hint: not fun).
To learn more about how to create, upload, and manage your reference data in Azure Time Series Insights, read our [Reference dataset documentation](time-series-insights-add-reference-data-set.md).
time-series-insights Time Series Insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-get-started.md
Title: 'Create an environment - Azure Time Series Insights | Microsoft Docs'
-description: Learn how to use the Azure portal to create a new Azure Time Series Insights environment.
+description: Learn how to use the Azure portal to create a new Azure Time Series Insights environment.
----++++ -+ Last updated 09/29/2020
Follow these steps to create an environment:
Location | Nearest your event source | Preferably, choose the same data center location that contains your event source data, in an effort to avoid added cross-region and cross-zone bandwidth costs and added latency when moving data out of the region. Pricing tier | S1 | Choose the throughput needed. For lowest costs and starter capacity, select S1. Capacity | 1 | Capacity is the multiplier applied to the ingress rate, storage capacity, and cost associated with the selected SKU. You can change the capacity of an environment after creation. For lowest costs, select a capacity of 1.
-
+ 1. Select **Create** to begin the provisioning process. It may take a couple of minutes. 1. To monitor the deployment process, select the **Notifications** symbol (bell icon).
time-series-insights Time Series Insights How To Scale Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-how-to-scale-your-environment.md
Title: 'How to scale your environment - Azure Time Series Insights| Microsoft Do
description: Learn how to scale your Azure Time Series Insights environment using the Azure portal. ---++++ ms.devlang: csharp
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
Title: 'Manage reference data in GA environments using C# - Azure Time Series In
description: Learn how to manage reference data for your GA environment by creating a custom application written in C#. ---++++ ms.devlang: csharp
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
Title: 'Manage your environment using Azure Resource Manager templates - Azure T
description: Learn how to manage your Azure Time Series Insights environment programmatically using Azure Resource Manager. ---++++ ms.devlang: csharp
time-series-insights Time Series Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-overview.md
Title: 'Overview: What is Azure Time Series Insights? - Azure Time Series Insigh
description: Introduction to Azure Time Series Insights, a new service for time series data analytics and IoT solutions. ---++++ Last updated 09/30/2020
time-series-insights Time Series Insights Parameterized Urls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-parameterized-urls.md
Title: 'Share custom views with parameterized URLs - Azure Time Series Insights
description: Learn how to create parameterized URLs to easily share customized Explorer views in Azure Time Series Insights. ---++++ Last updated 10/02/2020
The `timeSeriesDefinitions=<collection of term objects>` parameter specifies pre
| **useSum** | `true` | An optional parameter that specifies using sum for your measure. | > [!NOTE]
-> If `Events` is the selected **useSum** measure, count is selected by default.
+> If `Events` is the selected **useSum** measure, count is selected by default.
> If `Events` is not selected, average is selected by default. | * The `multiChartStack=<true/false>` key-value pair enables stacking in the chart.
-* The `multiChartSameScale=<true/false>` key-value pair enables the same Y-axis scale across terms within an optional parameter.
-* The `timeBucketUnit=<Unit>&timeBucketSize=<integer>` enables you to adjust the interval slider to provide a more granular or smoother, more aggregated view of the chart.
+* The `multiChartSameScale=<true/false>` key-value pair enables the same Y-axis scale across terms within an optional parameter.
+* The `timeBucketUnit=<Unit>&timeBucketSize=<integer>` enables you to adjust the interval slider to provide a more granular or smoother, more aggregated view of the chart.
* The `timezoneOffset=<integer>` parameter enables you to set the timezone for the chart to be viewed in as an offset to UTC. | Pair(s) | Description |
time-series-insights Time Series Insights Send Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-send-events.md
Title: 'Send events to an environment - Azure Time Series Insights | Microsoft D
description: Learn how to configure an event hub, run a sample application, and send events to your Azure Time Series Insights environment. ---++++ ms.devlang: csharp
time-series-insights Time Series Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-quickstart.md
Title: 'Quickstart: Azure Time Series Insights Explorer - Azure Time Series Insights | Microsoft Docs' description: Learn how to get started with the Azure Time Series Insights Explorer. Visualize large volumes of IoT data and tour key features of your environment.-+ ---++++
time-series-insights Tutorial Create Populate Tsi Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/tutorial-create-populate-tsi-environment.md
Title: 'Tutorial: Create an environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to create an Azure Time Series Insights environment that's populated with data from simulated devices. ---++++ Last updated 10/01/2020
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/install-office-on-wvd-master-image.md
Here's how to install OneDrive in per-machine mode:
Azure Virtual Desktop doesn't support Skype for Business.
-For help with installing Microsoft Teams, see [Use Microsoft Teams on Azure Virtual desktop](teams-on-wvd.md). Media optimization for Microsoft Teams on Azure Virtual Desktop is available in preview.
+For help with installing Microsoft Teams, see [Use Microsoft Teams on Azure Virtual Desktop](teams-on-wvd.md).
## Next steps
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-redundancy.md
az vmss create -g $rgName \
--data-disk-sizes-gb 128 \ --storage-sku os=$osDiskSku 0=$dataDiskSku ```
+# [Azure PowerShell](#tab/azure-powershell)
++
+#### Prerequisites
+
+You must enable the feature for your subscription. Use the following steps to enable the feature for your subscription:
+
+1. Execute the following command to register the feature for your subscription:
+
+ ```powershell
+ Register-AzProviderFeature -FeatureName "SsdZrsManagedDisks" -ProviderNamespace "Microsoft.Compute"
+ ```
+
+2. Confirm that the registration state is **Registered** (it may take a few minutes) using the following command before trying out the feature. A small polling sketch follows these steps.
+
+ ```powershell
+ Get-AzProviderFeature -FeatureName "SsdZrsManagedDisks" -ProviderNamespace "Microsoft.Compute"
+ ```
+
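Registration can take a few minutes. As a minimal sketch (reusing the same cmdlet as step 2), you could poll until the state reports **Registered**:

```powershell
# Wait until the SsdZrsManagedDisks feature registration completes.
while ((Get-AzProviderFeature -FeatureName "SsdZrsManagedDisks" `
        -ProviderNamespace "Microsoft.Compute").RegistrationState -ne "Registered") {
    Start-Sleep -Seconds 30
}
```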
+#### Create a VM with ZRS disks
+
+```powershell
+$subscriptionId="yourSubscriptionId"
+$vmLocalAdminUser = "yourAdminUserName"
+$vmLocalAdminSecurePassword = ConvertTo-SecureString "yourVMPassword" -AsPlainText -Force
+$location = "westus2"
+$rgName = "yourResourceGroupName"
+$vmName = "yourVMName"
+$vmSize = "Standard_DS2_v2"
+$osDiskSku = "StandardSSD_ZRS"
+$dataDiskSku = "Premium_ZRS"
++
+Connect-AzAccount
+
+Set-AzContext -Subscription $subscriptionId
+
+$subnet = New-AzVirtualNetworkSubnetConfig -Name $($vmName+"_subnet") `
+ -AddressPrefix "10.0.0.0/24"
+
+$vnet = New-AzVirtualNetwork -Name $($vmName+"_vnet") `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -AddressPrefix "10.0.0.0/16" `
+ -Subnet $subnet
+
+$nic = New-AzNetworkInterface -Name $($vmName+"_nic") `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -SubnetId $vnet.Subnets[0].Id
+
+
+$vm = New-AzVMConfig -VMName $vmName `
+ -VMSize $vmSize
+
+
+$credential = New-Object System.Management.Automation.PSCredential ($vmLocalAdminUser, $vmLocalAdminSecurePassword);
+
+$vm = Set-AzVMOperatingSystem -VM $vm `
+ -ComputerName $vmName `
+ -Windows `
+ -Credential $credential
+
+$vm = Add-AzVMNetworkInterface -VM $vm -Id $NIC.Id
+
+$vm = Set-AzVMSourceImage -VM $vm `
+ -PublisherName 'MicrosoftWindowsServer' `
+ -Offer 'WindowsServer' `
+ -Skus '2012-R2-Datacenter' `
+ -Version latest
++
+$vm = Set-AzVMOSDisk -VM $vm `
+ -Name $($vmName +"_OSDisk") `
+ -CreateOption FromImage `
+ -StorageAccountType $osDiskSku
+
+$vm = Add-AzVMDataDisk -VM $vm `
+ -Name $($vmName +"_DataDisk1") `
+ -DiskSizeInGB 128 `
+ -StorageAccountType $dataDiskSku `
+ -CreateOption Empty -Lun 0
+
+New-AzVM -ResourceGroupName $rgName `
+ -Location $location `
+ -VM $vm -Verbose
+```
+
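As an optional check (a sketch assuming the variables above are still in scope), you can list the disks in the resource group and confirm that the ZRS SKUs were applied:

```powershell
# Confirm the OS and data disks were created with the expected ZRS SKUs.
Get-AzDisk -ResourceGroupName $rgName |
    Select-Object Name, DiskSizeGB, @{ Name = 'Sku'; Expression = { $_.Sku.Name } }
```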
+#### Create VMs with a shared ZRS disk attached to the VMs in different zones
+
+```powershell
+$location = "westus2"
+$rgName = "yourResourceGroupName"
+$vmNamePrefix = "yourVMPrefix"
+$vmSize = "Standard_DS2_v2"
+$sharedDiskName = "yourSharedDiskName"
+$sharedDataDiskSku = "Premium_ZRS"
+$vmLocalAdminUser = "yourVMAdminUserName"
+$vmLocalAdminSecurePassword = ConvertTo-SecureString "yourPassword" -AsPlainText -Force
++
+$datadiskconfig = New-AzDiskConfig -Location $location `
+ -DiskSizeGB 1024 `
+ -AccountType $sharedDataDiskSku `
+ -CreateOption Empty `
+                     -MaxSharesCount 2
+
+$sharedDisk=New-AzDisk -ResourceGroupName $rgName `
+ -DiskName $sharedDiskName `
+ -Disk $datadiskconfig
+
+$credential = New-Object System.Management.Automation.PSCredential ($vmLocalAdminUser, $vmLocalAdminSecurePassword);
+
+$vm1 = New-AzVm `
+ -ResourceGroupName $rgName `
+ -Name $($vmNamePrefix+"01") `
+ -Zone 1 `
+ -Location $location `
+ -Size $vmSize `
+ -VirtualNetworkName $($vmNamePrefix+"_vnet") `
+ -SubnetName $($vmNamePrefix+"_subnet") `
+ -SecurityGroupName $($vmNamePrefix+"01_sg") `
+ -PublicIpAddressName $($vmNamePrefix+"01_ip") `
+ -Credential $credential `
+ -OpenPorts 80,3389
++
+$vm1 = Add-AzVMDataDisk -VM $vm1 -Name $sharedDiskName -CreateOption Attach -ManagedDiskId $sharedDisk.Id -Lun 0
+
+Update-AzVM -VM $vm1 -ResourceGroupName $rgName
+
+$vm2 = New-AzVm `
+ -ResourceGroupName $rgName `
+ -Name $($vmNamePrefix+"02") `
+ -Zone 2 `
+ -Location $location `
+ -Size $vmSize `
+ -VirtualNetworkName $($vmNamePrefix+"_vnet") `
+ -SubnetName ($vmNamePrefix+"_subnet") `
+ -SecurityGroupName $($vmNamePrefix+"02_sg") `
+ -PublicIpAddressName $($vmNamePrefix+"02_ip") `
+ -Credential $credential `
+ -OpenPorts 80,3389
++
+$vm2 = Add-AzVMDataDisk -VM $vm2 -Name $sharedDiskName -CreateOption Attach -ManagedDiskId $sharedDisk.Id -Lun 0
+
+Update-AzVM -VM $vm2 -ResourceGroupName $rgName
+```
+
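To verify that the shared ZRS disk ended up attached to both VMs, one option (a sketch that assumes the `ManagedByExtended` property, which lists the VMs attached to a shared disk) is:

```powershell
# List the resource IDs of every VM the shared disk is attached to.
$disk = Get-AzDisk -ResourceGroupName $rgName -DiskName $sharedDiskName
$disk.ManagedByExtended
```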
+#### Create a virtual machine scale set with ZRS Disks
+```powershell
+$vmLocalAdminUser = "yourLocalAdminUser"
+$vmLocalAdminSecurePassword = ConvertTo-SecureString "yourVMPassword" -AsPlainText -Force
+$location = "westus2"
+$rgName = "yourResourceGroupName"
+$vmScaleSetName = "yourScaleSetName"
+$vmSize = "Standard_DS3_v2"
+$osDiskSku = "StandardSSD_ZRS"
+$dataDiskSku = "Premium_ZRS"
+
+
+$subnet = New-AzVirtualNetworkSubnetConfig -Name $($vmScaleSetName+"_subnet") `
+ -AddressPrefix "10.0.0.0/24"
+
+$vnet = New-AzVirtualNetwork -Name $($vmScaleSetName+"_vnet") `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -AddressPrefix "10.0.0.0/16" `
+ -Subnet $subnet
+
+$ipConfig = New-AzVmssIpConfig -Name "myIPConfig" `
+ -SubnetId $vnet.Subnets[0].Id
++
+$vmss = New-AzVmssConfig -Location $location `
+ -SkuCapacity 2 `
+ -SkuName $vmSize `
+ -UpgradePolicyMode 'Automatic'
+
+$vmss = Add-AzVmssNetworkInterfaceConfiguration -Name "myVMSSNetworkConfig" `
+ -VirtualMachineScaleSet $vmss `
+ -Primary $true `
+ -IpConfiguration $ipConfig
+
+$vmss = Set-AzVmssStorageProfile $vmss -OsDiskCreateOption "FromImage" `
+ -ImageReferenceOffer 'WindowsServer' `
+ -ImageReferenceSku '2012-R2-Datacenter' `
+ -ImageReferenceVersion latest `
+ -ImageReferencePublisher 'MicrosoftWindowsServer' `
+ -ManagedDisk $osDiskSku
+
+$vmss = Set-AzVmssOsProfile $vmss -ComputerNamePrefix $vmScaleSetName `
+ -AdminUsername $vmLocalAdminUser `
+ -AdminPassword $vmLocalAdminSecurePassword
+
+$vmss = Add-AzVmssDataDisk -VirtualMachineScaleSet $vmss `
+ -CreateOption Empty `
+ -Lun 1 `
+ -DiskSizeGB 128 `
+ -StorageAccountType $dataDiskSku
+
+New-AzVmss -VirtualMachineScaleSet $vmss `
+ -ResourceGroupName $rgName `
+ -VMScaleSetName $vmScaleSetName
+```
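As an optional follow-up (a sketch, assuming the variables above are still in scope), you can retrieve the scale set model and confirm the storage profile uses the ZRS SKUs:

```powershell
# Inspect the OS disk and data disk storage account types on the scale set model.
$createdVmss = Get-AzVmss -ResourceGroupName $rgName -VMScaleSetName $vmScaleSetName
$createdVmss.VirtualMachineProfile.StorageProfile.OsDisk.ManagedDisk.StorageAccountType
$createdVmss.VirtualMachineProfile.StorageProfile.DataDisks |
    Select-Object Lun, DiskSizeGB, @{ Name = 'Sku'; Expression = { $_.ManagedDisk.StorageAccountType } }
```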
# [Resource Manager Template](#tab/azure-resource-manager)
virtual-machines Maintenance Control Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-control-cli.md
az maintenance update list \
## Apply updates
-Use `az maintenance apply update` to apply pending updates. On success, this command will return JSON containing the details of the update.
+Use `az maintenance apply update` to apply pending updates. On success, this command returns JSON containing the details of the update. Apply update calls can take up to 2 hours to complete.
### Isolated VM
virtual-machines Maintenance Control Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-control-portal.md
You can also check a specific host using **Virtual Machines** or properties of t
## Apply updates
-You can apply pending updates on demand using **Virtual Machines**. On the VM details, click **Maintenance** and click **Apply maintenance now**.
+You can apply pending updates on demand. On the VM or Azure Dedicated Host details, click **Maintenance**, and then click **Apply maintenance now**. Apply update calls can take up to 2 hours to complete.
![Screenshot showing how to apply pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-apply-updates-now.png)
virtual-machines Maintenance Control Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-control-powershell.md
Get-AzMaintenanceUpdate `
## Apply updates
-Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates.
+Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates. Apply update calls can take up to 2 hours to complete.
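For example, a call for a single VM might look like the following sketch; the resource names are placeholders, not values from this article:

```powershell
# Apply pending maintenance updates to a VM (the call can take up to 2 hours).
New-AzApplyUpdate `
   -ResourceGroupName "myResourceGroup" `
   -ResourceName "myVM" `
   -ResourceType "virtualMachines" `
   -ProviderName "Microsoft.Compute"
```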
### Isolated VM
virtual-machines Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-control.md
Maintenance control lets you decide when to apply updates to your isolated VMs a
With maintenance control, you can: - Batch updates into one update package. - Wait up to 35 days to apply updates. -- Automate platform updates by configuring a maintenance schedule or by using [Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler).
+- Automate platform updates by configuring a maintenance schedule.
- Maintenance configurations work across subscriptions and resource groups. ## Limitations - VMs must be on a [dedicated host](./dedicated-hosts.md), or be created using an [isolated VM size](isolation.md).-- If a maintenance schedule is declared,it must be for minimum 2 hours.
+- The maintenance window duration must be at least 2 hours. The maintenance window is measured from when you initiate the update to when it completes.
- After 35 days, an update will automatically be applied. - User must have **Resource Contributor** access.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 06/09/2021 Last updated : 06/17/2021
In this section, you can find information in how to configure SSO with most of t
In this section, you find documents about Microsoft Power BI integration into SAP data sources as well as Azure Data Factory integration into SAP BW. ## Change Log
+- June 17, 2021: Change in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to remove the `meta` keyword from the HANA resource creation command (RHEL 8.x)
- June 09, 2021: Correct VM SKU names for M192_v2 in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - May 26, 2021: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to add configuration to prepare the OS for running HANA on ANF - May 13, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to clarify how resource agent azure-events operates
virtual-machines Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-high-availability-rhel.md
vm-linux Previously updated : 04/12/2021 Last updated : 06/17/2021
op start timeout=3600 op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \ op monitor interval=59 role="Master" timeout=700 \ op promote timeout=3600 op demote timeout=3600 \
-promotable meta notify=true clone-max=2 clone-node-max=1 interleave=true
+promotable notify=true clone-max=2 clone-node-max=1 interleave=true
sudo pcs resource create vip_<b>HN1</b>_<b>03</b> IPaddr2 ip="<b>10.0.0.13</b>" sudo pcs resource create nc_<b>HN1</b>_<b>03</b> azure-lb port=625<b>03</b>
virtual-machines Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/troubleshooting-monitoring.md
Title: Monitoring of SAP HANA on Azure (Large Instances) | Microsoft Docs
-description: Monitor SAP HANA on an Azure (Large Instances).
+ Title: Monitoring SAP HANA on Azure (Large Instances) | Microsoft Docs
+description: Learn about monitoring SAP HANA on Azure (Large Instances).
documentationcenter:
vm-linux Previously updated : 09/10/2018-- Last updated : 6/17/2021++
+ - H1Hack27Feb2017
+ - contperf-fy21q4
-# How to monitor SAP HANA (large instances) on Azure
+# Monitor SAP HANA (Large Instances) on Azure
-SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment ΓÇö you need to monitor what the OS and the application is doing and how the applications consume the following resources:
+In this article, we'll look at monitoring SAP HANA Large Instances on Azure (otherwise known as BareMetal Infrastructure).
+
+SAP HANA on Azure (Large Instances) is no different from any other IaaS deployment. Monitoring the operating system and application is important. You'll want to know how the applications consume the following resources:
- CPU - Memory - Network bandwidth - Disk space
-With Azure Virtual Machines, you need to figure out whether the resource classes named above are sufficient or they get depleted. Here is more detail on each of the different classes:
+Monitor your SAP HANA Large Instances to see whether the above resources are sufficient or whether they're being depleted. The following sections give more detail on each of these resources.
+
+## CPU resource consumption
+
+SAP defines a maximum threshold of CPU use for the SAP HANA workload. Staying within this threshold ensures you have enough CPU resources to work through the data stored in memory. High CPU consumption can happen when SAP HANA services execute queries because of missing indexes or similar issues. So monitoring CPU consumption of the HANA Large Instance and CPU consumption of specific HANA services is critical.
+
+## Memory consumption
+
+It's important to monitor memory consumption both within HANA and outside of HANA on the SAP HANA Large Instance. Monitor how the data is consuming HANA-allocated memory so you can stay within the sizing guidelines of SAP. Monitor memory consumption on the Large Instance to make sure non-HANA software doesn't consume too much memory. You don't want non-HANA software competing with HANA for memory.
-**CPU resource consumption:** The ratio that SAP defined for certain workload against HANA is enforced to make sure that there should be enough CPU resources available to work through the data that is stored in memory. Nevertheless, there might be cases where HANA consumes many CPUs executing queries due to missing indexes or similar issues. This means you should monitor CPU resource consumption of the HANA large instance unit as well as CPU resources consumed by the specific HANA services.
+## Network bandwidth
-**Memory consumption:** Is important to monitor from within HANA, as well as outside of HANA on the unit. Within HANA, monitor how the data is consuming HANA allocated memory in order to stay within the required sizing guidelines of SAP. You also want to monitor memory consumption on the Large Instance level to make sure that additional installed non-HANA software does not consume too much memory, and therefore compete with HANA for memory.
+The bandwidth of the Azure Virtual Network (VNet) gateway is limited. Only so much data can move into the Azure VNet. Monitor the data received by all Azure VMs within a VNet. This way you'll know when you're nearing the limits of the Azure gateway SKU you selected. It also makes sense to monitor incoming and outgoing network traffic on the HANA Large Instance to track the volumes handled over time.
-**Network bandwidth:** The Azure VNet gateway is limited in bandwidth of data moving into the Azure VNet, so it is helpful to monitor the data received by all the Azure VMs within a VNet to figure out how close you are to the limits of the Azure gateway SKU you selected. On the HANA Large Instance unit, it does make sense to monitor incoming and outgoing network traffic as well, and to keep track of the volumes that are handled over time.
+## Disk space
-**Disk space:** Disk space consumption usually increases over time. Most common causes are: data volume increases, execution of transaction log backups, storing trace files, and performing storage snapshots. Therefore, it is important to monitor disk space usage and manage the disk space associated with the HANA Large Instance unit.
+Disk space consumption usually increases over time. Common causes include:
+- Data volume increases over time
+- Execution of transaction log backups
+- Storing trace files
+- Taking storage snapshots
+
+So it's important to monitor disk space usage and manage the disk space associated with the HANA Large Instance.
+
+## Preloaded system diagnostic tools
+
+For the **Type II SKUs** of the HANA Large Instances, the server comes preloaded with system diagnostic tools. You can use these tools to run a system health check.
+
+Run the following command to generate the health check log file at /var/log/health_check.
-For the **Type II SKUs** of the HANA Large Instances, the server comes with the preloaded system diagnostic tools. You can utilize these diagnostic tools to perform the system health check.
-Run the following command to generates the health check log file at /var/log/health_check.
``` /opt/sgi/health_check/microsoft_tdi.sh ```
-When you work with the Microsoft Support team to troubleshoot an issue, you may also be asked to provide the log files by using these diagnostic tools. You can zip the file using the following command.
+When you work with the Microsoft Support team to troubleshoot an issue, you may be asked to provide the log files generated by these diagnostic tools. You can compress them using this command:
+ ``` tar -czvf health_check_logs.tar.gz /var/log/health_check ```
-**Next steps**
+## Next steps
+
+Learn how to monitor and troubleshoot from within SAP HANA.
-- Refer [How to monitor SAP HANA (large instances) on Azure](./hana-monitor-troubleshoot.md).
+> [!div class="nextstepaction"]
+> [Monitoring and troubleshooting from HANA side](hana-monitor-troubleshoot.md)